Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-08-28 Thread Neil Armstrong
On 28/08/2019 13:49, Robin Murphy wrote:
> Hi Neil,
> 
> On 28/08/2019 12:28, Neil Armstrong wrote:
>> Hi Robin,
>>

[...]
>>>
>>> OK - with the 32-bit hack pointed to up-thread, a quick kmscube test gave 
>>> me the impression that T720 works fine, but on closer inspection some parts 
>>> of glmark2 do seem to go a bit wonky (although I suspect at least some of 
>>> it is just down to the FPGA setup being both very slow and lacking in 
>>> memory bandwidth), and the "nv12-1img" mode of kmscube turns out to render 
>>> in some delightfully wrong colours.
>>>
>>> I'll try to get a 'proper' version of the io-pgtable patch posted soon.
>>
>> I'm trying to collect all the fixes needed to make T820 work again, and
>> I was wondering if you finally have a proper patch for this and the
>> "cfg->ias > 48" hack? Or one I can test?
> 
> I do have a handful of io-pgtable patches written up and ready to go, I'm 
> just treading carefully and waiting for the internal approval box to be 
> ticked before I share anything :(

Great!

No problem, it can totally wait until approval.

Thanks,
Neil

> 
> Robin.
> 
>>
>> Thanks,
>> Neil
>>
>>>
>>> Thanks,
>>> Robin.
>>>

 Cheers,

 Tomeu

> Robin.
>
>
> ->8-
> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
> index 546968d8a349..f29da6e8dc08 100644
> --- a/drivers/iommu/io-pgtable-arm.c
> +++ b/drivers/iommu/io-pgtable-arm.c
> @@ -1023,12 +1023,14 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
>   iop = arm_64_lpae_alloc_pgtable_s1(cfg, cookie);
>   if (iop) {
>   u64 mair, ttbr;
> +   struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(&iop->ops);
>
> +   data->levels = 4;
>   /* Copy values as union fields overlap */
>   mair = cfg->arm_lpae_s1_cfg.mair[0];
>   ttbr = cfg->arm_lpae_s1_cfg.ttbr[0];
>
> ___
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>


Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-08-28 Thread Robin Murphy

Hi Neil,

On 28/08/2019 12:28, Neil Armstrong wrote:

Hi Robin,

On 31/05/2019 15:47, Robin Murphy wrote:

On 31/05/2019 13:04, Tomeu Vizoso wrote:

On Wed, 29 May 2019 at 19:38, Robin Murphy  wrote:


On 29/05/2019 16:09, Tomeu Vizoso wrote:

On Tue, 21 May 2019 at 18:11, Clément Péron  wrote:



[snip]

[  345.204813] panfrost 180.gpu: mmu irq status=1
[  345.209617] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
0x02400400


   From what I can see here, 0x02400400 points to the first byte
of the first submitted job descriptor.

So mapping buffers for the GPU doesn't seem to be working at all on
64-bit T-760.

Steven, Robin, do you have any idea of why this could be?


I tried rolling back to the old panfrost/nondrm shim, and it works fine
with kbase, and I also found that T-820 falls over in the exact same
manner, so the fact that it seemed to be common to the smaller 33-bit
designs rather than anything to do with the other
job_descriptor_size/v4/v5 complication turned out to be telling.


Is this complication something you can explain? I don't know what v4
and v5 mean here.


I was alluding to BASE_HW_FEATURE_V4, which I believe refers to the Midgard architecture version - 
the older versions implemented by T6xx and T720 seem to be collectively treated as "v4", 
while T760 and T8xx would effectively be "v5".


[ as an aside, are 64-bit jobs actually known not to work on v4 GPUs, or
is it just that nobody's yet observed a 64-bit blob driving one? ]


I'm looking right now at getting Panfrost working on T720 with 64-bit
descriptors, with the ultimate goal of making Panfrost
64-bit-descriptor only so we can have a single build of Mesa in
distros.


Cool, I'll keep an eye out, and hope that it might be enough for T620 on Juno, 
too :)


Long story short, it appears that 'Mali LPAE' is also lacking the start
level notion of VMSA, and expects a full 4-level table even for <40 bits
when level 0 is effectively redundant. Thus walking the 3-level table that
io-pgtable comes back with ends up going wildly wrong. The hack below
seems to do the job for me; if Clément can confirm (on T-720 you'll
still need the userspace hack to force 32-bit jobs as well) then I think
I'll cook up a proper refactoring of the allocator to put things right.


Mmaps seem to work with this patch, thanks.

The main complication I'm facing right now seems to be that the SFBD
descriptor on T720 seems to be different from the one we already had
(tested on T6xx?).


OK - with the 32-bit hack pointed to up-thread, a quick kmscube test gave me the 
impression that T720 works fine, but on closer inspection some parts of glmark2 do seem 
to go a bit wonky (although I suspect at least some of it is just down to the FPGA setup 
being both very slow and lacking in memory bandwidth), and the "nv12-1img" mode 
of kmscube turns out to render in some delightfully wrong colours.

I'll try to get a 'proper' version of the io-pgtable patch posted soon.


I'm trying to collect all the fixes needed to make T820 work again, and
I was wondering if you finally have a proper patch for this and the
"cfg->ias > 48" hack? Or one I can test?


I do have a handful of io-pgtable patches written up and ready to go, 
I'm just treading carefully and waiting for the internal approval box to 
be ticked before I share anything :(


Robin.



Thanks,
Neil



Thanks,
Robin.



Cheers,

Tomeu


Robin.


->8-
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 546968d8a349..f29da6e8dc08 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -1023,12 +1023,14 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
  iop = arm_64_lpae_alloc_pgtable_s1(cfg, cookie);
  if (iop) {
  u64 mair, ttbr;
+   struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(&iop->ops);

+   data->levels = 4;
  /* Copy values as union fields overlap */
  mair = cfg->arm_lpae_s1_cfg.mair[0];
  ttbr = cfg->arm_lpae_s1_cfg.ttbr[0];





Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-08-28 Thread Neil Armstrong
Hi Robin,

On 31/05/2019 15:47, Robin Murphy wrote:
> On 31/05/2019 13:04, Tomeu Vizoso wrote:
>> On Wed, 29 May 2019 at 19:38, Robin Murphy  wrote:
>>>
>>> On 29/05/2019 16:09, Tomeu Vizoso wrote:
 On Tue, 21 May 2019 at 18:11, Clément Péron  wrote:
>
 [snip]
> [  345.204813] panfrost 180.gpu: mmu irq status=1
> [  345.209617] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
> 0x02400400

   From what I can see here, 0x02400400 points to the first byte
 of the first submitted job descriptor.

 So mapping buffers for the GPU doesn't seem to be working at all on
 64-bit T-760.

 Steven, Robin, do you have any idea of why this could be?
>>>
>>> I tried rolling back to the old panfrost/nondrm shim, and it works fine
>>> with kbase, and I also found that T-820 falls over in the exact same
>>> manner, so the fact that it seemed to be common to the smaller 33-bit
>>> designs rather than anything to do with the other
>>> job_descriptor_size/v4/v5 complication turned out to be telling.
>>
>> Is this complication something you can explain? I don't know what v4
>> and v5 mean here.
> 
> I was alluding to BASE_HW_FEATURE_V4, which I believe refers to the Midgard 
> architecture version - the older versions implemented by T6xx and T720 seem 
> to be collectively treated as "v4", while T760 and T8xx would effectively be 
> "v5".
> 
>>> [ as an aside, are 64-bit jobs actually known not to work on v4 GPUs, or
>>> is it just that nobody's yet observed a 64-bit blob driving one? ]
>>
>> I'm looking right now at getting Panfrost working on T720 with 64-bit
>> descriptors, with the ultimate goal of making Panfrost
>> 64-bit-descriptor only so we can have a single build of Mesa in
>> distros.
> 
> Cool, I'll keep an eye out, and hope that it might be enough for T620 on 
> Juno, too :)
> 
>>> Long story short, it appears that 'Mali LPAE' is also lacking the start
>>> level notion of VMSA, and expects a full 4-level table even for <40 bits
>>> when level 0 is effectively redundant. Thus walking the 3-level table that
>>> io-pgtable comes back with ends up going wildly wrong. The hack below
>>> seems to do the job for me; if Clément can confirm (on T-720 you'll
>>> still need the userspace hack to force 32-bit jobs as well) then I think
>>> I'll cook up a proper refactoring of the allocator to put things right.
>>
>> Mmaps seem to work with this patch, thanks.
>>
>> The main complication I'm facing right now seems to be that the SFBD
>> descriptor on T720 seems to be different from the one we already had
>> (tested on T6xx?).
> 
> OK - with the 32-bit hack pointed to up-thread, a quick kmscube test gave me 
> the impression that T720 works fine, but on closer inspection some parts of 
> glmark2 do seem to go a bit wonky (although I suspect at least some of it is 
> just down to the FPGA setup being both very slow and lacking in memory 
> bandwidth), and the "nv12-1img" mode of kmscube turns out to render in some 
> delightfully wrong colours.
> 
> I'll try to get a 'proper' version of the io-pgtable patch posted soon.

I'm trying to collect all the fixes needed to make T820 work again, and
I was wondering if you finally have a proper patch for this and the
"cfg->ias > 48" hack? Or one I can test?

Thanks,
Neil

> 
> Thanks,
> Robin.
> 
>>
>> Cheers,
>>
>> Tomeu
>>
>>> Robin.
>>>
>>>
>>> ->8-
>>> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
>>> index 546968d8a349..f29da6e8dc08 100644
>>> --- a/drivers/iommu/io-pgtable-arm.c
>>> +++ b/drivers/iommu/io-pgtable-arm.c
>>> @@ -1023,12 +1023,14 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
>>>  iop = arm_64_lpae_alloc_pgtable_s1(cfg, cookie);
>>>  if (iop) {
>>>  u64 mair, ttbr;
>>> +   struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(&iop->ops);
>>>
>>> +   data->levels = 4;
>>>  /* Copy values as union fields overlap */
>>>  mair = cfg->arm_lpae_s1_cfg.mair[0];
>>>  ttbr = cfg->arm_lpae_s1_cfg.ttbr[0];
>>>


Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-06-10 Thread Tomeu Vizoso
On Wed, 29 May 2019 at 19:38, Robin Murphy  wrote:
>
> On 29/05/2019 16:09, Tomeu Vizoso wrote:
> > On Tue, 21 May 2019 at 18:11, Clément Péron  wrote:
> >>
> > [snip]
> >> [  345.204813] panfrost 180.gpu: mmu irq status=1
> >> [  345.209617] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
> >> 0x02400400
> >
> >  From what I can see here, 0x02400400 points to the first byte
> > of the first submitted job descriptor.
> >
> > So mapping buffers for the GPU doesn't seem to be working at all on
> > 64-bit T-760.
> >
> > Steven, Robin, do you have any idea of why this could be?
>
> I tried rolling back to the old panfrost/nondrm shim, and it works fine
> with kbase, and I also found that T-820 falls over in the exact same
> manner, so the fact that it seemed to be common to the smaller 33-bit
> designs rather than anything to do with the other
> job_descriptor_size/v4/v5 complication turned out to be telling.
>
> [ as an aside, are 64-bit jobs actually known not to work on v4 GPUs, or
> is it just that nobody's yet observed a 64-bit blob driving one? ]

Do you know if 64-bit descriptors work on v4 GPUs with our kernel
driver but with the DDK?

Wonder if there something else to be fixed in the kernel for that scenario.

Thanks,

Tomeu

> Long story short, it appears that 'Mali LPAE' is also lacking the start
> level notion of VMSA, and expects a full 4-level table even for <40 bits
> when level 0 is effectively redundant. Thus walking the 3-level table that
> io-pgtable comes back with ends up going wildly wrong. The hack below
> seems to do the job for me; if Clément can confirm (on T-720 you'll
> still need the userspace hack to force 32-bit jobs as well) then I think
> I'll cook up a proper refactoring of the allocator to put things right.
>
> Robin.
>
>
> ->8-
> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
> index 546968d8a349..f29da6e8dc08 100644
> --- a/drivers/iommu/io-pgtable-arm.c
> +++ b/drivers/iommu/io-pgtable-arm.c
> @@ -1023,12 +1023,14 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
> iop = arm_64_lpae_alloc_pgtable_s1(cfg, cookie);
> if (iop) {
> u64 mair, ttbr;
> +   struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(&iop->ops);
>
> +   data->levels = 4;
> /* Copy values as union fields overlap */
> mair = cfg->arm_lpae_s1_cfg.mair[0];
> ttbr = cfg->arm_lpae_s1_cfg.ttbr[0];
>


Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-06-03 Thread Clément Péron
Hi Maxime, Joerg,

On Wed, 22 May 2019 at 21:27, Rob Herring  wrote:
>
> On Tue, May 21, 2019 at 11:11 AM Clément Péron  wrote:
> >
> > Hi,
> >
> > The Allwinner H6 has a Mali-T720 MP2 which should be supported by
> > the new panfrost driver. This series fixes two issues and introduces the
> > dt-bindings, but a simple benchmark shows that it's still NOT WORKING.
> >
> > I'm pushing it in case someone wants to continue the work.
> >
> > This has been tested with Mesa3D 19.1.0-RC2 and a GPU bitness patch[1].
> >
> > One patch is from Icenowy Zheng where I changed the order as required
> > by Rob Herring[2].
> >
> > Thanks,
> > Clement
> >
> > [1] https://gitlab.freedesktop.org/kszaq/mesa/tree/panfrost_64_32
> > [2] https://patchwork.kernel.org/patch/10699829/
> >
> >
> > [  345.204813] panfrost 180.gpu: mmu irq status=1
> > [  345.209617] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
> > 0x02400400
> > [  345.209617] Reason: TODO
> > [  345.209617] raw fault status: 0x800002C1
> > [  345.209617] decoded fault status: SLAVE FAULT
> > [  345.209617] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> > [  345.209617] access type 0x2: READ
> > [  345.209617] source id 0x8000
> > [  345.729957] panfrost 180.gpu: gpu sched timeout, js=0,
> > status=0x8, head=0x2400400, tail=0x2400400, sched_job=9e204de9
> > [  346.055876] panfrost 180.gpu: mmu irq status=1
> > [  346.060680] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
> > 0x02C00A00
> > [  346.060680] Reason: TODO
> > [  346.060680] raw fault status: 0x810002C1
> > [  346.060680] decoded fault status: SLAVE FAULT
> > [  346.060680] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> > [  346.060680] access type 0x2: READ
> > [  346.060680] source id 0x8100
> > [  346.561955] panfrost 180.gpu: gpu sched timeout, js=1,
> > status=0x8, head=0x2c00a00, tail=0x2c00a00, sched_job=b55a9a85
> > [  346.573913] panfrost 180.gpu: mmu irq status=1
> > [  346.578707] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
> > 0x02C00B80
> >
> > Change in v5:
> >  - Remove fix indent
> >
> > Changes in v4:
> >  - Add bus_clock probe
> >  - Fix sanity check in io-pgtable
> >  - Add vramp-delay
> >  - Merge all boards into one patch
> >  - Remove upstreamed Neil A. patch
> >
> > Change in v3 (Thanks to Maxime Ripard):
> >  - Reauthor Icenowy for her patch
> >
> > Changes in v2 (Thanks to Maxime Ripard):
> >  - Drop GPU OPP Table
> >  - Add clocks and clock-names in required
> >
> > Clément Péron (5):
> >   drm: panfrost: add optional bus_clock
> >   iommu: io-pgtable: fix sanity check for non 48-bit mali iommu
> >   dt-bindings: gpu: mali-midgard: Add H6 mali gpu compatible
> >   arm64: dts: allwinner: Add ARM Mali GPU node for H6
> >   arm64: dts: allwinner: Add mali GPU supply for H6 boards
> >
> > Icenowy Zheng (1):
> >   dt-bindings: gpu: add bus clock for Mali Midgard GPUs
>
> I've applied patches 1 and 3 to drm-misc. I was going to do patch 4
> too, but it doesn't apply.
>
> Patch 2 can go in via the iommu tree and the rest via the allwinner tree.

Is this OK for you to pick up this series?

Thanks,
Clément

>
> Rob


Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-05-31 Thread Robin Murphy

On 31/05/2019 13:04, Tomeu Vizoso wrote:

On Wed, 29 May 2019 at 19:38, Robin Murphy  wrote:


On 29/05/2019 16:09, Tomeu Vizoso wrote:

On Tue, 21 May 2019 at 18:11, Clément Péron  wrote:



[snip]

[  345.204813] panfrost 180.gpu: mmu irq status=1
[  345.209617] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
0x02400400


  From what I can see here, 0x02400400 points to the first byte
of the first submitted job descriptor.

So mapping buffers for the GPU doesn't seem to be working at all on
64-bit T-760.

Steven, Robin, do you have any idea of why this could be?


I tried rolling back to the old panfrost/nondrm shim, and it works fine
with kbase, and I also found that T-820 falls over in the exact same
manner, so the fact that it seemed to be common to the smaller 33-bit
designs rather than anything to do with the other
job_descriptor_size/v4/v5 complication turned out to be telling.


Is this complication something you can explain? I don't know what v4
and v5 mean here.


I was alluding to BASE_HW_FEATURE_V4, which I believe refers to the 
Midgard architecture version - the older versions implemented by T6xx 
and T720 seem to be collectively treated as "v4", while T760 and T8xx 
would effectively be "v5".



[ as an aside, are 64-bit jobs actually known not to work on v4 GPUs, or
is it just that nobody's yet observed a 64-bit blob driving one? ]


I'm looking right now at getting Panfrost working on T720 with 64-bit
descriptors, with the ultimate goal of making Panfrost
64-bit-descriptor only so we can have a single build of Mesa in
distros.


Cool, I'll keep an eye out, and hope that it might be enough for T620 on 
Juno, too :)



Long story short, it appears that 'Mali LPAE' is also lacking the start
level notion of VMSA, and expects a full 4-level table even for <40 bits
when level 0 is effectively redundant. Thus walking the 3-level table that
io-pgtable comes back with ends up going wildly wrong. The hack below
seems to do the job for me; if Clément can confirm (on T-720 you'll
still need the userspace hack to force 32-bit jobs as well) then I think
I'll cook up a proper refactoring of the allocator to put things right.


Mmaps seem to work with this patch, thanks.

The main complication I'm facing right now seems to be that the SFBD
descriptor on T720 seems to be different from the one we already had
(tested on T6xx?).


OK - with the 32-bit hack pointed to up-thread, a quick kmscube test 
gave me the impression that T720 works fine, but on closer inspection 
some parts of glmark2 do seem to go a bit wonky (although I suspect at 
least some of it is just down to the FPGA setup being both very slow and 
lacking in memory bandwidth), and the "nv12-1img" mode of kmscube turns 
out to render in some delightfully wrong colours.


I'll try to get a 'proper' version of the io-pgtable patch posted soon.

Thanks,
Robin.



Cheers,

Tomeu


Robin.


->8-
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 546968d8a349..f29da6e8dc08 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -1023,12 +1023,14 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
 iop = arm_64_lpae_alloc_pgtable_s1(cfg, cookie);
 if (iop) {
 u64 mair, ttbr;
+   struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(&iop->ops);

+   data->levels = 4;
 /* Copy values as union fields overlap */
 mair = cfg->arm_lpae_s1_cfg.mair[0];
 ttbr = cfg->arm_lpae_s1_cfg.ttbr[0];



Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-05-31 Thread Tomeu Vizoso
On Wed, 29 May 2019 at 19:38, Robin Murphy  wrote:
>
> On 29/05/2019 16:09, Tomeu Vizoso wrote:
> > On Tue, 21 May 2019 at 18:11, Clément Péron  wrote:
> >>
> > [snip]
> >> [  345.204813] panfrost 180.gpu: mmu irq status=1
> >> [  345.209617] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
> >> 0x02400400
> >
> >  From what I can see here, 0x02400400 points to the first byte
> > of the first submitted job descriptor.
> >
> > So mapping buffers for the GPU doesn't seem to be working at all on
> > 64-bit T-760.
> >
> > Steven, Robin, do you have any idea of why this could be?
>
> I tried rolling back to the old panfrost/nondrm shim, and it works fine
> with kbase, and I also found that T-820 falls over in the exact same
> manner, so the fact that it seemed to be common to the smaller 33-bit
> designs rather than anything to do with the other
> job_descriptor_size/v4/v5 complication turned out to be telling.

Is this complication something you can explain? I don't know what v4
and v5 mean here.

> [ as an aside, are 64-bit jobs actually known not to work on v4 GPUs, or
> is it just that nobody's yet observed a 64-bit blob driving one? ]

I'm looking right now at getting Panfrost working on T720 with 64-bit
descriptors, with the ultimate goal of making Panfrost
64-bit-descriptor only so we can have a single build of Mesa in
distros.

> Long story short, it appears that 'Mali LPAE' is also lacking the start
> level notion of VMSA, and expects a full 4-level table even for <40 bits
> when level 0 is effectively redundant. Thus walking the 3-level table that
> io-pgtable comes back with ends up going wildly wrong. The hack below
> seems to do the job for me; if Clément can confirm (on T-720 you'll
> still need the userspace hack to force 32-bit jobs as well) then I think
> I'll cook up a proper refactoring of the allocator to put things right.

Mmaps seem to work with this patch, thanks.

The main complication I'm facing right now seems to be that the SFBD
descriptor on T720 seems to be different from the one we already had
(tested on T6xx?).

Cheers,

Tomeu

> Robin.
>
>
> ->8-
> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
> index 546968d8a349..f29da6e8dc08 100644
> --- a/drivers/iommu/io-pgtable-arm.c
> +++ b/drivers/iommu/io-pgtable-arm.c
> @@ -1023,12 +1023,14 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
> iop = arm_64_lpae_alloc_pgtable_s1(cfg, cookie);
> if (iop) {
> u64 mair, ttbr;
> +   struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(&iop->ops);
>
> +   data->levels = 4;
> /* Copy values as union fields overlap */
> mair = cfg->arm_lpae_s1_cfg.mair[0];
> ttbr = cfg->arm_lpae_s1_cfg.ttbr[0];
>

Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-05-29 Thread Robin Murphy

On 29/05/2019 16:09, Tomeu Vizoso wrote:

On Tue, 21 May 2019 at 18:11, Clément Péron  wrote:



[snip]

[  345.204813] panfrost 180.gpu: mmu irq status=1
[  345.209617] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
0x02400400


 From what I can see here, 0x02400400 points to the first byte
of the first submitted job descriptor.

So mapping buffers for the GPU doesn't seem to be working at all on
64-bit T-760.

Steven, Robin, do you have any idea of why this could be?


I tried rolling back to the old panfrost/nondrm shim, and it works fine 
with kbase, and I also found that T-820 falls over in the exact same 
manner, so the fact that it seemed to be common to the smaller 33-bit 
designs rather than anything to do with the other 
job_descriptor_size/v4/v5 complication turned out to be telling.


[ as an aside, are 64-bit jobs actually known not to work on v4 GPUs, or 
is it just that nobody's yet observed a 64-bit blob driving one? ]


Long story short, it appears that 'Mali LPAE' is also lacking the start 
level notion of VMSA, and expects a full 4-level table even for <40 bits 
when level 0 is effectively redundant. Thus walking the 3-level table that 
io-pgtable comes back with ends up going wildly wrong. The hack below 
seems to do the job for me; if Clément can confirm (on T-720 you'll 
still need the userspace hack to force 32-bit jobs as well) then I think 
I'll cook up a proper refactoring of the allocator to put things right.


Robin.


->8-
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 546968d8a349..f29da6e8dc08 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -1023,12 +1023,14 @@ arm_mali_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
 iop = arm_64_lpae_alloc_pgtable_s1(cfg, cookie);
 if (iop) {
 u64 mair, ttbr;
+   struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(&iop->ops);

+   data->levels = 4;
 /* Copy values as union fields overlap */
 mair = cfg->arm_lpae_s1_cfg.mair[0];
 ttbr = cfg->arm_lpae_s1_cfg.ttbr[0];


Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-05-29 Thread Tomeu Vizoso
On Tue, 21 May 2019 at 18:11, Clément Péron  wrote:
>
[snip]
> [  345.204813] panfrost 180.gpu: mmu irq status=1
> [  345.209617] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
> 0x02400400

From what I can see here, 0x02400400 points to the first byte
of the first submitted job descriptor.

So mapping buffers for the GPU doesn't seem to be working at all on
64-bit T-760.

Steven, Robin, do you have any idea of why this could be?

Thanks,

Tomeu


Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-05-27 Thread Clément Péron
Hi Rob,

On Wed, 22 May 2019 at 21:27, Rob Herring  wrote:
>
> On Tue, May 21, 2019 at 11:11 AM Clément Péron  wrote:
> >
> > Hi,
> >
> > The Allwinner H6 has a Mali-T720 MP2 which should be supported by
> > the new panfrost driver. This series fixes two issues and introduces the
> > dt-bindings, but a simple benchmark shows that it's still NOT WORKING.
> >
> > I'm pushing it in case someone wants to continue the work.
> >
> > This has been tested with Mesa3D 19.1.0-RC2 and a GPU bitness patch[1].
> >
> > One patch is from Icenowy Zheng where I changed the order as required
> > by Rob Herring[2].
> >
> > Thanks,
> > Clement
> >
> > [1] https://gitlab.freedesktop.org/kszaq/mesa/tree/panfrost_64_32
> > [2] https://patchwork.kernel.org/patch/10699829/
> >
> >
> > [  345.204813] panfrost 180.gpu: mmu irq status=1
> > [  345.209617] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
> > 0x02400400
> > [  345.209617] Reason: TODO
> > [  345.209617] raw fault status: 0x800002C1
> > [  345.209617] decoded fault status: SLAVE FAULT
> > [  345.209617] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> > [  345.209617] access type 0x2: READ
> > [  345.209617] source id 0x8000
> > [  345.729957] panfrost 180.gpu: gpu sched timeout, js=0,
> > status=0x8, head=0x2400400, tail=0x2400400, sched_job=9e204de9
> > [  346.055876] panfrost 180.gpu: mmu irq status=1
> > [  346.060680] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
> > 0x02C00A00
> > [  346.060680] Reason: TODO
> > [  346.060680] raw fault status: 0x810002C1
> > [  346.060680] decoded fault status: SLAVE FAULT
> > [  346.060680] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> > [  346.060680] access type 0x2: READ
> > [  346.060680] source id 0x8100
> > [  346.561955] panfrost 180.gpu: gpu sched timeout, js=1,
> > status=0x8, head=0x2c00a00, tail=0x2c00a00, sched_job=b55a9a85
> > [  346.573913] panfrost 180.gpu: mmu irq status=1
> > [  346.578707] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
> > 0x02C00B80
> >
> > Change in v5:
> >  - Remove fix indent
> >
> > Changes in v4:
> >  - Add bus_clock probe
> >  - Fix sanity check in io-pgtable
> >  - Add vramp-delay
> >  - Merge all boards into one patch
> >  - Remove upstreamed Neil A. patch
> >
> > Change in v3 (Thanks to Maxime Ripard):
> >  - Reauthor Icenowy for her patch
> >
> > Changes in v2 (Thanks to Maxime Ripard):
> >  - Drop GPU OPP Table
> >  - Add clocks and clock-names in required
> >
> > Clément Péron (5):
> >   drm: panfrost: add optional bus_clock
> >   iommu: io-pgtable: fix sanity check for non 48-bit mali iommu
> >   dt-bindings: gpu: mali-midgard: Add H6 mali gpu compatible
> >   arm64: dts: allwinner: Add ARM Mali GPU node for H6
> >   arm64: dts: allwinner: Add mali GPU supply for H6 boards
> >
> > Icenowy Zheng (1):
> >   dt-bindings: gpu: add bus clock for Mali Midgard GPUs
>
> I've applied patches 1 and 3 to drm-misc. I was going to do patch 4
> too, but it doesn't apply.

Thanks,

I have tried to apply it on drm-misc/for-linux-next but it doesn't apply either.
It looks like commit d5ff1adb3809e2f74a3b57cea2e57c8e37d693c4 is
missing on drm-misc?
https://github.com/torvalds/linux/commit/d5ff1adb3809e2f74a3b57cea2e57c8e37d693c4#diff-c3172f5d421d492ff91d7bb44dd44917

Clément

>
> Patch 2 can go in via the iommu tree and the rest via the allwinner tree.
>
> Rob

Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-05-24 Thread Robin Murphy

On 21/05/2019 17:10, Clément Péron wrote:

Hi,

The Allwinner H6 has a Mali-T720 MP2 which should be supported by
the new panfrost driver. This series fixes two issues and introduces the
dt-bindings, but a simple benchmark shows that it's still NOT WORKING.

I'm pushing it in case someone wants to continue the work.

This has been tested with Mesa3D 19.1.0-RC2 and a GPU bitness patch[1].

One patch is from Icenowy Zheng where I changed the order as required
by Rob Herring[2].

Thanks,
Clement

[1] https://gitlab.freedesktop.org/kszaq/mesa/tree/panfrost_64_32
[2] https://patchwork.kernel.org/patch/10699829/


[  345.204813] panfrost 180.gpu: mmu irq status=1
[  345.209617] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
0x02400400
[  345.209617] Reason: TODO
[  345.209617] raw fault status: 0x800002C1
[  345.209617] decoded fault status: SLAVE FAULT
[  345.209617] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
[  345.209617] access type 0x2: READ
[  345.209617] source id 0x8000
[  345.729957] panfrost 180.gpu: gpu sched timeout, js=0,
status=0x8, head=0x2400400, tail=0x2400400, sched_job=9e204de9
[  346.055876] panfrost 180.gpu: mmu irq status=1
[  346.060680] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
0x02C00A00
[  346.060680] Reason: TODO
[  346.060680] raw fault status: 0x810002C1
[  346.060680] decoded fault status: SLAVE FAULT
[  346.060680] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
[  346.060680] access type 0x2: READ
[  346.060680] source id 0x8100
[  346.561955] panfrost 180.gpu: gpu sched timeout, js=1,
status=0x8, head=0x2c00a00, tail=0x2c00a00, sched_job=b55a9a85
[  346.573913] panfrost 180.gpu: mmu irq status=1
[  346.578707] panfrost 180.gpu: Unhandled Page fault in AS0 at VA
0x02C00B80


FWIW I seem to have reproduced the same behaviour on a different T720 
setup, so this may well be more about the GPU than the platform. There 
doesn't look to be anything obviously wrong with the pagetables, but if 
I can find some more free time I may have a bit more of a poke around.


Robin.



Change in v5:
  - Remove fix indent

Changes in v4:
  - Add bus_clock probe
  - Fix sanity check in io-pgtable
  - Add vramp-delay
  - Merge all boards into one patch
  - Remove upstreamed Neil A. patch

Change in v3 (Thanks to Maxime Ripard):
  - Reauthor Icenowy for her patch

Changes in v2 (Thanks to Maxime Ripard):
  - Drop GPU OPP Table
  - Add clocks and clock-names in required

Clément Péron (5):
   drm: panfrost: add optional bus_clock
   iommu: io-pgtable: fix sanity check for non 48-bit mali iommu
   dt-bindings: gpu: mali-midgard: Add H6 mali gpu compatible
   arm64: dts: allwinner: Add ARM Mali GPU node for H6
   arm64: dts: allwinner: Add mali GPU supply for H6 boards

Icenowy Zheng (1):
   dt-bindings: gpu: add bus clock for Mali Midgard GPUs

  .../bindings/gpu/arm,mali-midgard.txt | 15 -
  .../dts/allwinner/sun50i-h6-beelink-gs1.dts   |  6 +
  .../dts/allwinner/sun50i-h6-orangepi-3.dts|  6 +
  .../dts/allwinner/sun50i-h6-orangepi.dtsi |  6 +
  .../boot/dts/allwinner/sun50i-h6-pine-h64.dts |  6 +
  arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi  | 14 
  drivers/gpu/drm/panfrost/panfrost_device.c| 22 +++
  drivers/gpu/drm/panfrost/panfrost_device.h|  1 +
  drivers/iommu/io-pgtable-arm.c|  2 +-
  9 files changed, 76 insertions(+), 2 deletions(-)


___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-05-22 Thread Rob Herring
On Wed, May 22, 2019 at 2:41 PM Clément Péron  wrote:
>
> Hi Rob,
>
> On Wed, 22 May 2019 at 21:27, Rob Herring  wrote:
> >
> > On Tue, May 21, 2019 at 11:11 AM Clément Péron  wrote:
> > >
> > > Hi,
> > >
> > > The Allwinner H6 has a Mali-T720 MP2 which should be supported by
> > > the new panfrost driver. This series fixes two issues and introduces the
> > > dt-bindings, but a simple benchmark shows that it's still NOT WORKING.
> > >
> > > I'm pushing it in case someone wants to continue the work.
> > >
> > > This has been tested with Mesa3D 19.1.0-RC2 and a GPU bitness patch[1].
> > >
> > > One patch is from Icenowy Zheng where I changed the order as required
> > > by Rob Herring[2].
> > >
> > > Thanks,
> > > Clement
> > >
> > > [1] https://gitlab.freedesktop.org/kszaq/mesa/tree/panfrost_64_32
> > > [2] https://patchwork.kernel.org/patch/10699829/
> > >
> > >
> > > [  345.204813] panfrost 1800000.gpu: mmu irq status=1
> > > [  345.209617] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> > > 0x02400400
> > > [  345.209617] Reason: TODO
> > > [  345.209617] raw fault status: 0x800002C1
> > > [  345.209617] decoded fault status: SLAVE FAULT
> > > [  345.209617] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> > > [  345.209617] access type 0x2: READ
> > > [  345.209617] source id 0x8000
> > > [  345.729957] panfrost 1800000.gpu: gpu sched timeout, js=0,
> > > status=0x8, head=0x2400400, tail=0x2400400, sched_job=9e204de9
> > > [  346.055876] panfrost 1800000.gpu: mmu irq status=1
> > > [  346.060680] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> > > 0x02C00A00
> > > [  346.060680] Reason: TODO
> > > [  346.060680] raw fault status: 0x810002C1
> > > [  346.060680] decoded fault status: SLAVE FAULT
> > > [  346.060680] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> > > [  346.060680] access type 0x2: READ
> > > [  346.060680] source id 0x8100
> > > [  346.561955] panfrost 1800000.gpu: gpu sched timeout, js=1,
> > > status=0x8, head=0x2c00a00, tail=0x2c00a00, sched_job=b55a9a85
> > > [  346.573913] panfrost 1800000.gpu: mmu irq status=1
> > > [  346.578707] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> > > 0x02C00B80
> > >
> > > Change in v5:
> > >  - Remove indent fix
> > >
> > > Changes in v4:
> > >  - Add bus_clock probe
> > >  - Fix sanity check in io-pgtable
> > >  - Add vramp-delay
> > >  - Merge all boards into one patch
> > >  - Remove upstreamed Neil A. patch
> > >
> > > Change in v3 (Thanks to Maxime Ripard):
> > >  - Reauthor Icenowy for her patch
> > >
> > > Changes in v2 (Thanks to Maxime Ripard):
> > >  - Drop GPU OPP Table
> > >  - Add clocks and clock-names in required
> > >
> > > Clément Péron (5):
> > >   drm: panfrost: add optional bus_clock
> > >   iommu: io-pgtable: fix sanity check for non 48-bit mali iommu
> > >   dt-bindings: gpu: mali-midgard: Add H6 mali gpu compatible
> > >   arm64: dts: allwinner: Add ARM Mali GPU node for H6
> > >   arm64: dts: allwinner: Add mali GPU supply for H6 boards
> > >
> > > Icenowy Zheng (1):
> > >   dt-bindings: gpu: add bus clock for Mali Midgard GPUs
> >
> > I've applied patches 1 and 3 to drm-misc. I was going to do patch 4
> > too, but it doesn't apply.
>
> Thanks,
>
> I have tried to apply it on drm-misc/for-linux-next but it doesn't apply either.
> It looks like commit d5ff1adb3809e2f74a3b57cea2e57c8e37d693c4 is
> missing from drm-misc?
> https://github.com/torvalds/linux/commit/d5ff1adb3809e2f74a3b57cea2e57c8e37d693c4#diff-c3172f5d421d492ff91d7bb44dd44917

5.2-rc1 is merged in now and I've applied patch 4.

Rob

Re: [PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-05-22 Thread Rob Herring
On Tue, May 21, 2019 at 11:11 AM Clément Péron  wrote:
>
> Hi,
>
> The Allwinner H6 has a Mali-T720 MP2 which should be supported by
> the new panfrost driver. This series fixes two issues and introduces the
> dt-bindings, but a simple benchmark shows that it's still NOT WORKING.
>
> I'm pushing it in case someone wants to continue the work.
>
> This has been tested with Mesa3D 19.1.0-RC2 and a GPU bitness patch[1].
>
> One patch is from Icenowy Zheng where I changed the order as required
> by Rob Herring[2].
>
> Thanks,
> Clement
>
> [1] https://gitlab.freedesktop.org/kszaq/mesa/tree/panfrost_64_32
> [2] https://patchwork.kernel.org/patch/10699829/
>
>
> [  345.204813] panfrost 1800000.gpu: mmu irq status=1
> [  345.209617] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> 0x02400400
> [  345.209617] Reason: TODO
> [  345.209617] raw fault status: 0x800002C1
> [  345.209617] decoded fault status: SLAVE FAULT
> [  345.209617] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> [  345.209617] access type 0x2: READ
> [  345.209617] source id 0x8000
> [  345.729957] panfrost 1800000.gpu: gpu sched timeout, js=0,
> status=0x8, head=0x2400400, tail=0x2400400, sched_job=9e204de9
> [  346.055876] panfrost 1800000.gpu: mmu irq status=1
> [  346.060680] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> 0x02C00A00
> [  346.060680] Reason: TODO
> [  346.060680] raw fault status: 0x810002C1
> [  346.060680] decoded fault status: SLAVE FAULT
> [  346.060680] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> [  346.060680] access type 0x2: READ
> [  346.060680] source id 0x8100
> [  346.561955] panfrost 1800000.gpu: gpu sched timeout, js=1,
> status=0x8, head=0x2c00a00, tail=0x2c00a00, sched_job=b55a9a85
> [  346.573913] panfrost 1800000.gpu: mmu irq status=1
> [  346.578707] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> 0x02C00B80
>
> Change in v5:
>  - Remove indent fix
>
> Changes in v4:
>  - Add bus_clock probe
>  - Fix sanity check in io-pgtable
>  - Add vramp-delay
>  - Merge all boards into one patch
>  - Remove upstreamed Neil A. patch
>
> Change in v3 (Thanks to Maxime Ripard):
>  - Reauthor Icenowy for her patch
>
> Changes in v2 (Thanks to Maxime Ripard):
>  - Drop GPU OPP Table
>  - Add clocks and clock-names in required
>
> Clément Péron (5):
>   drm: panfrost: add optional bus_clock
>   iommu: io-pgtable: fix sanity check for non 48-bit mali iommu
>   dt-bindings: gpu: mali-midgard: Add H6 mali gpu compatible
>   arm64: dts: allwinner: Add ARM Mali GPU node for H6
>   arm64: dts: allwinner: Add mali GPU supply for H6 boards
>
> Icenowy Zheng (1):
>   dt-bindings: gpu: add bus clock for Mali Midgard GPUs

I've applied patches 1 and 3 to drm-misc. I was going to do patch 4
too, but it doesn't apply.

Patch 2 can go in via the iommu tree and the rest via the allwinner tree.

Rob

[PATCH v6 0/6] Allwinner H6 Mali GPU support

2019-05-22 Thread Clément Péron
Hi,

The Allwinner H6 has a Mali-T720 MP2 which should be supported by
the new panfrost driver. This series fixes two issues and introduces the
dt-bindings, but a simple benchmark shows that it's still NOT WORKING.

I'm pushing it in case someone wants to continue the work.

This has been tested with Mesa3D 19.1.0-RC2 and a GPU bitness patch[1].

One patch is from Icenowy Zheng where I changed the order as required
by Rob Herring[2].

Thanks,
Clement

[1] https://gitlab.freedesktop.org/kszaq/mesa/tree/panfrost_64_32
[2] https://patchwork.kernel.org/patch/10699829/


[  345.204813] panfrost 1800000.gpu: mmu irq status=1
[  345.209617] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
0x02400400
[  345.209617] Reason: TODO
[  345.209617] raw fault status: 0x800002C1
[  345.209617] decoded fault status: SLAVE FAULT
[  345.209617] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
[  345.209617] access type 0x2: READ
[  345.209617] source id 0x8000
[  345.729957] panfrost 1800000.gpu: gpu sched timeout, js=0,
status=0x8, head=0x2400400, tail=0x2400400, sched_job=9e204de9
[  346.055876] panfrost 1800000.gpu: mmu irq status=1
[  346.060680] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
0x02C00A00
[  346.060680] Reason: TODO
[  346.060680] raw fault status: 0x810002C1
[  346.060680] decoded fault status: SLAVE FAULT
[  346.060680] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
[  346.060680] access type 0x2: READ
[  346.060680] source id 0x8100
[  346.561955] panfrost 1800000.gpu: gpu sched timeout, js=1,
status=0x8, head=0x2c00a00, tail=0x2c00a00, sched_job=b55a9a85
[  346.573913] panfrost 1800000.gpu: mmu irq status=1
[  346.578707] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
0x02C00B80

Change in v5:
 - Remove indent fix

Changes in v4:
 - Add bus_clock probe
 - Fix sanity check in io-pgtable
 - Add vramp-delay
 - Merge all boards into one patch
 - Remove upstreamed Neil A. patch

Change in v3 (Thanks to Maxime Ripard):
  - Reauthor Icenowy for her patch

Changes in v2 (Thanks to Maxime Ripard):
 - Drop GPU OPP Table
 - Add clocks and clock-names in required

Clément Péron (5):
  drm: panfrost: add optional bus_clock
  iommu: io-pgtable: fix sanity check for non 48-bit mali iommu
  dt-bindings: gpu: mali-midgard: Add H6 mali gpu compatible
  arm64: dts: allwinner: Add ARM Mali GPU node for H6
  arm64: dts: allwinner: Add mali GPU supply for H6 boards

Icenowy Zheng (1):
  dt-bindings: gpu: add bus clock for Mali Midgard GPUs

 .../bindings/gpu/arm,mali-midgard.txt | 15 -
 .../dts/allwinner/sun50i-h6-beelink-gs1.dts   |  6 +
 .../dts/allwinner/sun50i-h6-orangepi-3.dts|  6 +
 .../dts/allwinner/sun50i-h6-orangepi.dtsi |  6 +
 .../boot/dts/allwinner/sun50i-h6-pine-h64.dts |  6 +
 arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi  | 14 
 drivers/gpu/drm/panfrost/panfrost_device.c| 22 +++
 drivers/gpu/drm/panfrost/panfrost_device.h|  1 +
 drivers/iommu/io-pgtable-arm.c|  2 +-
 9 files changed, 76 insertions(+), 2 deletions(-)

-- 
2.17.1
