Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-08-04 Thread Javier Martinez Canillas
Javier Martinez Canillas  writes:

> Thomas Zimmermann  writes:
>

[...]

>> The reasoning is that userspace should always be in control of the 
>> format (sans that one exception). If userspace wants packed 24-bits it 
>> can support RGB888 explicitly.  For the color-format transformation, 
>> it's better to do that in userspace with SSE-like instructions than in 
>> the kernel.
>>
>> I wasn't super-happy with this, but I think it's a clear statement with 
>> simple rules to follow. I'd prefer that over relaxed rules where each 
>> driver does its own thing.
>>
>
> I wasn't super happy either, but if I remember correctly we were the only
> two who held this point of view, and everyone else there thought that
> drivers shouldn't expose a format that they don't support (other than XR24).
>

I think this unwritten rule should be documented somewhere, so that we
know what the rule really is instead of people making assumptions.

Because my understanding from the discussion was that user-space has no
way to know whether a format is "native" or "fake", and that it shouldn't
pay the performance penalty of format conversions done in the drivers.

But the irony here is that enforcing such a rule in this particular case
would prevent a performance gain. So it seems to me to be the opposite
of what the rule was meant to achieve.

My proposal is to write a doc patch explicitly stating that drivers must
only expose a "virtual" XRGB format for compatibility, and that they are
allowed to drop the alpha channel in the driver if that improves the
performance for a particular device (e.g. slow DMA, as is the case here).

That way, the spirit of the rule would be to make the driver/device as
performant as possible. What do you think ? Does this make sense to you ?
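
As a concrete illustration, the alpha-channel drop being proposed here is
just a per-pixel repack; a minimal sketch (assuming DRM's little-endian
XRGB8888/RGB888 byte layouts, and not taken from any driver - the kernel's
DRM format helpers already provide comparable routines):

#include <stddef.h>
#include <stdint.h>

/*
 * Minimal sketch: convert one line of XRGB8888 pixels to packed RGB888
 * by dropping the unused X/alpha byte. Byte order follows the
 * little-endian DRM formats (memory order B, G, R).
 */
static void xrgb8888_to_rgb888_line(uint8_t *dst, const uint32_t *src,
                                    size_t pixels)
{
        size_t i;

        for (i = 0; i < pixels; i++) {
                uint32_t px = src[i];

                *dst++ = px & 0xff;         /* B */
                *dst++ = (px >> 8) & 0xff;  /* G */
                *dst++ = (px >> 16) & 0xff; /* R */
                /* bits 24-31 (the X/alpha byte) are discarded */
        }
}

Each line shrinks from 4 to 3 bytes per pixel, which is where the saving
on a slow DMA path would come from.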

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat



Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-08-02 Thread Roger Sewell


Thomas, Jocelyn,

I just thought I'd let you know that it occurred to me that, now that I
understood the workings of the mgag200 module better, I might be
able to get an even higher screen resolution than before.

So I relaxed the limits for this chip (which reports unique_rev_id=1) to
those that would have applied if it had had unique_rev_id=3, set the
/etc/X11/xorg.conf.d/20-screen.conf file to contain the stuff below, and
used Settings->Display to pick the 1920x1080 mode - and sure enough I get
a perfect 1920x1080@60Hz screen resolution.

Interesting. Thomas, I owe you thanks for introducing the bug/feature we
have been discussing, which has eventually led to my realising a big
improvement in my display. Thanks also to all those who support open-source
software, without which a user would not be able to tailor his system to
match what he needs so well.

With best wishes,
Roger.



Section "Monitor"
Identifier  "My monitor"
Modeline "1440x900" 106.50  1440 1520 1672 1904  900 903 909 934 -hsync 
+vsync
Modeline "1280x800"  83.50  1280 1352 1480 1680  800 803 809 831 +hsync 
-vsync
Modeline "1280x720"  74.44  1280 1336 1472 1664  720 721 724 746 -hsync 
+vsync
Modeline "1920x1080"  148.50  1920 2008 2056 2200  1080 1084 1089 1125 
+hsync +vsync
EndSection
Section "Screen"
Identifier  "Builtin Default mga Screen 0"
Device  "Builtin Default Device 0"
Monitor "My monitor"
SubSection "Display"
 Virtual 1440 900
 Modes "1440x900"
EndSubSection
EndSection
Section "ServerLayout"
Identifier  "Builtin Default Layout"
Screen  "Builtin Default mga Screen 0"
EndSection
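
(For reference: modelines like the ones above can usually be regenerated
with the cvt or gtf utilities, e.g. "cvt 1440 900 60", although the timings
they print may differ slightly from the standard ones used here; on a
running X session a mode can likewise be defined and forced with
xrandr --newmode, --addmode and --output <output> --mode.)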


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-08-01 Thread Thomas Zimmermann

Hi

Am 01.08.23 um 14:21 schrieb Roger Sewell:

Jocelyn, Thomas,

Jocelyn, your patch works perfectly - thank you.


That's an interesting find. I briefly looked through the code that 
validates the modes, but there's no obvious reason why it would now work.




May I leave it to the two of you to decide what should happen about
propagating this patch ? (I have set out my user's point of view about
it in my email of Fri, 28 Jul 2023 10:11:00 +0100, but obviously my
opinion is not binding on either of you.)


Your proposal of giving userspace the opportunity to override kernel 
validation and program anything is not feasible in practice. It would 
open holes in the security and safety of the system.


Best regards
Thomas



I would be glad to hear what you eventually decide between you.

Many thanks indeed for your help !
Roger.


--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)


OpenPGP_signature
Description: OpenPGP digital signature


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-08-01 Thread Thomas Zimmermann

Hi

Am 01.08.23 um 16:24 schrieb Jocelyn Falempe:

On 01/08/2023 12:25, Thomas Zimmermann wrote:

Hi

Am 01.08.23 um 12:11 schrieb Jocelyn Falempe:

On 28/07/2023 14:12, Roger Sewell wrote:


Thomas, Jocelyn,

JF> I think the culprit is probably this patch:
JF> https://patchwork.freedesktop.org/patch/486242/
JF>
JF> before this patch,
JF> mgag200_simple_display_pipe_mode_valid() always return MODE_OK
JF>
JF> after this patch, it checks the bandwidth limit too.

It turns out to be more complicated than that - and I still think it is
something to do with how the two functions
mgag200_simple_display_pipe_mode_valid and
mgag200_mode_config_mode_valid are made known to the rest of the drm
system, i.e. which slots in the various function structures they are put
in.

I attach a contiguous excerpt from /var/log/messages, recording what
happened when I did the following.

1. I instrumented the old mgag200 module with printk statements in
    mgag200_simple_display_pipe_mode_valid and mga_vga_mode_valid and
    mga_vga_calculate_mode_bandwidth, and also put in a call to the
    latter in mgag200_simple_display_pipe_mode_valid so that I could see
    what parameters it had been called with. I then rebooted the system,
    getting the messages starting at Jul 28 10:42:45 . As you will see,
    almost every time mgag200_simple_display_pipe_mode_valid is called it
    is immediately following a return of MODE_OK from mga_vga_mode_valid
    with the same display parameters - the two exceptions are:

    a) at line 1156 is when it is called after "fbcon: mgag200drmfb (fb0)
   is primary device", and

    b) with the mode actually being set (1440x900) at line 2681 when it
   of course returns MODE_OK (as that is what it always returns, as
   you say).

2. I then switched to the new mgag200 module similarly instrumented, but
    with the unique_rev_id increased by 1 to get sufficient bandwidth to
    make 1440x900 usable. I then rebooted the system, getting the
    messages starting at Jul 28 11:46:53 . Again, almost every time
    mgag200_simple_display_pipe_mode_valid is called it is immediately
    after a return of MODE_OK from mgag200_mode_config_mode_valid, and we
    still have exception type (a) at line 5672. However, the exception
    type (b) is no longer present, as at line 6590, on setting the
    1440x900 mode, there is now a call of mgag200_mode_config_mode_valid
    preceding the call of mgag200_simple_display_pipe_mode_valid.

3. I then modified that mgag200 module by forcing a return of MODE_OK
    from mgag200_simple_display_pipe_mode_valid and removing the
    increment to unique_rev_id, so that 1440x900 is no longer "valid"
    according to the criteria being used. I rebooted, getting the
    messages starting at Jul 28 12:12:34 . Now at line 11004 we have a
    failure to set mode immediately followed by a return of MODE_BAD, not
    from mgag200_simple_display_pipe_mode_valid but from
    mgag200_mode_config_mode_valid.

Thus between the old mgag200 module and the new one, there is a change
in when the mode_config_mode_valid function gets called - there being
one extra call. So even if one were to revert to
mgag200_simple_display_pipe_mode_valid just always returning MODE_OK it
wouldn't fix the problem.

At present I don't see how the change of behaviour can be anything other
than to do with the way these function names are passed to the rest of
the drm system (though of course even if that were reverted, the fact
that mgag200_simple_display_pipe_mode_valid now tests bandwidth would
still cause what I want to do to fail).

Sadly I don't understand how the drm system works, so I'm not sure that
I can shed any more light - but if there are any more experiments that
would help, please do let me know.


I think the issue is that in v5.18, the memory check was done on the 
connector mode_valid() callback, and in v6.0, it's done in the 
mode_config mode_valid() callback.


Can you please try the patch attached, I moved the bandwidth check 
back to the connector callback.


It should not make a difference. I'd be surprised if it does. And in any
case, the bandwidth check belongs in the mode_config test, as it is
a device-wide limit.


It does, and it goes back to the v5.18 behavior, where the "out of spec"
resolutions are not proposed, but you can still force them from userspace.
Also, this claims to be a "bandwidth" limit, but all mgag200 chips use
the PCI bus for the memory clock, so they have the same bandwidth. The limit
is on the pixel clock, which is used only for the VGA DAC, so it's
probably fine to attach this limit to the VGA connector. Of course
mgag200 only has a VGA connector, so it doesn't matter in practice.



FYI I intend to close this bug report as INVALID. The hardware and 
driver work according to the known specs, so there's no bug here.


I still think it's a kernel regression, because a userspace 
configuration that used to work is now broken.


No 

Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-08-01 Thread Jocelyn Falempe

On 01/08/2023 12:25, Thomas Zimmermann wrote:

Hi

Am 01.08.23 um 12:11 schrieb Jocelyn Falempe:

On 28/07/2023 14:12, Roger Sewell wrote:


Thomas, Jocelyn,

JF> I think the culprit is probably this patch:
JF> https://patchwork.freedesktop.org/patch/486242/
JF>
JF> before this patch,
JF> mgag200_simple_display_pipe_mode_valid() always return MODE_OK
JF>
JF> after this patch, it checks the bandwidth limit too.

It turns out to be more complicated than that - and I still think it is
something to do with how the two functions
mgag200_simple_display_pipe_mode_valid and
mgag200_mode_config_mode_valid are made known to the rest of the drm
system, i.e. which slots in the various function structures they are put
in.

I attach a contiguous excerpt from /var/log/messages, recording what
happened when I did the following.

1. I instrumented the old mgag200 module with printk statements in
    mgag200_simple_display_pipe_mode_valid and mga_vga_mode_valid and
    mga_vga_calculate_mode_bandwidth, and also put in a call to the
    latter in mgag200_simple_display_pipe_mode_valid so that I could see
    what parameters it had been called with. I then rebooted the system,
    getting the messages starting at Jul 28 10:42:45 . As you will see,
    almost every time mgag200_simple_display_pipe_mode_valid is called it
    is immediately following a return of MODE_OK from mga_vga_mode_valid
    with the same display parameters - the two exceptions are:

    a) at line 1156 is when it is called after "fbcon: mgag200drmfb (fb0)
   is primary device", and

    b) with the mode actually being set (1440x900) at line 2681 when it
   of course returns MODE_OK (as that is what it always returns, as
   you say).

2. I then switched to the new mgag200 module similarly instrumented, but
    with the unique_rev_id increased by 1 to get sufficient bandwidth to
    make 1440x900 usable. I then rebooted the system, getting the
    messages starting at Jul 28 11:46:53 . Again, almost every time
    mgag200_simple_display_pipe_mode_valid is called it is immediately
    after a return of MODE_OK from mgag200_mode_config_mode_valid, and we
    still have exception type (a) at line 5672. However, the exception
    type (b) is no longer present, as at line 6590, on setting the
    1440x900 mode, there is now a call of mgag200_mode_config_mode_valid
    preceding the call of mgag200_simple_display_pipe_mode_valid.

3. I then modified that mgag200 module by forcing a return of MODE_OK
    from mgag200_simple_display_pipe_mode_valid and removing the
    increment to unique_rev_id, so that 1440x900 is no longer "valid"
    according to the criteria being used. I rebooted, getting the
    messages starting at Jul 28 12:12:34 . Now at line 11004 we have a
    failure to set mode immediately followed by a return of MODE_BAD, not
    from mgag200_simple_display_pipe_mode_valid but from
    mgag200_mode_config_mode_valid.

Thus between the old mgag200 module and the new one, there is a change
in when the mode_config_mode_valid function gets called - there being
one extra call. So even if one were to revert to
mgag200_simple_display_pipe_mode_valid just always returning MODE_OK it
wouldn't fix the problem.

At present I don't see how the change of behaviour can be anything other
than to do with the way these function names are passed to the rest of
the drm system (though of course even if that were reverted, the fact
that mgag200_simple_display_pipe_mode_valid now tests bandwidth would
still cause what I want to do to fail).

Sadly I don't understand how the drm system works, so I'm not sure that
I can shed any more light - but if there are any more experiments that
would help, please do let me know.


I think the issue is that in v5.18, the memory check was done on the 
connector mode_valid() callback, and in v6.0, it's done in the 
mode_config mode_valid() callback.


Can you please try the patch attached, I moved the bandwidth check 
back to the connector callback.


It should not make a difference. I'd be surprised if it does. And in any
case, the bandwidth check belongs in the mode_config test, as it is a
device-wide limit.


It does, and it goes back to the v5.18 behavior, where the "out of spec" 
resolutions are not proposed, but you can still force them from userspace.
Also, this claims to be a "bandwidth" limit, but all mgag200 chips use
the PCI bus for the memory clock, so they have the same bandwidth. The limit
is on the pixel clock, which is used only for the VGA DAC, so it's
probably fine to attach this limit to the VGA connector. Of course
mgag200 only has a VGA connector, so it doesn't matter in practice.
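
To make the two hooks being compared concrete, a simplified sketch follows
(not the actual mgag200 code; the limit value and the check itself are
placeholders chosen for illustration, following the thread's point that the
real constraint is on the pixel clock feeding the VGA DAC):

#include <drm/drm_connector.h>
#include <drm/drm_device.h>
#include <drm/drm_modes.h>

/* Hypothetical limit for illustration only; the real driver derives its
 * limit from the chip revision (unique_rev_id). */
static const int max_pixel_clock_khz = 110000;

/* Connector-level check (the v5.18 behaviour): filters the mode list
 * advertised on this connector, but a mode configured explicitly from
 * userspace can still be programmed. */
static enum drm_mode_status
sketch_vga_connector_mode_valid(struct drm_connector *connector,
				struct drm_display_mode *mode)
{
	if (mode->clock > max_pixel_clock_khz)
		return MODE_BAD;
	return MODE_OK;
}

/* Device-wide check (the v6.0 behaviour): called on every modeset path,
 * so userspace cannot bypass it. */
static enum drm_mode_status
sketch_mode_config_mode_valid(struct drm_device *dev,
			      const struct drm_display_mode *mode)
{
	if (mode->clock > max_pixel_clock_khz)
		return MODE_BAD;
	return MODE_OK;
}

The first callback is wired into drm_connector_helper_funcs.mode_valid, the
second into drm_mode_config_funcs.mode_valid; moving the check from the
latter back to the former is what Jocelyn's patch does.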



FYI I intend to close this bug report as INVALID. The hardware and 
driver work according to the known specs, so there's no bug here.


I still think it's a kernel regression, because a userspace 
configuration that used to work is now broken.


Best regards,

--

Jocelyn




Best regards
Thomas



Best regards,




Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-08-01 Thread Roger Sewell


Jocelyn, Thomas,

Jocelyn, your patch works perfectly - thank you.

May I leave it to the two of you to decide what should happen about
propagating this patch ? (I have set out my user's point of view about
it in my email of Fri, 28 Jul 2023 10:11:00 +0100, but obviously my
opinion is not binding on either of you.) 

I would be glad to hear what you eventually decide between you.

Many thanks indeed for your help !
Roger.


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-08-01 Thread Thomas Zimmermann

Hi

Am 01.08.23 um 12:11 schrieb Jocelyn Falempe:

On 28/07/2023 14:12, Roger Sewell wrote:


Thomas, Jocelyn,

JF> I think the culprit is probably this patch:
JF> https://patchwork.freedesktop.org/patch/486242/
JF>
JF> before this patch,
JF> mgag200_simple_display_pipe_mode_valid() always return MODE_OK
JF>
JF> after this patch, it checks the bandwidth limit too.

It turns out to be more complicated than that - and I still think it is
something to do with how the two functions
mgag200_simple_display_pipe_mode_valid and
mgag200_mode_config_mode_valid are made known to the rest of the drm
system, i.e. which slots in the various function structures they are put
in.

I attach a contiguous excerpt from /var/log/messages, recording what
happened when I did the following.

1. I instrumented the old mgag200 module with printk statements in
    mgag200_simple_display_pipe_mode_valid and mga_vga_mode_valid and
    mga_vga_calculate_mode_bandwidth, and also put in a call to the
    latter in mgag200_simple_display_pipe_mode_valid so that I could see
    what parameters it had been called with. I then rebooted the system,
    getting the messages starting at Jul 28 10:42:45 . As you will see,
    almost every time mgag200_simple_display_pipe_mode_valid is called it
    is immediately following a return of MODE_OK from mga_vga_mode_valid
    with the same display parameters - the two exceptions are:

    a) at line 1156 is when it is called after "fbcon: mgag200drmfb (fb0)
   is primary device", and

    b) with the mode actually being set (1440x900) at line 2681 when it
   of course returns MODE_OK (as that is what it always returns, as
   you say).

2. I then switched to the new mgag200 module similarly instrumented, but
    with the unique_rev_id increased by 1 to get sufficient bandwidth to
    make 1440x900 usable. I then rebooted the system, getting the
    messages starting at Jul 28 11:46:53 . Again, almost every time
    mgag200_simple_display_pipe_mode_valid is called it is immediately
    after a return of MODE_OK from mgag200_mode_config_mode_valid, and we
    still have exception type (a) at line 5672. However, the exception
    type (b) is no longer present, as at line 6590, on setting the
    1440x900 mode, there is now a call of mgag200_mode_config_mode_valid
    preceding the call of mgag200_simple_display_pipe_mode_valid.

3. I then modified that mgag200 module by forcing a return of MODE_OK
    from mgag200_simple_display_pipe_mode_valid and removing the
    increment to unique_rev_id, so that 1440x900 is no longer "valid"
    according to the criteria being used. I rebooted, getting the
    messages starting at Jul 28 12:12:34 . Now at line 11004 we have a
    failure to set mode immediately followed by a return of MODE_BAD, not
    from mgag200_simple_display_pipe_mode_valid but from
    mgag200_mode_config_mode_valid.

Thus between the old mgag200 module and the new one, there is a change
in when the mode_config_mode_valid function gets called - there being
one extra call. So even if one were to revert to
mgag200_simple_display_pipe_mode_valid just always returning MODE_OK it
wouldn't fix the problem.

At present I don't see how the change of behaviour can be anything other
than to do with the way these function names are passed to the rest of
the drm system (though of course even if that were reverted, the fact
that mgag200_simple_display_pipe_mode_valid now tests bandwidth would
still cause what I want to do to fail).

Sadly I don't understand how the drm system works, so I'm not sure that
I can shed any more light - but if there are any more experiments that
would help, please do let me know.


I think the issue is that in v5.18, the memory check was done on the 
connector mode_valid() callback, and in v6.0, it's done in the 
mode_config mode_valid() callback.


Can you please try the patch attached, I moved the bandwidth check back 
to the connector callback.


It should not make a difference. I'd be surprised if it does. And in any
case, the bandwidth check belongs in the mode_config test, as it is a
device-wide limit.


FYI I intend to close this bug report as INVALID. The hardware and 
driver work according to the known specs, so there's no bug here.


Best regards
Thomas



Best regards,



--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)


OpenPGP_signature
Description: OpenPGP digital signature


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-08-01 Thread Jocelyn Falempe

On 28/07/2023 14:12, Roger Sewell wrote:


Thomas, Jocelyn,

JF> I think the culprit is probably this patch:
JF> https://patchwork.freedesktop.org/patch/486242/
JF>
JF> before this patch,
JF> mgag200_simple_display_pipe_mode_valid() always return MODE_OK
JF>
JF> after this patch, it checks the bandwidth limit too.

It turns out to be more complicated than that - and I still think it is
something to do with how the two functions
mgag200_simple_display_pipe_mode_valid and
mgag200_mode_config_mode_valid are made known to the rest of the drm
system, i.e. which slots in the various function structures they are put
in.

I attach a contiguous excerpt from /var/log/messages, recording what
happened when I did the following.

1. I instrumented the old mgag200 module with printk statements in
mgag200_simple_display_pipe_mode_valid and mga_vga_mode_valid and
mga_vga_calculate_mode_bandwidth, and also put in a call to the
latter in mgag200_simple_display_pipe_mode_valid so that I could see
what parameters it had been called with. I then rebooted the system,
getting the messages starting at Jul 28 10:42:45 . As you will see,
almost every time mgag200_simple_display_pipe_mode_valid is called it
is immediately following a return of MODE_OK from mga_vga_mode_valid
with the same display parameters - the two exceptions are:

a) at line 1156 is when it is called after "fbcon: mgag200drmfb (fb0)
   is primary device", and

b) with the mode actually being set (1440x900) at line 2681 when it
   of course returns MODE_OK (as that is what it always returns, as
   you say).

2. I then switched to the new mgag200 module similarly instrumented, but
with the unique_rev_id increased by 1 to get sufficient bandwidth to
make 1440x900 usable. I then rebooted the system, getting the
messages starting at Jul 28 11:46:53 . Again, almost every time
mgag200_simple_display_pipe_mode_valid is called it is immediately
after a return of MODE_OK from mgag200_mode_config_mode_valid, and we
still have exception type (a) at line 5672. However, the exception
type (b) is no longer present, as at line 6590, on setting the
1440x900 mode, there is now a call of mgag200_mode_config_mode_valid
preceding the call of mgag200_simple_display_pipe_mode_valid.

3. I then modified that mgag200 module by forcing a return of MODE_OK
from mgag200_simple_display_pipe_mode_valid and removing the
increment to unique_rev_id, so that 1440x900 is no longer "valid"
according to the criteria being used. I rebooted, getting the
messages starting at Jul 28 12:12:34 . Now at line 11004 we have a
failure to set mode immediately followed by a return of MODE_BAD, not
from mgag200_simple_display_pipe_mode_valid but from
mgag200_mode_config_mode_valid.

Thus between the old mgag200 module and the new one, there is a change
in when the mode_config_mode_valid function gets called - there being
one extra call. So even if one were to revert to
mgag200_simple_display_pipe_mode_valid just always returning MODE_OK it
wouldn't fix the problem.

At present I don't see how the change of behaviour can be anything other
than to do with the way these function names are passed to the rest of
the drm system (though of course even if that were reverted, the fact
that mgag200_simple_display_pipe_mode_valid now tests bandwidth would
still cause what I want to do to fail).

Sadly I don't understand how the drm system works, so I'm not sure that
I can shed any more light - but if there are any more experiments that
would help, please do let me know.
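
For anyone wanting to reproduce this kind of tracing, the instrumentation
described above boils down to a printk at each validation site; an
illustrative helper follows (not Roger's actual patch, and the message
format is made up):

#include <drm/drm_modes.h>
#include <linux/printk.h>

/*
 * Illustrative only: call this just before returning from a mode_valid
 * implementation so that /var/log/messages records which hook saw which
 * mode and what verdict it returned.
 */
static enum drm_mode_status
log_mode_status(const char *hook, const struct drm_display_mode *mode,
		enum drm_mode_status status)
{
	pr_info("mgag200: %s: %dx%d@%d clock=%d kHz -> %d\n",
		hook, mode->hdisplay, mode->vdisplay,
		drm_mode_vrefresh(mode), mode->clock, status);
	return status;
}

Each return site then becomes, for example,
"return log_mode_status(__func__, mode, MODE_OK);".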


I think the issue is that in v5.18, the memory check was done on the 
connector mode_valid() callback, and in v6.0, it's done in the 
mode_config mode_valid() callback.


Can you please try the patch attached, I moved the bandwidth check back 
to the connector callback.


Best regards,

--

Jocelyn


Thanks,
Roger.

From f134c822b11f8e8d16c7f72fe8077c58f9ebb1fd Mon Sep 17 00:00:00 2001
From: Jocelyn Falempe 
Date: Tue, 1 Aug 2023 11:49:14 +0200
Subject: [PATCH] drm/mgag200: move the memory bandwidth check to the connector
 callback

Prior to commit 475e2b970cc3 ("drm/mgag200: Split up connector's mode_valid
helper"), the memory bandwidth check was done on the connector helper.
It then moved to the drm_mode_config_funcs helper, which can't be
bypassed by userspace, so some resolutions (like 1440x900) that
used to work are now broken.

Signed-off-by: Jocelyn Falempe 
---
 drivers/gpu/drm/mgag200/mgag200_mode.c | 83 ++
 1 file changed, 46 insertions(+), 37 deletions(-)

diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 225cca2ed60e..6fd57340d677 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -707,8 +707,54 @@ static int mgag200_vga_connector_helper_get_modes(struct drm_connector *connecto
 	return ret;
 }
 
+/* 

Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-28 Thread Roger Sewell


Jocelyn, Thomas,

JF> I think the culprit is probably this patch:
JF> https://patchwork.freedesktop.org/patch/486242/
JF> 
JF> before this patch,
JF> mgag200_simple_display_pipe_mode_valid() always return MODE_OK
JF> 
JF> after this patch, it checks the bandwidth limit too.

I can easily test this theory, which sounds entirely reasonable - I will
do so and let you know the result.

TZ> I'm not quite sure how to proceed here, as the driver is correct and
TZ> the problem came from a mismatching config file on your system.

In so far as a user's viewpoint is relevant, I would say this: 

A user wants to be able to obtain any mode that actually is usable with
his hardware. He starts from a position of not having any extra config
files in place, and it works. He wants a different mode, so he adds a
config file. If he does that deliberately, then the system should allow
it, even if it goes over some theoretical limit - as if he gets no
video, he will then know exactly why he got no video (and can use the
tty interface after a reboot to remove his config file). (But by all
means put a warning message in the logs.)

What is really bad is for a mode that used to work to stop working with
no explanation - and even to stop working in such a manner that the only
way to get the system back is to force a reboot by removing power rather
than defaulting to a different mode. (But I think that that behaviour
results from userspace not expecting a late rejection of the mode.)

Moreover, any decent system that uses a GUI to allow you to change
screen mode offers you a preview before locking that mode in - and
automatically reverses the change if the user doesn't confirm it.

So I would prefer a mode chosen by a deliberate xorg config file to not
be rejectable by the driver, but just give a warning in the logs.

(But I do understand that you will also need to consider other points of
view.)

I'll confirm the result of the above test shortly.

Thanks,
Roger.


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-28 Thread Jocelyn Falempe

On 28/07/2023 09:45, Thomas Zimmermann wrote:

Hi

Am 27.07.23 um 23:34 schrieb Roger Sewell:


Thomas, Jocelyn,

As a result of the collection of the Xorg logs requested by Thomas, I've
realised that at some long-past point upgrade the 1440x900@60 mode
disappeared, and in order to get it back I introduced the file
/etc/X11/xorg.conf.d/20-screen.conf attached.

If I remove this file, then with the unmodified new mgag200 module in
place, on going to Settings->Display I am no longer even offered the
1440x900 option - presumably as a result of
mgag200_mode_config_mode_valid returning MODE_BAD.

With this file in place, and with the unmodified new mgag200 module in
place, I am offered this setting, but if I pick it then it is not used
but *is* written into ~/.config/monitors.xml, and then on next login
graphics fail completely.


Thanks for this information. If you configure a screen and/or modes via 
config file, such as 20-screen.conf, it overrides Xorg's autodetection. 
So X will happily use whatever you configured. This explains my other 
mail's question why Xorg even tries 1440x900 at all.




Hypothesis: there are two different places in the code where mode_valid
is potentially checked, both of which use the mgag200 module, and the
old module only does this check at one of them (deciding which modes to
offer) (which check is bypassed for modes requested in xorg.conf.d),
while the new module does it at both (and hence fails invalid modes
requested in xorg.conf.d) ??


There's mode detection, which does the general test with mode_valid. It 
returns a list with possible modes. And there's mode setting, which 
tests if the display setting as a whole (mode, color format, outputs, 
and more) can be programmed in the given combination. That's what 
fails with the new driver, but worked with the old driver.


I guess that the configured mode of 1440x900 somehow passed testing in 
the old driver. And as your chip could handle the overclocked pixel-rate 
settings, you got a stable output even with 32-bit colors.


I assume that the new driver is somehow more strict in testing and 
rejects the overclocked pixel rate. (But I cannot see that in the code 
TBH).


I think the culprit is probably this patch:
https://patchwork.freedesktop.org/patch/486242/

before this patch,
mgag200_simple_display_pipe_mode_valid() always return MODE_OK

after this patch, it checks the bandwidth limit too.





Looking for possible reasons why this might be the case (and straying
way beyond my competence), I note that in the old module
mga_vga_mode_valid is made known to other parts of the system in a
different way than mgag200_mode_config_mode_valid is in the new module
(and indeed they have different parameter types). (Might this be
relevant ?)


I'm not quite sure how to proceed here, as the driver is correct and the 
problem came from a mismatching config file on your system.


So the bandwidth limit was not really enforced before, and the more 
strict behavior breaks user configurations that are over the limit.




Best regards
Thomas



Best wishes,
Roger.








Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-28 Thread Thomas Zimmermann

Hi

Am 27.07.23 um 23:34 schrieb Roger Sewell:


Thomas, Jocelyn,

As a result of the collection of the Xorg logs requested by Thomas, I've
realised that at some long-past point upgrade the 1440x900@60 mode
disappeared, and in order to get it back I introduced the file
/etc/X11/xorg.conf.d/20-screen.conf attached.

If I remove this file, then with the unmodified new mgag200 module in
place, on going to Settings->Display I am no longer even offered the
1440x900 option - presumably as a result of
mgag200_mode_config_mode_valid returning MODE_BAD.

With this file in place, and with the unmodified new mgag200 module in
place, I am offered this setting, but if I pick it then it is not used
but *is* written into ~/.config/monitors.xml, and then on next login
graphics fail completely.


Thanks for this information. If you configure a screen and/or modes via 
config file, such as 20-screen.conf, it overrides Xorg's autodetection. 
So X will happily use whatever you configured. This explains my other 
mail's question why Xorg even tries 1440x900 at all.




Hypothesis: there are two different places in the code where mode_valid
is potentially checked, both of which use the mgag200 module, and the
old module only does this check at one of them (deciding which modes to
offer) (which check is bypassed for modes requested in xorg.conf.d),
while the new module does it at both (and hence fails invalid modes
requested in xorg.conf.d) ??


There's mode detection, which does the general test with mode_valid. It 
returns a list with possible modes. And there's mode setting, which 
tests if the display setting as a whole (mode, color format, outputs, 
and more) can be programmed in the given combination. That's what 
fails with the new driver, but worked with the old driver.


I guess that the configured mode of 1440x900 somehow passed testing in 
the old driver. And as your chip could handle the overclocked pixel-rate 
settings, you got a stable output even with 32-bit colors.


I assume that the new driver is somehow more strict in testing and 
rejects the overclocked pixel rate. (But I cannot see that in the code TBH).




Looking for possible reasons why this might be the case (and straying
way beyond my competence), I note that in the old module
mga_vga_mode_valid is made known to other parts of the system in a
different way than mgag200_mode_config_mode_valid is in the new module
(and indeed they have different parameter types). (Might this be
relevant ?)


I'm not quite sure how to proceed here, as the driver is correct and the 
problem came from a mismatching config file on your system.


Best regards
Thomas



Best wishes,
Roger.




--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)


OpenPGP_signature
Description: OpenPGP digital signature


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-28 Thread Roger Sewell

Thomas, Jocelyn,

As a result of the collection of the Xorg logs requested by Thomas, I've
realised that at some long-past point upgrade the 1440x900@60 mode
disappeared, and in order to get it back I introduced the file
/etc/X11/xorg.conf.d/20-screen.conf attached.

If I remove this file, then with the unmodified new mgag200 module in
place, on going to Settings->Display I am no longer even offered the
1440x900 option - presumably as a result of
mgag200_mode_config_mode_valid returning MODE_BAD.

With this file in place, and with the unmodified new mgag200 module in
place, I am offered this setting, but if I pick it then it is not used
but *is* written into ~/.config/monitors.xml, and then on next login
graphics fail completely.

Hypothesis: there are two different places in the code where mode_valid
is potentially checked, both of which use the mgag200 module, and the
old module only does this check at one of them (deciding which modes to
offer) (which check is bypassed for modes requested in xorg.conf.d),
while the new module does it at both (and hence fails invalid modes
requested in xorg.conf.d) ??

Looking for possible reasons why this might be the case (and straying
way beyond my competence), I note that in the old module
mga_vga_mode_valid is made known to other parts of the system in a
different way than mgag200_mode_config_mode_valid is in the new module
(and indeed they have different parameter types). (Might this be
relevant ?)

Best wishes,
Roger.


Section "Monitor"
Identifier  "My monitor"
Modeline "1440x900" 106.50  1440 1520 1672 1904  900 903 909 934 -hsync 
+vsync
Modeline "1280x800"  83.50  1280 1352 1480 1680  800 803 809 831 +hsync 
-vsync
Modeline "1280x720"  74.44  1280 1336 1472 1664  720 721 724 746 -hsync 
+vsync
EndSection
Section "Screen"
Identifier  "Builtin Default mga Screen 0"
Device  "Builtin Default Device 0"
Monitor "My monitor"
SubSection "Display"
 Virtual 1440 900
 Modes "1440x900"
EndSubSection
EndSection
Section "ServerLayout"
Identifier  "Builtin Default Layout"
Screen  "Builtin Default mga Screen 0"
EndSection


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-28 Thread Roger Sewell

Thomas,

> Could you provide us with the Xorg logs from /var/log/Xorg.0.log ,
> please?
> 
> Specifically, I'd be interested in different logs from combinations of
> the old and new kernel and the old and new userspace.

I've managed to do six of the eight possible combinations of kernel,
mgag200 module, and userspace. The attached archive contains all the
corresponding Xorg logs as well as a detailed notes file.

I couldn't do the remaining two combinations because I couldn't get the
new mgag200 module to compile for the old kernel. 

My previous theory was wrong, but hopefully the attached logs will help
you work out why the old kernel permitted the high bandwidth mode I
always use (despite returning MODE_BAD) and the new kernel doesn't.

(The "old" kernel is 9.1, i.e. 5.14.0-162.6.1.el9_1.0.1, and "new"
kernel is 9.2, i.e. 5.14.0-284.18.1.el9_2 .)

Best wishes,
Roger.



zimmermann20230726aresponse.tar.gz
Description: Various log files and notes


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-28 Thread Jocelyn Falempe

On 27/07/2023 19:24, Roger Sewell wrote:


Thomas,


Could you provide us with the Xorg logs from /var/log/Xorg.0.log ,
please?

Specifically, I'd be interested in different logs from combinations of
the old and new kernel and the old and new userspace.


I've managed to do six of the eight possible combinations of kernel,
mgag200 module, and userspace. The attached archive contains all the
corresponding Xorg logs as well as a detailed notes file.


Thanks for running all the tests.


I couldn't do the remaining two combinations because I couldn't get the
new mgag200 module to compile for the old kernel.

My previous theory was wrong, but hopefully the attached logs will help
you work out why the old kernel permitted the high bandwidth mode I
always use (despite returning MODE_BAD) and the new kernel doesn't.

(The "old" kernel is 9.1, i.e. 5.14.0-162.6.1.el9_1.0.1, and "new"


For the drm/mgag200 part, it corresponds to upstream v5.18 + 3 patches 
(which shouldn't impact the mode check):

c48a36301634 drm/mgag200: Optimize damage clips
3064debaf55e drm/mgag200: Add FB_DAMAGE_CLIPS support
c577b2f43e80 drm/mgag200: Enable atomic gamma lut update


kernel is 9.2, i.e. 5.14.0-284.18.1.el9_2 .)


And this is equivalent to v6.0




Best wishes,
Roger.





Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-28 Thread Thomas Zimmermann

Hi

Am 27.07.23 um 19:24 schrieb Roger Sewell:


Thomas,


Could you provide us with the Xorg logs from /var/log/Xorg.0.log ,
please?

Specifically, I'd be interested in different logs from combinations of
the old and new kernel and the old and new userspace.


I've managed to do six of the eight possible combinations of kernel,
mgag200 module, and userspace. The attached archive contains all the
corresponding Xorg logs as well as a detailed notes file.

I couldn't do the remaining two combinations because I couldn't get the
new mgag200 module to compile for the old kernel.

My previous theory was wrong, but hopefully the attached logs will help
you work out why the old kernel permitted the high bandwidth mode I
always use (despite returning MODE_BAD) and the new kernel doesn't.


Thanks for providing all these logs.

But I cannot really figure out why this works in the unmodified old 
kernel. In log 8, I can see an error at time 78.509.  That's the failed 
modeset operation. With the new driver, you get a lower resolution then?
It's not clear to me why X even tried 1440x900 at all. It should not be 
in the list of available modes.


Best regards
Thomas



(The "old" kernel is 9.1, i.e. 5.14.0-162.6.1.el9_1.0.1, and "new"
kernel is 9.2, i.e. 5.14.0-284.18.1.el9_2 .)

Best wishes,
Roger.



--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)


OpenPGP_signature
Description: OpenPGP digital signature


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-27 Thread Javier Martinez Canillas
Thomas Zimmermann  writes:

> Hi Maxime
>
> Am 27.07.23 um 16:33 schrieb Maxime Ripard:
>> Hi Thomas,
>> 
>> On Wed, Jul 26, 2023 at 05:36:15PM +0200, Thomas Zimmermann wrote:
 I've already sent a patch to use internally 24bit colors, when userspace
 can use 32bit that would solve this issue as well. In the end, on the
 VGA link, 24 or 32 bit colors are the same. That would allow modern
 userspace to work on par with Xorg. So maybe we can reconsider it ?
>>>
>>> No way. We've had IRC flamewars over this topic already. People didn't get
>>> work done that day; others ragequit-ed in frustration. Ask Javier, he'll
>>> remember. :)
>>>
>>> The consensus in DRM-land is: we're not going to mess with color formats
>>> behind the back of userspace. The only exception is the emulation of
>>> XRGB iff the hardware does not support that. And only because it's the
>>> fallback format that is universally supported.
>> 
>> I'm aware that I might be reviving old heated debates here, but I'm not
>> sure what is the problem here.
>> 
>> I guess part of the problem is that due to the memcpy call, we would
>> have it in software.
>> 
>> But it's not clear to me how we're messing with color formats there: the
>> memcpy would drop the alpha part that was going to be dropped by the
>> hardware anyway (and likely would have done so transparently if it
>> wasn't for the memcpy).
>> 
>> It doesn't affect the userspace at all, it doesn't change the way we
>> interpret the format either, it just ignores a part of the format that
>> is supposed to be ignored anyway. And doing so frees up a bunch of
>> resources?
>> 
>> So, AFAIU, it doesn't have any side effect except for the fact that it
>> consumes less memory bandwidth and allows for more pixels to go through.
>> That's not really "messing with the formats" in my book.

I agree with you Maxime, and I believe Thomas does too, but the consensus
when that topic was (heatedly) discussed was that drivers should not fake
a pixel format, even if it is only about dropping the alpha channel, as in
the patch that Jocelyn mentioned.

>
> Technically not, but it's still a difference. Even in such cases without 
> side effects, it was dismissed when discussed in the context of simpledrm.
>

Indeed. I remember that discussion.

> The reasoning is that userspace should always be in control of the 
> format (sans that one exception). If userspace wants packed 24-bits it 
> can support RGB888 explicitly.  For the color-format transformation, 
> it's better to do that in userspace with SSE-like instructions than in 
> the kernel.
>
> I wasn't super-happy with this, but I think it's a clear statement with 
> simple rules to follow. I'd prefer that over relaxed rules where each 
> driver does its own thing.
>

I wasn't super happy either, but if I remember correctly we were the only
two who held this point of view, and everyone else there thought that
drivers shouldn't expose a format that they don't support (other than XR24).

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat



Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-27 Thread Thomas Zimmermann

Hi Maxime

Am 27.07.23 um 16:33 schrieb Maxime Ripard:

Hi Thomas,

On Wed, Jul 26, 2023 at 05:36:15PM +0200, Thomas Zimmermann wrote:

I've already sent a patch to use internally 24bit colors, when userspace
can use 32bit that would solve this issue as well. In the end, on the
VGA link, 24 or 32 bit colors are the same. That would allow modern
userspace to work on par with Xorg. So maybe we can reconsider it ?


No way. We've had IRC flamewars over this topic already. People didn't get
work done that day; others ragequit-ed in frustration. Ask Javier, he'll
remember. :)

The consensus in DRM-land is: we're not going to mess with color formats
behind the back of userspace. The only exception is the emulation of
XRGB iff the hardware does not support that. And only because it's the
fallback format that is universally supported.


I'm aware that I might be reviving old heated debates here, but I'm not
sure what is the problem here.

I guess part of the problem is that due to the memcpy call, we would
have it in software.

But it's not clear to me how we're messing with color formats there: the
memcpy would drop the alpha part that was going to be dropped by the
hardware anyway (and likely would have done so transparently if it
wasn't for the memcpy).

It doesn't affect the userspace at all, it doesn't change the way we
interpret the format either, it just ignores a part of the format that
is supposed to be ignored anyway. And doing so frees up a bunch of
resources?

So, AFAIU, it doesn't have any side effect except for the fact that it
consumes less memory bandwidth and allows for more pixels to go through.
That's not really "messing with the formats" in my book.


Technically not, but it's still a difference. Even in such cases without 
side effects, it was dismissed when discussed in the context of simpledrm.


The reasoning is that userspace should always be in control of the 
format (sans that one exception). If userspace wants packed 24-bits it 
can support RGB888 explicitly.  For the color-format transformation, 
it's better to do that in userspace with SSE-like instructions than in 
the kernel.


I wasn't super-happy with this, but I think it's a clear statement with 
simple rules to follow. I'd prefer that over relaxed rules where each 
driver does its own thing.


Best regards
Thomas



Maxime


--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)


OpenPGP_signature
Description: OpenPGP digital signature


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-27 Thread Maxime Ripard
Hi Thomas,

On Wed, Jul 26, 2023 at 05:36:15PM +0200, Thomas Zimmermann wrote:
> > I've already sent a patch to use internally 24bit colors, when userspace
> > can use 32bit that would solve this issue as well. In the end, on the
> > VGA link, 24 or 32 bit colors are the same. That would allow modern
> > userspace to work on par with Xorg. So maybe we can reconsider it ?
> 
> No way. We've had IRC flamewars over this topic already. People didn't get
> work done that day; others ragequit-ed in frustration. Ask Javier, he'll
> remember. :)
> 
> The consensus in DRM-land is: we're not going to mess with color formats
> behind the back of userspace. The only exception is the emulation of
> XRGB iff the hardware does not support that. And only because it's the
> fallback format that is universally supported.

I'm aware that I might be reviving old heated debates here, but I'm not
sure what is the problem here.

I guess part of the problem is that due to the memcpy call, we would
have it in software.

But it's not clear to me how we're messing with color formats there: the
memcpy would drop the alpha part that was going to be dropped by the
hardware anyway (and likely would have done so transparently if it
wasn't for the memcpy).

It doesn't affect the userspace at all, it doesn't change the way we
interpret the format either, it just ignores a part of the format that
is supposed to be ignored anyway. And doing so frees up a bunch of
resources?

So, AFAIU, it doesn't have any side effect except for the fact that it
consumes less memory bandwidth and allows for more pixels to go through.
That's not really "messing with the formats" in my book.

Maxime


signature.asc
Description: PGP signature


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-27 Thread Roger Sewell


Jocelyn,

>> How can I check for sure whether I am using 24 or 32 bits per pixel
>> ?
> 
> The easiest solution if you already rebuilt your kernel is to print
> the variable bpp here:
> 
> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/mgag200/mgag200_mode.c#L326
> 
> pr_info("mgag200 bpp %d\n", bpp);

Did that, and I get "mgag200 bpp 32".

> Also if you can run Wayland at 1440x900@60, that would mean the
> hardware is indeed able to handle it in 32bit.

I can indeed run Wayland at 1440x900@60.

So I think I'm right in saying that my graphics chip can handle
1440x900@60 with 32 bits per pixel.

However, while searching /var/log/messages for the bpp output above, I
also found these two lines:

/usr/libexec/gdm-x-session[2366]: (==) modeset(0): Depth 24, (==) framebuffer bpp 32
/usr/libexec/gdm-x-session[2366]: (==) modeset(0): RGB weight 888

in case it makes a difference.

Thanks,
Roger.


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-26 Thread Thomas Zimmermann

Hi

Am 25.07.23 um 23:31 schrieb Roger Sewell:

Thomas,

Thank you for your reply.


Table 10 in the following document says that  1440x900@60Hz is
supported:
https://www.intel.com/content/dam/support/us/en/documents/motherboards/server/s5520ur/sb/e44031012_s5520ur_s5520urt_tps_r1_9.pdf


That manual says that the resolution is only supported with at most
24-bit colors. The old X code still supports that to some extent, but
modern userspace doesn't.


How can I check for sure whether I am using 24 or 32 bits per pixel ?


Could you provide us with the Xorg logs from /var/log/Xorg.0.log , please?

Specifically, I'd be interested in different logs from combinations of 
the old and new kernel and the old and new userspace.


Best regards
Thomas




so in particular the chip is said to be a G200e, not the G200SE-A
that the kernel module seems to be interpreting it as.



It actually is the G200SE-A. It's just named differently by lspci. The
PCI device id should be 0x0522.


It is indeed 0x0522.


So the old kernel already did the right thing.


(but if I understand the situation right Xorg didn't ?)


You can do

   cat /sys/class/drm/card1-VGA-1/modes

on the old and new kernel. With a monitor connector, it will tell you
the supported resolutions.


With the new kernel and the unmodified mgag200 module it tells me that
1440x900 is not on the list - but Xorg still lists it under
Settings->Display (and Wayland doesn't). With the increased bandwidth
module in place it tells me 1440x900 is on the list. I think this is as
you expect as far as the kernel rather than Xorg is concerned.

Thanks,
Roger.


--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)


OpenPGP_signature
Description: OpenPGP digital signature


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-26 Thread Thomas Zimmermann

Hi Jocelyn

Am 26.07.23 um 10:10 schrieb Jocelyn Falempe:
[...]

So the old kernel already did the right thing.



In the *new* code the nearest-named function I could see is
mgag200_mode_config_mode_valid, which returns MODE_OK at the end of the
function if the bandwidth limit is increased to 30100, and returns
MODE_BAD three lines higher up if it is left at 24400.


Nothing has changed in the new kernel.


That's not completely true: if I understand correctly, changing only
the kernel, with the same userspace, breaks the 1440x900 resolution
(even if MODE_BAD is returned in both cases).


But how so? My understanding is that the kernel's behavior does not 
change. Only userspace. I'm rather puzzled about the details here. Maybe 
the old Xorg uses an entirely different driver, such as the old Xorg 
userspace code.








Moreover, when using the old code, if we switch to Wayland instead of
Xorg it doesn't let me pick the 1440x900@60Hz mode at all, but it does
with Xorg (one of the reasons I hadn't used Wayland).


You can do

   cat /sys/class/drm/card1-VGA-1/modes

on the old and new kernel. With a monitor connector, it will tell you 
the supported resolutions.




Therefore I think the reason that the old code allowed use of
1440x900@60Hz was that Xorg somehow didn't properly check the return
value from mga_vga_mode_valid, but Wayland did. Moreover I think that
the latest version of the Xorg stuff does PARTIALLY check that return
value, to the extent that it won't let you actually use that mode, but
does nonetheless present it as a choice when you go to Settings->Display
- and then saves the values it didn't allow you to take in
~/.config/monitors.xml, and on relogin refuses to give you any graphics
at all because it doesn't like those values. But that, of course, is
nothing to do with the mgag200 driver (if it is indeed true - I haven't
looked at the Xorg source code at all).

The issue from the point of view of my usage case is that the chip works
just fine in the 1440x900@60Hz mode, even though 30318019 > 1024*24400.


I don't want to increase that limit in the driver, as it might have 
consequences for a lot of other hardware. And if you assume that 
30318019 * 3 / 4 ~= 22738514 , 24-bit color mode fits into the current 
limit.
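
Spelled out with the figures quoted in this thread: the rev1 limit is
24400 * 1024 = 24985600, the 1440x900 mode comes out at 30318019 with
32 bpp (over the limit), while at 24 bpp it would be
30318019 * 3 / 4 = 22738514, which is under it.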


There are no public hardware specifications, so it's pretty hard to know
whether the 24400 limit is legitimate or not. At least one piece of
hardware is able to exceed it.


That limit has been in the driver since the dawn of time. I'm not going 
to tinker with it.



https://cgit.freedesktop.org/xorg/driver/xf86-video-mga/tree/src/mga_driver.c#n3890



I've already sent a patch to use 24-bit colors internally while userspace
uses 32-bit; that would solve this issue as well. In the end, on the
VGA link, 24- and 32-bit colors are the same. That would allow modern
userspace to work on par with Xorg. So maybe we can reconsider it ?


No way. We've had IRC flamewars over this topic already. People didn't 
get work done that day; others ragequit-ed in frustration. Ask Javier, 
he'll remember. :)


The consensus in DRM-land is: we're not going to mess with color formats 
behind the back of userspace. The only exception is the emulation of 
XRGB iff the hardware does not support that. And only because it's 
the fallback format that is universally supported.



https://patchwork.freedesktop.org/series/116381/
I would need to update the bandwidth test to accommodate 24-bit.



Jocelyn, should we attempt to make extra resolutions available for 16-
and 24-bit modes? We could do the bandwidth test in the primary plane's
atomic_check, where we know the resolution and the color format. The
general mode test would use bpp=8. IDK how userspace reacts to that,
so it would require some testing.
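
A rough sketch of that idea (not actual mgag200 code; mgag200_mode_bandwidth()
and max_bandwidth_kb are hypothetical placeholders standing in for the
driver's real bandwidth helper and limit):

#include <drm/drm_atomic.h>
#include <drm/drm_framebuffer.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_modes.h>
#include <drm/drm_plane.h>
#include <linux/errno.h>

/* Hypothetical placeholders: the driver's limit and bandwidth helper. */
static const unsigned long max_bandwidth_kb = 24400;
unsigned long mgag200_mode_bandwidth(const struct drm_display_mode *mode,
				     unsigned int bpp);

/* Sketch of a per-format bandwidth check in the primary plane's
 * atomic_check, where both the mode and the framebuffer format (and thus
 * the real bytes per pixel) are known. */
static int sketch_primary_plane_atomic_check(struct drm_plane *plane,
					     struct drm_atomic_state *state)
{
	struct drm_plane_state *new_state =
		drm_atomic_get_new_plane_state(state, plane);
	struct drm_crtc_state *crtc_state;
	unsigned int cpp;

	if (!new_state->crtc || !new_state->fb)
		return 0; /* plane is being disabled, nothing to check */

	crtc_state = drm_atomic_get_new_crtc_state(state, new_state->crtc);
	cpp = new_state->fb->format->cpp[0]; /* bytes per pixel of the format */

	if (mgag200_mode_bandwidth(&crtc_state->mode, cpp * 8) >
	    max_bandwidth_kb * 1024)
		return -EINVAL; /* this mode/format combination exceeds the limit */

	return 0;
}

The general mode_valid test would then only reject modes that cannot work
even at the smallest supported format, and the final per-format decision
would happen here.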


That will only work for old userspace (Xorg) able to do 16- or 24-bit
modes. Also, the driver doesn't do 8-bit, so 16-bit is the minimum, and can
be used in the general mode test. I can still give it a try.


Mesa still supports 16-bits IIRC. But 24-bit pixels are unaligned, which 
makes it hard.  I'm worried about autodetection here. If the userspace 
fails with 1440x900-32, it needs to test 1440x900-16 next. I wouldn't 
bet on this.


Best regards
Thomas



Best regards,



--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)


OpenPGP_signature
Description: OpenPGP digital signature


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-26 Thread Thomas Zimmermann

Hi

Am 26.07.23 um 12:11 schrieb Roger Sewell:

Jocelyn,


How can I check for sure whether I am using 24 or 32 bits per pixel
?


The easiest solution if you already rebuilt your kernel is to print
the variable bpp here:

https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/mgag200/mgag200_mode.c#L326

pr_info("mgag200 bpp %d\n", bpp);


Did that, and I get "mgag200 bpp 32".


Also if you can run Wayland at 1440x900@60, that would mean the
hardware is indeed able to handle it in 32bit.


I can indeed run Wayland at 1440x900@60.

So I think I'm right in saying that my graphics chip can handle
1440x900@60 with 32 bits per pixel.


Of course. You modified that limit, so the pixel rate remains below.

Best regards
Thomas



However, while searching /var/log/messages for the bpp output above, I
also found these two lines:

/usr/libexec/gdm-x-session[2366]: (==) modeset(0): Depth 24, (==) framebuffer bpp 32
/usr/libexec/gdm-x-session[2366]: (==) modeset(0): RGB weight 888

in case it makes a difference.

Thanks,
Roger.


--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)


OpenPGP_signature
Description: OpenPGP digital signature


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-26 Thread Jocelyn Falempe

On 26/07/2023 12:11, Roger Sewell wrote:


Jocelyn,


How can I check for sure whether I am using 24 or 32 bits per pixel
?


The easiest solution if you already rebuilt your kernel is to print
the variable bpp here:

https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/mgag200/mgag200_mode.c#L326

pr_info("mgag200 bpp %d\n", bpp);


Did that, and I get "mgag200 bpp 32".


Also if you can run Wayland at 1440x900@60, that would mean the
hardware is indeed able to handle it in 32bit.


I can indeed run Wayland at 1440x900@60.

So I think I'm right in saying that my graphics chip can handle
1440x900@60 with 32 bits per pixel.


Thanks for confirming that. So either the specification is a bit too 
conservative, or your specific hardware is above average.


However, while searching /var/log/messages for the bpp output above, I
also found these two lines:

/usr/libexec/gdm-x-session[2366]: (==) modeset(0): Depth 24, (==) framebuffer bpp 32
/usr/libexec/gdm-x-session[2366]: (==) modeset(0): RGB weight 888


Yes, in 32 bits only 24 are actual color; the remaining 8 bits are 
discarded. With newer monitors you can have 10 bits per color, but that 
would still be 32 bits per pixel. So in your case the color depth is 24 
and bits_per_pixel is 32, which is the most common configuration.
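
Purely as an illustration of the depth-24 vs. 32-bpp distinction, outside
the driver: an XRGB8888 pixel occupies four bytes, but only three of them
carry color.

#include <stdint.h>

/* Pack 8-bit R/G/B into one XRGB8888 pixel: bits 23-0 hold the color,
 * bits 31-24 ("X") are padding and carry no information. */
static uint32_t pack_xrgb8888(uint8_t r, uint8_t g, uint8_t b)
{
	return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}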


in case it makes a difference.

Thanks,
Roger.





Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-26 Thread Jocelyn Falempe

On 25/07/2023 23:31, Roger Sewell wrote:


Thomas,

Thank you for your reply.


Table 10 in the following document says that  1440x900@60Hz is
supported:
https://www.intel.com/content/dam/support/us/en/documents/motherboards/server/s5520ur/sb/e44031012_s5520ur_s5520urt_tps_r1_9.pdf


That manual says that the resolution is only supported with at most
24-bit colors. The old X code still supports that to some extent, but
modern userspace doesn't.


How can I check for sure whether I am using 24 or 32 bits per pixel ?


The easiest solution if you already rebuilt your kernel is to print the 
variable bpp here:


https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/mgag200/mgag200_mode.c#L326

pr_info("mgag200 bpp %d\n", bpp);

Also if you can run Wayland at 1440x900@60, that would mean the hardware 
is indeed able to handle it in 32bit.





so in particular the chip is said to be a G200e, not the G200SE-A
that the kernel module seems to be interpreting it as.



It actually is the G200SE-A. It's just named differently by lspci. The
PCI device id should be 0x0522.


It is indeed 0x0522.


So the old kernel already did the right thing.


(but if I understand the situation right Xorg didn't ?)


You can do

   cat /sys/class/drm/card1-VGA-1/modes

on the old and new kernel. With a monitor connector, it will tell you
the supported resolutions.


With the new kernel and the unmodified mgag200 module it tells me that
1440x900 is not on the list - but Xorg still lists it under
Settings->Display (and Wayland doesn't). With the increased bandwidth
module in place it tells me 1440x900 is on the list. I think this is as
you expect as far as the kernel rather than Xorg is concerned.

Thanks,
Roger.



Best regards,

--

Jocelyn



Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-26 Thread Jocelyn Falempe

On 25/07/2023 21:37, Thomas Zimmermann wrote:

Hi Roger,

thanks for all the information.

Am 24.07.23 um 22:57 schrieb Roger Sewell:

Thomas,

Hello, I'm the user who reported the issue. Definitely happy to help you
sort this out if I can, though my response speed will decrease when term
restarts in October.


I'd be interested in the exact model and the unique_rev_id
(you said A, rev1 ?)


The machine is an Intel SR1625URR server including an S5520UR
motherboard.

Table 10 in the following document says that  1440x900@60Hz is supported:
https://www.intel.com/content/dam/support/us/en/documents/motherboards/server/s5520ur/sb/e44031012_s5520ur_s5520urt_tps_r1_9.pdf


That manual says that the resolution is only supported with at most 
24-bit colors. The old X code still supports that to some extent, but 
modern userspace doesn't.


It's not a Wayland thing, but applications now use Mesa for drawing, 
which doesn't like 24-bit color mode much. Userspace is slowly losing 
the ability to work with anything less than 32-bit colors.




lspci -v returns:

07:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) (prog-if 00 [VGA controller])

Subsystem: Intel Corporation Device 0101
Physical Slot: 5
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at b000 (32-bit, prefetchable) [size=16M]
Memory at b180 (32-bit, non-prefetchable) [size=16K]
Memory at b100 (32-bit, non-prefetchable) [size=8M]
Expansion ROM at 000c [disabled] [size=128K]
Capabilities: 
Kernel driver in use: mgag200
Kernel modules: mgag200

so in particular the chip is said to be a G200e, not the G200SE-A that
the kernel module seems to be interpreting it as. In the lspci return it


It actually is the G200SE-A. It's just named differently by lspci. The 
PCI device id should be 0x0522.



calls itself "rev 02", but the unique_rev_id returned is 0x01, not 02,
and not 00. (My originally suggested solution was that "rev 02" might
correspond to unique_rev_id=0x01 and that one should add 1 to the
unique_rev_id, but Jocelyn indicated that isn't right.)


That rev 02 is the PCI revision number. Matrox also has another revision 
ID named 'unique_rev_id' in the code. Who knows why...




I instrumented a version of the new code by adding printk statements to
a version of the module embodied in a kmod-mgag200 package and observing
messages in the /var/log/messages file. These tell me that:


and if the early-out branches in mga_vga_calculate_mode_bandwidth()
are being taken.


In the "old" code the options to return 0 are NOT being taken, and the
bandwidth returned is the expected value of 30318019.

In the *new* code the options to return 0 are NOT being taken, and the
bandwidth returned is the expected value of 30318019.


Can you figure out how exactly the CPU moves through
mga_vga_mode_valid().


In the "old" code we enter the true part of the if (IS_G200_SE(mdev)),
then the true part of the if (unique_rev_id == 0x01), then return
MODE_BANDWIDTH (i.e. MODE_BAD) at the third if statement in that block.


So the old kernel already did the right thing.



In the *new* code the nearest-named function I could see is
mgag200_mode_config_mode_valid, which returns MODE_OK at the end of the
function if the bandwidth limit is increased to 30100, and returns
MODE_BAD three lines higher up if it is left at 24400.


Nothing has changed in the new kernel.


That's not completely true: if I understand correctly, changing only 
the kernel, with the same userspace, breaks the 1440x900 resolution 
(even though MODE_BAD is returned in both cases).






Moreover if when using the old code we switch to Wayland instead of
Xorg, it doesn't let me pick the 1440x900@60Hz mode at all, but it does
with Xorg (one of the reasons I hadn't used Wayland).


You can do

   cat /sys/class/drm/card1-VGA-1/modes

on the old and new kernel. With a monitor connector, it will tell you 
the supported resolutions.




Therefore I think the reason that the old code allowed use of
1440x900@60Hz was that Xorg somehow didn't properly check the return
value from mga_vga_mode_valid, but Wayland did. Moreover I think that
the latest version of the Xorg stuff does PARTIALLY check that return
value, to the extent that it won't let you actually use that mode, but
does nonetheless present it as a choice when you go to Settings->Display
- and then saves the values it didn't allow you to take in
~/.config/monitors.xml, and on relogin refuses to give you any graphics
at all because it doesn't like those values. But that, of course, is
nothing to do with the mgag200 driver (if it is indeed true - I haven't
looked at the Xorg source code at all).

The issue from the point of view of my usage case is that the chip works
just fine in the 1440x900@60Hz mode, even though 30318019 > 1024*24400.


I don't want to increase that limit in the driver, as it might 

Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-26 Thread Roger Sewell


Thomas,

Thank you for your reply.

>> Table 10 in the following document says that  1440x900@60Hz is
>> supported:
>> https://www.intel.com/content/dam/support/us/en/documents/motherboards/server/s5520ur/sb/e44031012_s5520ur_s5520urt_tps_r1_9.pdf
> 
> That manual says that the resolution is only supported with at most
> 24-bit colors. The old X code still supports that to some extent, but 
> modern userspace doesn't.

How can I check for sure whether I am using 24 or 32 bits per pixel ?

>> so in particular the chip is said to be a G200e, not the G200SE-A
>> that the kernel module seems to be interpreting it as. 

> It actually is the G200SE-A. It's just named differently by lspci. The
> PCI device id should be 0x0522.

It is indeed 0x0522.

> So the old kernel already did the right thing.

(but if I understand the situation right Xorg didn't ?)

> You can do
> 
>   cat /sys/class/drm/card1-VGA-1/modes
> 
> on the old and new kernel. With a monitor connector, it will tell you
> the supported resolutions.

With the new kernel and the unmodified mgag200 module it tells me that
1440x900 is not on the list - but Xorg still lists it under
Settings->Display (and Wayland doesn't). With the increased bandwidth
module in place it tells me 1440x900 is on the list. I think this is as
you expect as far as the kernel rather than Xorg is concerned.

Thanks,
Roger.


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-25 Thread Thomas Zimmermann

Hi Roger,

thanks for all the information.

Am 24.07.23 um 22:57 schrieb Roger Sewell:

Thomas,

Hello, I'm the user who reported the issue. Definitely happy to help you
sort this out if I can, though my response speed will decrease when term
restarts in October.


I'd be interested in the exact model and the unique_rev_id
(you said A, rev1 ?)


The machine is an Intel SR1625URR server including an S5520UR
motherboard.

Table 10 in the following document says that  1440x900@60Hz is supported:
https://www.intel.com/content/dam/support/us/en/documents/motherboards/server/s5520ur/sb/e44031012_s5520ur_s5520urt_tps_r1_9.pdf


That manual says that the resolution is only supported with at most 
24-bit colors. The old X code still supports that to some extent, but 
modern userspace doesn't.


It's not a Wayland thing, but applications now use Mesa for drawing, 
which doesn't like 24-bit color mode much. Userspace is slowly losing 
the ability to work with anything less than 32-bit colors.




lspci -v returns:

07:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) (prog-if 00 [VGA controller])
Subsystem: Intel Corporation Device 0101
Physical Slot: 5
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at b000 (32-bit, prefetchable) [size=16M]
Memory at b180 (32-bit, non-prefetchable) [size=16K]
Memory at b100 (32-bit, non-prefetchable) [size=8M]
Expansion ROM at 000c [disabled] [size=128K]
Capabilities: 
Kernel driver in use: mgag200
Kernel modules: mgag200

so in particular the chip is said to be a G200e, not the G200SE-A that
the kernel module seems to be interpreting it as. In the lspci return it


It actually is the G200SE-A. It's just named differently by lspci. The 
PCI device id should be 0x0522.



calls itself "rev 02", but the unique_rev_id returned is 0x01, not 02,
and not 00. (My originally suggested solution was that "rev 02" might
correspond to unique_rev_id=0x01 and that one should add 1 to the
unique_rev_id, but Jocelyn indicated that isn't right.)


That rev 02 is the PCI revision number. Matrox also has another revision 
ID named 'unique_rev_id' in the code. Who knows why...




I instrumented a version of the new code by adding printk statements to
a version of the module embodied in a kmod-mgag200 package and observing
messages in the /var/log/messages file. These tell me that:


and if the early-out branches in mga_vga_calculate_mode_bandwidth()
are being taken.


In the "old" code the options to return 0 are NOT being taken, and the
bandwidth returned is the expected value of 30318019.

In the *new* code the options to return 0 are NOT being taken, and the
bandwidth returned is the expected value of 30318019.


Can you figure out how exactly the CPU moves through
mga_vga_mode_valid().


In the "old" code we enter the true part of the if (IS_G200_SE(mdev)),
then the true part of the if (unique_rev_id == 0x01), then return
MODE_BANDWIDTH (i.e. MODE_BAD) at the third if statement in that block.


So the old kernel already did the right thing.



In the *new* code the nearest-named function I could see is
mgag200_mode_config_mode_valid, which returns MODE_OK at the end of the
function if the bandwidth limit is increased to 30100, and returns
MODE_BAD three lines higher up if it is left at 24400.


Nothing has changed in the new kernel.



Moreover if when using the old code we switch to Wayland instead of
Xorg, it doesn't let me pick the 1440x900@60Hz mode at all, but it does
with Xorg (one of the reasons I hadn't used Wayland).


You can do

  cat /sys/class/drm/card1-VGA-1/modes

on the old and new kernel. With a monitor connector, it will tell you 
the supported resolutions.




Therefore I think the reason that the old code allowed use of
1440x900@60Hz was that Xorg somehow didn't properly check the return
value from mga_vga_mode_valid, but Wayland did. Moreover I think that
the latest version of the Xorg stuff does PARTIALLY check that return
value, to the extent that it won't let you actually use that mode, but
does nonetheless present it as a choice when you go to Settings->Display
- and then saves the values it didn't allow you to take in
~/.config/monitors.xml, and on relogin refuses to give you any graphics
at all because it doesn't like those values. But that, of course, is
nothing to do with the mgag200 driver (if it is indeed true - I haven't
looked at the Xorg source code at all).

The issue from the point of view of my usage case is that the chip works
just fine in the 1440x900@60Hz mode, even though 30318019 > 1024*24400.


I don't want to increase that limit in the driver, as it might have 
consequences for a lot of other hardware. And if you assume that 
30318019 * 3 / 4 ~= 22738514, 24-bit color mode fits into the current 
limit.
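
For reference, a small standalone sketch that re-derives these figures.
The formula is a paraphrase of the driver's bandwidth helper, not a copy,
using the standard CVT timing for 1440x900@60 (106.50 MHz dot clock,
1904x934 total); it reproduces the numbers quoted in this thread.

#include <stdint.h>
#include <stdio.h>

/* Approximate mode bandwidth: active pixels per second times bytes per
 * pixel, scaled by 100/1024 as in the driver's limit comparison. */
static uint64_t approx_mode_bandwidth(uint64_t clock_khz,
				      uint64_t hdisplay, uint64_t htotal,
				      uint64_t vdisplay, uint64_t vtotal,
				      uint64_t bits_per_pixel)
{
	uint64_t active = hdisplay * vdisplay;
	uint64_t total = htotal * vtotal;
	uint64_t pixels_per_second = clock_khz * 1000 * active / total;
	uint64_t bytes_per_pixel = (bits_per_pixel + 7) / 8;

	return pixels_per_second * bytes_per_pixel * 100 / 1024;
}

int main(void)
{
	/* 32 bpp: ~30318019 > 24400 * 1024 (24985600), so rev 0x01 rejects it. */
	printf("32 bpp: %llu\n", (unsigned long long)
	       approx_mode_bandwidth(106500, 1440, 1904, 900, 934, 32));
	/* 24 bpp: ~22738514 < 24985600, so it would fit the current limit. */
	printf("24 bpp: %llu\n", (unsigned long long)
	       approx_mode_bandwidth(106500, 1440, 1904, 900, 934, 24));
	return 0;
}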


Jocelyn, should we attempt to make extra 

Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-25 Thread Roger Sewell


Thomas,

Hello, I'm the user who reported the issue. Definitely happy to help you
sort this out if I can, though my response speed will decrease when term
restarts in October.

> I'd be interested in the exact model and the unique_rev_id
> (you said A, rev1 ?) 

The machine is an Intel SR1625URR server including an S5520UR
motherboard. 

Table 10 in the following document says that  1440x900@60Hz is supported:
https://www.intel.com/content/dam/support/us/en/documents/motherboards/server/s5520ur/sb/e44031012_s5520ur_s5520urt_tps_r1_9.pdf

lspci -v returns:

07:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) (prog-if 00 [VGA controller])
Subsystem: Intel Corporation Device 0101
Physical Slot: 5
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at b000 (32-bit, prefetchable) [size=16M]
Memory at b180 (32-bit, non-prefetchable) [size=16K]
Memory at b100 (32-bit, non-prefetchable) [size=8M]
Expansion ROM at 000c [disabled] [size=128K]
Capabilities: 
Kernel driver in use: mgag200
Kernel modules: mgag200

so in particular the chip is said to be a G200e, not the G200SE-A that
the kernel module seems to be interpreting it as. In the lspci return it
calls itself "rev 02", but the unique_rev_id returned is 0x01, not 02,
and not 00. (My originally suggested solution was that "rev 02" might
correspond to unique_rev_id=0x01 and that one should add 1 to the
unique_rev_id, but Jocelyn indicated that isn't right.)

I instrumented a version of the new code by adding printk statements to
a version of the module embodied in a kmod-mgag200 package and observing
messages in the /var/log/messages file. These tell me that:

> and if the early-out branches in mga_vga_calculate_mode_bandwidth()
> are being taken. 

In the "old" code the options to return 0 are NOT being taken, and the
bandwidth returned is the expected value of 30318019. 

In the *new* code the options to return 0 are NOT being taken, and the
bandwidth returned is the expected value of 30318019. 

> Can you figure out how exactly the CPU moves through
> mga_vga_mode_valid().

In the "old" code we enter the true part of the if (IS_G200_SE(mdev)),
then the true part of the if (unique_rev_id == 0x01), then return
MODE_BANDWIDTH (i.e. MODE_BAD) at the third if statement in that block.
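
(For readers following along, the rev-0x01 branch being traced looks
roughly like this; it is a paraphrase of the v5.19 code linked elsewhere
in the thread, not a verbatim copy.)

	if (IS_G200_SE(mdev)) {
		if (unique_rev_id == 0x01) {
			if (mode->hdisplay > 1600)
				return MODE_VIRTUAL_X;
			if (mode->vdisplay > 1200)
				return MODE_VIRTUAL_Y;
			/* third if: 30318019 > 24400 * 1024, so 1440x900@60 is rejected */
			if (mga_vga_calculate_mode_bandwidth(mode, bpp) > (24400 * 1024))
				return MODE_BANDWIDTH;	/* an alias for MODE_BAD */
		}
		/* ... rev 0x02 and later revisions follow ... */
	}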

In the *new* code the nearest-named function I could see is
mgag200_mode_config_mode_valid, which returns MODE_OK at the end of the
function if the bandwidth limit is increased to 30100, and returns
MODE_BAD three lines higher up if it is left at 24400.

Moreover if when using the old code we switch to Wayland instead of
Xorg, it doesn't let me pick the 1440x900@60Hz mode at all, but it does
with Xorg (one of the reasons I hadn't used Wayland).

Therefore I think the reason that the old code allowed use of
1440x900@60Hz was that Xorg somehow didn't properly check the return
value from mga_vga_mode_valid, but Wayland did. Moreover I think that
the latest version of the Xorg stuff does PARTIALLY check that return
value, to the extent that it won't let you actually use that mode, but
does nonetheless present it as a choice when you go to Settings->Display
- and then saves the values it didn't allow you to take in
~/.config/monitors.xml, and on relogin refuses to give you any graphics
at all because it doesn't like those values. But that, of course, is
nothing to do with the mgag200 driver (if it is indeed true - I haven't
looked at the Xorg source code at all).

The issue from the point of view of my usage case is that the chip works
just fine in the 1440x900@60Hz mode, even though 30318019 > 1024*24400. 

If I haven't made anything sufficiently clear, or if you need more info,
please ask.

Best wishes,
Roger.


Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-24 Thread Thomas Zimmermann



Am 24.07.23 um 20:48 schrieb Thomas Zimmermann:

Hi Jocelyn

Am 17.07.23 um 15:29 schrieb Jocelyn Falempe:
A G200SE_A rev 1 user reported that prior to v6.0, he was able to get Xorg
working at 1440x900@60Hz. This somehow bypassed the bandwidth test in the
driver. After v6.0, and the driver refactor, it's no longer possible.


I took that bandwidth number from the old source code at


https://elixir.bootlin.com/linux/v5.19/source/drivers/gpu/drm/mgag200/mgag200_mode.c#L725

 From this code, I don't see how it skipped this test. Maybe the 
refactoring is incorrect.


There's a difference that might be worth investigating: if the unique 
rev equals 0x00, the current code will use the settings for rev 1.



https://elixir.bootlin.com/linux/v6.4/source/drivers/gpu/drm/mgag200/mgag200_g200se.c#L514

Compare this to the old code, which would have taken the settings for 
rev 3 and later:


https://elixir.bootlin.com/linux/v5.19/source/drivers/gpu/drm/mgag200/mgag200_mode.c#L719

Best regards
Thomas



Do you have the opportunity to further debug this issue on the user's 
machine? I'd be interested in the exact model and the unique_rev_id 
(you said A, rev1?) and if the early-out branches in 
mga_vga_calculate_mode_bandwidth() are being taken. Can you figure out 
how exactly the CPU moves through mga_vga_mode_valid()?


Best regards
Thomas


So increase the bandwidth, as the hardware is able to do it.

In mgag200_calculate_mode_bandwidth(), the bandwidth for 1440x900@60 is
30318019, while 24400 * 1024 = 24985600 and 30100 * 1024 = 30822400.
Raising the bandwidth from 24400 to 30100 is enough to allow this mode.

Reported-by: Roger Sewell 
Tested-by: Roger Sewell 
Signed-off-by: Jocelyn Falempe 
---
  drivers/gpu/drm/mgag200/mgag200_g200se.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/mgag200/mgag200_g200se.c b/drivers/gpu/drm/mgag200/mgag200_g200se.c
index bd6e573c9a1a..3b49e30931e1 100644
--- a/drivers/gpu/drm/mgag200/mgag200_g200se.c
+++ b/drivers/gpu/drm/mgag200/mgag200_g200se.c
@@ -437,7 +437,7 @@ static int mgag200_g200se_pipeline_init(struct mga_device *mdev)
  */
 
 static const struct mgag200_device_info mgag200_g200se_a_01_device_info =
-	MGAG200_DEVICE_INFO_INIT(1600, 1200, 24400, false, 0, 1, true);
+	MGAG200_DEVICE_INFO_INIT(1600, 1200, 30100, false, 0, 1, true);
 
 static const struct mgag200_device_info mgag200_g200se_a_02_device_info =
 	MGAG200_DEVICE_INFO_INIT(1920, 1200, 30100, false, 0, 1, true);

base-commit: c2268daa65fb415cfd463016ad54c20afef8f75e




--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)




Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-24 Thread Thomas Zimmermann

Hi Jocelyn

Am 17.07.23 um 15:29 schrieb Jocelyn Falempe:

A G200SE_A rev 1 user reported that prior to v6.0, he was able to get Xorg
working at 1440x900@60Hz. This somehow bypassed the bandwidth test in the
driver. After v6.0, and the driver refactor, it's no longer possible.


I took that bandwidth number from the old source code at


https://elixir.bootlin.com/linux/v5.19/source/drivers/gpu/drm/mgag200/mgag200_mode.c#L725

From this code, I don't see how it skipped this test. Maybe the 
refactoring is incorrect.


Do you have the opportunity to further debug this issue on the user's 
machine? I'd be interested in the exact model and the unique_rev_id 
(you said A, rev1?) and if the early-out branches in 
mga_vga_calculate_mode_bandwidth() are being taken. Can you figure out 
how exactly the CPU moves through mga_vga_mode_valid()?


Best regards
Thomas


So increase the bandwidth, as the hardware is able to do it.

In mgag200_calculate_mode_bandwidth(), the bandwidth for 1440x900@60 is
30318019, while 24400 * 1024 = 24985600 and 30100 * 1024 = 30822400.
Raising the bandwidth from 24400 to 30100 is enough to allow this mode.

Reported-by: Roger Sewell 
Tested-by: Roger Sewell 
Signed-off-by: Jocelyn Falempe 
---
  drivers/gpu/drm/mgag200/mgag200_g200se.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/mgag200/mgag200_g200se.c b/drivers/gpu/drm/mgag200/mgag200_g200se.c
index bd6e573c9a1a..3b49e30931e1 100644
--- a/drivers/gpu/drm/mgag200/mgag200_g200se.c
+++ b/drivers/gpu/drm/mgag200/mgag200_g200se.c
@@ -437,7 +437,7 @@ static int mgag200_g200se_pipeline_init(struct mga_device *mdev)
  */
 
 static const struct mgag200_device_info mgag200_g200se_a_01_device_info =
-	MGAG200_DEVICE_INFO_INIT(1600, 1200, 24400, false, 0, 1, true);
+	MGAG200_DEVICE_INFO_INIT(1600, 1200, 30100, false, 0, 1, true);
 
 static const struct mgag200_device_info mgag200_g200se_a_02_device_info =
 	MGAG200_DEVICE_INFO_INIT(1920, 1200, 30100, false, 0, 1, true);

base-commit: c2268daa65fb415cfd463016ad54c20afef8f75e


--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)




[PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

2023-07-17 Thread Jocelyn Falempe
A G200SE_A rev 1 user reported that prior to v6.0, he was able to get Xorg
working at 1440x900@60Hz. This somehow bypassed the bandwidth test in the
driver. After v6.0, and the driver refactor, it's no longer possible.
So increase the bandwidth, as the hardware is able to do it.

In mgag200_calculate_mode_bandwidth(), the bandwidth for 1440x900@60 is
30318019, while 24400 * 1024 = 24985600 and 30100 * 1024 = 30822400.
Raising the bandwidth from 24400 to 30100 is enough to allow this mode.

Reported-by: Roger Sewell 
Tested-by: Roger Sewell 
Signed-off-by: Jocelyn Falempe 
---
 drivers/gpu/drm/mgag200/mgag200_g200se.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/mgag200/mgag200_g200se.c b/drivers/gpu/drm/mgag200/mgag200_g200se.c
index bd6e573c9a1a..3b49e30931e1 100644
--- a/drivers/gpu/drm/mgag200/mgag200_g200se.c
+++ b/drivers/gpu/drm/mgag200/mgag200_g200se.c
@@ -437,7 +437,7 @@ static int mgag200_g200se_pipeline_init(struct mga_device *mdev)
  */
 
 static const struct mgag200_device_info mgag200_g200se_a_01_device_info =
-	MGAG200_DEVICE_INFO_INIT(1600, 1200, 24400, false, 0, 1, true);
+	MGAG200_DEVICE_INFO_INIT(1600, 1200, 30100, false, 0, 1, true);
 
 static const struct mgag200_device_info mgag200_g200se_a_02_device_info =
 	MGAG200_DEVICE_INFO_INIT(1920, 1200, 30100, false, 0, 1, true);

base-commit: c2268daa65fb415cfd463016ad54c20afef8f75e
-- 
2.41.0