I am trying to enable I/O port tracing on the current xserver head on my home machine (Linux 2.6.28 on a 32-bit x86 Pentium 4, ProSavageDDR-K as the primary card, Oak OTI64111 as the secondary card) in order to learn about the register initialization performed by the video BIOS of both the Savage and the Oak chipsets:

* For savage, I want to eventually see the POST port accesses as they occur in VESA, so that the current driver can do the same port enabling in the case of a savage as a secondary card. Currently, the xorg driver can initialize a secondary savage without the BIOS (but see below for a caveat), but the colors are washed out and horrible artifacts appear on any attempt at accelerated operations. The same issue happens with the savagefb kernel framebuffer driver.
* For oak, I want to peek at the register initialization for mode switching in VESA, in order to gain a better understanding towards writing a driver for the chipset.

Now, I tried to perform the changes shown in the attached patch, but without success - the server shows no output that hints at a trace. I tried disabling ioperm() as the code comments suggest, but it seems to be made redundant by the iopl(3) call on the same line, and if I disable both, I get SIGSEGV because the port accesses are not enabled in vgahw. So how should I properly enable I/O port tracing in the current xserver? Maybe the code comments are out of date?
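For reference, this minimal standalone sketch (not xserver code; port 0x3da is just an arbitrary VGA example) illustrates why ioperm() looks redundant once iopl(3) has succeeded: iopl(3) raises the process I/O privilege level so that all ports become accessible, while ioperm() only grants access to the specific range it is asked for.

#include <stdio.h>
#include <sys/io.h>

int main(void)
{
    if (iopl(3) < 0) {              /* requires root */
        perror("iopl");
        return 1;
    }
    /* After iopl(3), port I/O such as inb(0x3da) works even without an
     * explicit ioperm(0x3c0, 0x20, 1) covering the VGA range. */
    unsigned char v = inb(0x3da);   /* VGA input status 1, as an example */
    printf("0x3da = 0x%02x\n", v);
    return 0;
}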

Another question I have is this: as far as I understand, PCI video cards have to run the POST (or an equivalent operation) in order to execute the chipset-specific hocus-pocus that enables legacy VGA port access (0x3c0 through 0x3df), so only one chipset can be mapped into that I/O address range at a time (right?). When initializing a secondary card via POST, the real-mode code of the secondary card will presumably also attempt to map its own registers into that range. So what steps does the xserver take to move the primary card out of the way (if at all) so that the secondary card initializes properly? What happens if the drivers for both chipsets require access to the legacy I/O ports for normal operation (for example, if both are driven by the VESA driver)? How can I tell (from lspci output or from other sources) which card is currently mapped into the legacy I/O range?

These questions arise from the fact that the current xserver head, despite having a correction for the libpciaccess reading of the ROM (https://bugs.freedesktop.org/show_bug.cgi?id=18160), still locks up in int10 after reading the Oak ROM BIOS and trying to initialize it as a secondary card (with savage as primary). I want to check whether the wrong PCI chipset is mapped at the VGA I/O port range, or whether the wrong POST is being executed. I know that it is not enough to look at the enabled status of the PCI card, since I have enabled both on my machine, and the primary one (the one initialized at boot time) is still in control of the VGA I/O port range.
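As a starting point for the "which card is it" question, the best I have come up with is reading the PCI command register at config offset 0x04 and looking at the I/O-space and memory-space enable bits (the same bits lspci -v reports as "I/O+" and "Mem+" in the Control line). A rough sketch, assuming sysfs config-space access and with a placeholder device address, is below - but as noted above, this only tells me whether a card is allowed to decode port cycles at all, not which card the bridges actually route legacy VGA cycles to.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Placeholder address; substitute the savage or oak card's slot. */
    const char *cfg = "/sys/bus/pci/devices/0000:01:00.0/config";
    FILE *f = fopen(cfg, "rb");
    if (!f) { perror(cfg); return 1; }

    uint8_t buf[6];
    if (fread(buf, 1, sizeof buf, f) != sizeof buf) { fclose(f); return 1; }
    fclose(f);

    uint16_t command = buf[4] | (buf[5] << 8);   /* command register at 0x04 */
    printf("I/O space enable:    %s\n", (command & 0x1) ? "yes" : "no");
    printf("Memory space enable: %s\n", (command & 0x2) ? "yes" : "no");
    return 0;
}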

When I boot my home machine with Oak as the primary, the savage PCI card ends up disabled (as reported in lspci). If I then attempt to run the savage driver for xserver without further ado, and without using the VGA BIOS to set modes, the xserver hangs (endless loop trying to enable the acceleration registers). I have to manually enable the card with setpci or sysfs before the driver initializes it it properly. Somewhere the xserver should be doing this for me. Where? In the xserver code, or the driver code? Is it ok to use libpciaccess to enable the card from within the savage driver?
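Something along these lines is what I have in mind - a rough sketch only, with a hard-coded placeholder slot address (a real driver would of course use the pci_device it was probed with, and would not call pci_system_init() itself inside the server):

#include <stdint.h>
#include <pciaccess.h>

int enable_card_example(void)
{
    if (pci_system_init() != 0)
        return -1;

    /* Placeholder slot; inside the driver we already have the device. */
    struct pci_device *dev = pci_device_find_by_slot(0, 1, 0, 0);
    if (!dev) { pci_system_cleanup(); return -1; }

    uint16_t command;
    pci_device_cfg_read_u16(dev, &command, 0x04);  /* PCI command register */
    command |= 0x0003;                             /* I/O + memory space enable */
    pci_device_cfg_write_u16(dev, command, 0x04);

    pci_system_cleanup();
    return 0;
}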

Still another question. From the savage driver code, I see that the chip exposes a replica of the VGA register range as a range of MMIO, declared as one of the card's PCI resources. This allows the VGA legacy registers to be programmed when I boot with Oak as primary, without having the savage driver run the POST (which ties back to the first issue). Is it a requirement for all PCI cards to have a replica of the VGA registers somewhere that can be programmed when the VGA legacy mapping is not available? Or are there known PCI cards that require the legacy I/O ports to be enabled for basic mode switching?
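To make what I mean concrete, here is a small illustrative sketch of programming a CRTC register through such an MMIO copy instead of through the legacy ports. The 0x8000 offset of the VGA mirror is an assumption for illustration only, not taken from the real savage driver:

#include <stdint.h>

#define VGA_MMIO_BASE  0x8000                     /* assumed offset of the VGA mirror in the MMIO BAR */
#define CRT_INDEX      (VGA_MMIO_BASE + 0x3d4)    /* mirrored CRTC index register */
#define CRT_DATA       (VGA_MMIO_BASE + 0x3d5)    /* mirrored CRTC data register */

static inline void vga_mmio_write_crtc(volatile uint8_t *mmio,
                                       uint8_t index, uint8_t value)
{
    mmio[CRT_INDEX] = index;   /* select the CRTC register */
    mmio[CRT_DATA]  = value;   /* program it via the MMIO mirror */
}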

--
perl -e '$x=2.4;print sprintf("%.0f + %.0f = %.0f\n",$x,$x,$x+$x);'

diff -ur /home/alex/instaladores-linux/xserver/xorg-git/xserver/hw/xfree86/int10/helper_exec.c xserver/hw/xfree86/int10/helper_exec.c
--- /home/alex/instaladores-linux/xserver/xorg-git/xserver/hw/xfree86/int10/helper_exec.c	2008-12-03 10:59:10.000000000 -0500
+++ xserver/hw/xfree86/int10/helper_exec.c	2009-01-13 22:53:09.000000000 -0500
@@ -18,7 +18,7 @@
 #include <xorg-config.h>
 #endif
 
-#define PRINT_PORT 0
+#define PRINT_PORT 1
 
 #include <unistd.h>
 
@@ -33,7 +33,7 @@
 #ifdef _X86EMU
 #include "x86emu/x86emui.h"
 #else
-#define DEBUG_IO_TRACE() 0
+#define DEBUG_IO_TRACE() 1
 #endif
 #include <pciaccess.h>
 
