Spec Files and the *.ms
What type of files are the .ms files in the Spec directory? I was just wondering. Thanks, Wade ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: Your details
Thank you for contacting Network Solutions. We no longer use e-mailboxes with the extension @netsol.com. Please replace @netsol.com with @networksolutions.com and resubmit your question. Sincerely, Network Solutions Customer Service This e-mail was sent from a notification-only address and cannot receive incoming e-mail messages or replies. If you have any questions, contact Customer Service at [EMAIL PROTECTED] or by phone at 1-888-642-9675 within the U.S. and Canada or at 1-703-742-0914 outside the U.S. © 2003 Network Solutions, Inc. All rights reserved.
Re: CVS XFree (savage driver) xsuite failures
On Tue, 20 Jan 2004, Nicolas Joly wrote:

If your lines are correct, you should be able to run http://www.xfree86.org/~mvojkovi/linetest.c without artifacts.

The lines seem wrong, as I do see artifacts when running the program with zero-width lines (it works fine for w > 0).

If you add Option "XaaNoSolidTwoPointLine" to the Section "Device" of the XF86Config file, that will force XAA to use only the driver's Bresenham line export. Does that change the behavior? Mark.
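For reference, the suggested option goes in the Device section of the XF86Config file. A minimal sketch (the Identifier and Driver values here are illustrative, not taken from the reporter's config):

```
Section "Device"
    Identifier "Savage"                      # illustrative name
    Driver     "savage"
    Option     "XaaNoSolidTwoPointLine"      # disable XAA's two-point line path
EndSection
```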
Re: -rpath not used under Linux
On Thu, 8 Jan 2004, David Dawes wrote:

Hi, so I downloaded the latest CVS version of xc and compiled with defaults on Linux. Running ldd ./programs/xdm/xdm shows the X libs are resolved from /usr/X11R6/lib. OK, I edited /etc/ld.so.conf, commented out the line /usr/X11R6/lib, and reran ldconfig. Ran ldd ./programs/xdm/xdm again, and yes, the X libs are unresolved. Shut down your xdm session and try to start it again: you will not be able to fire it up unless you edit ld.so.conf again and rerun ldconfig. I think the default USRLIBDIRPATH, er, SHLIBDIRPATH should be compiled in by default. Yes, some systems will need either -rpath or -R, as noted in this thread. Ideas? Martin

On Wed, Jan 07, 2004 at 08:00:27PM +0100, Mario Klebsch wrote:

Hi! On Wednesday, 07.01.04, at 14:00, Martin MOKREJS wrote: I believe -Wl,-rpath,$(SHLIBDIRPATH) should be used on Linux and possibly all Unix platforms.

Me too; in fact, I am convinced this should be done. But this probably is a religious issue.

It is done on most platforms that support this type of thing. The Linux settings (in lnxLib.rules) are:

  #if LinuxBinUtilsMajorVersion >= 26
  # ifdef UseInstalled
  #  if LinuxBinUtilsMajorVersion >= 27
  #   define ExtraLoadFlags -Wl,-rpath-link,$(USRLIBDIRPATH)
  #  endif
  # else
  #  define ExtraLoadFlags -Wl,-rpath-link,$(BUILDLIBDIR)   <===
  # endif
  #else
  # define ExtraLoadFlags -Wl,-rpath,$(USRLIBDIRPATH)
  #endif

LinuxBinUtilsMajorVersion is somewhat mis-named. For binutils versions x.y.z:

  LinuxBinUtilsMajorVersion = x * 10 + y     if x < 2 || (x == 2 && y <= 9)
  LinuxBinUtilsMajorVersion = x * 100 + y    otherwise

I don't know why ExtraLoadFlags is set this way. The setting I've marked with <=== is what gets used on any modern Linux. If it is a religious issue, it's not an XFree86 religious issue, but probably a Linux one :-). Or maybe it is just an oversight. What I am most curious about is why the original reporter is seeing a problem and most others apparently do not.
I never have, provided I re-run ldconfig after installing new libraries (and I never use LD_LIBRARY_PATH, which I agree is not a solution). David -- Martin Mokrejs [EMAIL PROTECTED] PGP5.0i key is at http://www.natur.cuni.cz/~mmokrejs
Question regarding VT switches
Hi, I have a problem when I boot Linux (2.4.x and 2.6.x) with video=atyfb and use the XFree86 'ati' video driver: whenever I switch from X to any VT console (tty1, tty2, ...), a lot of green blocks appear blinking on the screen, and the console sessions then become unusable (it's OK to switch back to X). When booting with vesafb, these green blocks don't appear when switching to a VT, but a blank screen appears instead. Does anyone have an idea of where I should start looking to fix this issue (Linux's vesafb + atyfb, or XFree86)? I don't know if this is related to the XFree86 'ati' driver or to the atyfb/vesafb kernel drivers, but any help will be kindly appreciated. This is how 'lspci' identifies my video card:

  02:0d.0 VGA compatible controller: ATI Technologies Inc 3D Rage II+ 215GTB [Mach64 GTB] (rev 9a) (prog-if 00 [VGA])
          Subsystem: ATI Technologies Inc 3D Rage II+ 215GTB [Mach64 GTB]
          Flags: bus master, stepping, medium devsel, latency 0, IRQ 5
          Memory at ec00 (32-bit, non-prefetchable) [size=16M]
          I/O ports at b800 [size=256]
          Memory at eb80 (32-bit, non-prefetchable) [size=4K]
          Expansion ROM at ee7c [disabled] [size=128K]

Thanks, Lucas
Re: Question regarding VT switches
Oh, I forgot to mention the XFree86 release. I'm running XFree86 4.3 here (I didn't test with the latest CVS version yet). Lucas

On Tuesday 20 January 2004 20:10, Lucas Correia Villa Real wrote: [snip]
Re: -rpath not used under Linux
On Tue, Jan 20, 2004 at 10:54:05PM +0100, Martin MOKREJS wrote:

[snip] I think the default SHLIBDIRPATH should be compiled in by default. Yes, some systems will need either -rpath or -R, as noted in this thread.

I don't have any objections to doing this on Linux. As I said, we already do it on a range of other platforms, and I'm not sure why Linux is something of an exception in this regard. Does anyone have a good reason not to do this?

David -- David Dawes, developer/release engineer, The XFree86 Project, www.XFree86.org/~dawes
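Illustratively, what is being proposed amounts to making the -rpath branch of lnxLib.rules take effect on Linux. A site wanting to try it today could sketch an override in its host.def, assuming ExtraLoadFlags is not already defined elsewhere in its configuration:

```
/* host.def sketch (illustrative): embed the library search path
 * into linked binaries so ld.so.conf/ldconfig are not required. */
#define ExtraLoadFlags -Wl,-rpath,$(USRLIBDIRPATH)
```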
Re: CVS XFree (savage driver) xsuite failures
Hello, Mark Vojkovich wrote: On Mon, 19 Jan 2004, David Dawes wrote: Tests for XChangeKeyboardControl Test 9: FAIL Test 10: FAIL That has been showing up for a while. It should be followed up. That's been showing up for a couple of years. It's a regression.

I think the tests are incorrect. Both tests try to set keyboard LEDs (using XChangeKeyboardControl), then read the LED state back (XGetKeyboardControl) and compare the values. The difference between the tests is that the first one tries to change only some of the LEDs (it uses a mask) and the second one tries to set all LEDs together (without specifying a particular LED number). But some LEDs can be protected from being changed by client applications and reflect only the keyboard state. The man pages for X{Change|Get}KeyboardControl say:

  XChangeKeyboardControl - ...the state of that LED is changed, if possible...
  XGetKeyboardControl - ...each bit set to 1 in led_mask indicates an LED that is lit...

(Note "if possible" and "an LED that is lit".) I read this as: Get returns the actual state of the LEDs, i.e. those that are not protected and were changed by Change, and those that are protected but are switched on reflecting the keyboard state. But the tests are obviously written on the assumption that Get should return the LED state exactly as it was written to the keyboard by the Change call. That would require either that all LEDs be unprotected (which is not the default), or that the keyboard control structure keep the values written by XChangeKeyboardControl call(s) and that an XGetKeyboardControl request simply return that record instead of the real state of the LEDs.

BTW, the fix for this regression is very simple. We just have to remove one line in dix/devices.c where the LED mask field of the keyboard control structure is reloaded with the actual LED state. The tests will then pass. But there will no longer be any way (in the core protocol) to learn the real state of the LEDs.

Tests for XRebindKeysym Test 1: FAIL The XRebindKeysym failure goes away if XKB is disabled.
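The masked-vs-all-LEDs update that the two failing tests exercise can be modeled in a few lines. This is an illustrative sketch only, NOT the actual dix/devices.c code; the function name and the "led == 0 means all LEDs" encoding are my assumptions:

```c
#include <assert.h>

/* Model of the core-protocol LED update: test 9 changes a single LED,
 * test 10 sets all LEDs at once.  led == 0 stands for "all LEDs" (the
 * request omitted the led field), otherwise led is 1..32; led_mode is
 * nonzero for LedModeOn, zero for LedModeOff. */
static unsigned int change_leds(unsigned int state, int led, int led_mode)
{
    unsigned int mask = (led == 0) ? 0xffffffffu : (1u << (led - 1));
    return led_mode ? (state | mask) : (state & ~mask);
}
```

The tests assume XGetKeyboardControl returns exactly the value computed this way; the server instead reloads led_mask from the hardware, so protected LEDs (those mirroring lock-key state) may differ.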
Yes, it's an XKB problem/feature. It is a feature. :)

This problem can be fixed easily too, with a one-word patch. But there is one unclear thing there. The RebindKeysym mechanism allows tying an arbitrary string to a keysym, or to a combination of a keysym and a set of modifiers. The binding itself works well; the problem is the interpretation of the modifier set. For example, suppose we have a key with two keysyms [a A] and want to bind two different strings to the combinations Alt+a and Alt+A. How should we specify the second combination: Alt + 'A', Alt + Shift + 'a', or Alt + Shift + 'A'? The core protocol assumes the third variant, i.e. it takes the keysym chosen with the Shift modifier taken into account, but also checks all the modifiers obtained from the key event. But the XKB-aware XLookupString 'consumes' all modifiers used in choosing the keysym and hides them from the routine that checks the string-to-key bindings, i.e. it expects that the combination is just Alt + 'A'. BTW, this behaviour can be switched on/off with a special 'client side XKB' flag, but by default XLookupString 'eats' the consumed modifiers.

Thus the problem is which modifier set the binding-check routine should use: should it always be the original 'state' field from the key event, or may the consumed modifiers be removed from consideration? If we require full compatibility with the core protocol there, the answer is obvious. But some calls in the XKB-aware Xlib already differ from the core-protocol ones, and the first form of that combination seems more logical to me (IMHO, of course). Side note: I wonder if anybody (or anything) actually uses this 'rebind keysym' feature anywhere.

On the other hand, the test itself could be changed. One way is to make it XKB-aware and have it set the needed flag (the one that switches XLookupString back to the 'core protocol like' behaviour). Another way is to not use the Shift modifier (which can be 'eaten' under some circumstances) there.
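The disagreement between the two lookups can be sketched as follows. This is an illustrative model, not Xlib code; the mask macros and helper functions are hypothetical stand-ins (the real masks are ShiftMask and Mod1Mask). For Alt+Shift+a, the keysym lookup yields 'A' and "consumes" Shift in the process:

```c
#include <assert.h>

#define SHIFT 0x01u   /* stand-in for ShiftMask */
#define ALT   0x08u   /* stand-in for Mod1Mask  */

/* Core-protocol view: bindings are matched against the full modifier
 * state from the key event, so Shift stays in the mask. */
static unsigned int core_binding_state(unsigned int event_state)
{
    return event_state;
}

/* XKB-aware XLookupString (default behaviour): modifiers consumed
 * while choosing the keysym are hidden from the binding check. */
static unsigned int xkb_binding_state(unsigned int event_state,
                                      unsigned int consumed)
{
    return event_state & ~consumed;
}
```

A binding registered for Alt + Shift + 'A' matches under the first rule; under the second, only a binding for Alt + 'A' matches, which is exactly why the test fails with XKB enabled.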
All other modifiers (except Caps) would be interpreted equally in both cases (with and without XKB).

--
  Ivan U. Pascal       | e-mail: [EMAIL PROTECTED]
  Administrator of     | Tomsk State University
  University Network   | Tomsk, Russia
Re: [Dri-devel] MGA font corruption revisited - now reproducible
On Tue, Jan 20, 2004 at 08:23:55AM -0800, Alex Deucher wrote: I don't run XFree86 except when trying to hunt DRI-related bugs. It's been well over a year since I really used XFree86, and I honestly don't remember if DPMS ever worked with the second head. I don't have a second monitor to test right now.

I just uploaded a patch to the bug tracker that makes DPMS work on the second head, among other things (i2c/maven related).

If you copied any code directly from the mga FB driver, you need to ask Petr Vandrovec if you can release it with an X11 license, because the FB driver is GPL'ed. I think in the past Petr said he didn't care, but it's worth asking again. FWIW, I'd love to see native maven support in the X11 driver.

No code was copied, only some defines. I need other people to check the code and tell me if it will break on other video cards. I only have a G400 DH, but there are the G450, G550, G200 DH, G200 non-maven DH, etc., which need to be tested, and some changes were made to the main driver code too, so there is a chance I made a mistake that would affect even non-G-series Matrox cards. The main thing I am worried about is how some of the maven registers I used will behave on different cards. Right now I am trying to get DDC working on port 2 so I can be sure my i2c code is 100% correct.

Someone needs to track down the bug that causes a server crash and subsequent lockup if a dualhead config is used but mga_hal is not available (either not around, or the driver wasn't compiled with support for it). I thought I fixed it with a one-liner in that patch, but it turns out that I was using the wrong config at the time to test it. -- Ryan Underwood, [EMAIL PROTECTED]
Re: [Dri-devel] MGA font corruption revisited - now reproducible
--- Ryan Underwood [EMAIL PROTECTED] wrote: [snip]

You might ask Petr or one of the kernel fbdev or directfb developers; they might be able to help you. Unfortunately, all my Matrox cards have either died or are no longer around :( Alex
Re: [Dri-devel] MGA font corruption revisited - now reproducible
On Tue, Jan 20, 2004 at 02:13:50PM -0800, Alex Deucher wrote: [snip] You might ask Petr or one of the kernel fbdev or directfb developers; they might be able to help you.

I got DDC working. It was my second monitor that was the problem; its EDID data seems to be corrupt. It doesn't even work on the first head, and I can read my first monitor's EDID on the second head, so it looks like we are in business. -- Ryan Underwood, [EMAIL PROTECTED]