Re: denunciation against Apple Inc
djamil wrote: Now I see their new gadget, the iPhone, doesn't even have 3G ... It uses my idea of two separate cursors. You can't see it in their version because it's a touchscreen, so there is no mouse, but the way you use two fingers to zoom a picture from both opposed sides IS two cursors grabbing the picture from each angle. You can email me; I'll be more than glad to even call you from France and explain it to you :) I announced this idea back between 2001 and 2004, when I was working for Server-Express in Paris, France. My mail was [EMAIL PROTECTED], and the list used to be called xfree86-expert or so. I also program in C/C++ but never had the time to participate in XFree86. I am not trying to get your help to sue them for money, but if you can for XFree86, why not?

There's no cause for a lawsuit unless you patented the idea. If not, then anyone is free to reimplement it as often as they like. Also, XFree86 is not now and never has been GPL. The XFree86 license does not prohibit others from using the ideas in the code.

I am not trying to get my name out there by any means, but it hurts to see these folks using our ideas to shine on the desktops of average users.

Why does it hurt? Isn't that the whole spirit of open source? -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Is there an Xlib clone?
intmail wrote: I would like to know if a clone of Xlib exists. Programming under Linux with Xlib, I am looking for libraries like Xlib (just to compare and test them). I know that there are Qt, GTK, etc., but they do not interest me, because people say that both are based on the same raw Xlib and run only under Unix-like systems.

I don't understand your question. If you need Xlib, then you build Xlib. Why do you need a clone?

Also, I am wondering if systems based on Mac or Windows have enough tools or libraries to connect to an X server.

Mac OS is Unix-based, so it's easy to build X clients. For Windows, you can certainly do this in a Cygwin or MinGW environment. However, you should make sure that you are really asking the question you meant to ask. Running an X client on Windows means that the application runs on Windows, but its window appears on an X-based computer. Is that what you wanted? -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list Devel@XFree86.Org http://XFree86.Org/mailman/listinfo/devel
Re: WD90C24 Anyone?
[EMAIL PROTECTED] wrote: I have an old ThinkPad 750P. It uses the WD90C24 chip, which was in the old svga driver. What would it take to port that to the new XFree86 code? I'm not above writing assembly code or digging in here, I just don't know where to start or how much effort it might take... swatting a fly or eating an elephant?

Holy moly! You have a whopping 1 megabyte of video RAM there. Will it work with the VESA fb driver? If not, then you might as well give up. I have the source code for their old Windows 3.1 driver, and it is more than 76,000 lines of 16-bit x86 assembler. The blitter provided virtually no acceleration, so you won't really be giving anything up by using the fb driver. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: x86emu emulation problem
jf simon wrote: Hi,

2- The same code as seen from ndisasm:

68DA  A00080    mov al,[0x8000]
68DD  04F5      add al,0xf5
68DF  0002      add [bp+si],al
68E1  C8008015  enter 0x8000,0x15
68E5  0E        push cs
68E6  0106C800  add [0xc8],ax
68EA  80100E    adc byte [bx+si],0xe
68ED  0105      add [di],ax
68EF  C800800B  enter 0x8000,0xb
68F3  0E        push cs
68F4  0104      add [si],ax
68F6  C8008006  enter 0x8000,0x6
68FA  0E        push cs
68FB  0102      add [bp+si],ax
68FD  E80080    call 0xe900   !!!HERE AGAIN

This is probably data -- either font data or VGA register tables. Can you trace backwards any more and figure out how you got to 68DA?

You are right. I have found that the problem was a JMP SHORT which was incorrectly landing in that part of the VGA BIOS. The relative displacement was negative (it was 0xBA), but the JMP was treating it as a jump to [PC]+0xBA rather than applying signed arithmetic. Setting GCC's -fsigned-char switch made the signed displacement apply correctly and solved the problem. I didn't know that the char type was unsigned by default.

On a different issue, I think that the emulator may be wrong, as it sometimes fetches values from the DATA segment even if CS was previously selected as the source segment. For example, in x86emu/ops.c, see [*]:

if (M.x86.mode & SYSMODE_PREFIX_DATA) {
    u32 destval, srcval;
    DECODE_PRINTF("TEST\tDWORD PTR ");
    destoffset = decode_rm00_address(rl);
    DECODE_PRINTF(",");
    srcval = fetch_long_imm();
    DECODE_PRINTF2("%x\n", srcval);
    destval = fetch_data_long(destoffset);
    TRACE_AND_STEP();
    test_long(destval, srcval);
} else {
    u16 destval, srcval;
    DECODE_PRINTF("TEST\tWORD PTR ");
    destoffset = decode_rm00_address(rl);
    DECODE_PRINTF(",");
    srcval = fetch_word_imm();
    DECODE_PRINTF2("%x\n", srcval);
    destval = fetch_data_word(destoffset);   [*]
    TRACE_AND_STEP();
    test_word(destval, srcval);
}

[*]: shouldn't that be a fetch from the CS segment, since the mode (in M.x86.mode) is not of the DATA type?

No.
When they say SYSMODE_PREFIX_DATA, they are talking about the 0x66 prefix, which Intel calls the operand size override. That determines whether the instruction uses 16-bit units or 32-bit units. Compare with SYSMODE_PREFIX_ADDR, the 0x67 prefix, which Intel calls the address size override; that determines whether the addresses are 16 or 32 bits wide. The fetch_data_long and fetch_data_word functions will use the segment overrides to decide which segment register to use. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Xfree86 drivers versus kernel drivers
jf simon wrote: I am having some difficulty understanding the fundamental differences between xfree86 drivers and linux kernel drivers. Is there a good reference somewhere?

I'm not sure a reference is really necessary. The only problem is that the word "driver" is overloaded. XFree86 drivers and kernel drivers are both dynamically loadable modules, like a Linux shared object or a Windows DLL. XFree86 happens to have its own module loader, so that a single XFree86 driver binary can be loaded regardless of operating system. Kernel drivers are loaded into the kernel and run in privileged kernel mode. XFree86 drivers are loaded by the XFree86 server process. It is a user-mode process, although it must run as root in order to touch I/O ports. Historically, the original Windows NT 3.x had the exact same design. Display drivers were actually user-mode DLLs that ran in a separate process (called CSRSS). Users complained about the task-switch overhead, and so Microsoft moved display drivers into the kernel in NT 4. It's still not clear this was a net win.

I have read that xfree86 drivers are really user space programs that map the graphics video memory and then access it.

Basically correct, although XFree86 drivers are not really programs. They are just modules that are loaded by the XFree86 server program.

If so, does xfree86 need anything at all from the linux kernel stuff that is in ./drivers/video? Or is that stuff just needed by linux so that it can print out boot messages before X starts?

Remember that XFree86 is not a fundamental part of Linux. It's just another Linux application. Linux runs perfectly well without XFree86, and the kernel video stuff supports that. Most XFree86 drivers use the Linux kernel video stuff only for mapping the frame buffer into memory (via the mmap system call). Some drivers use more than that. If a graphics chip requires the use of DMA, as some do, then the XFree86 driver has to rely on a kernel component for help.
Almost all of the OpenGL drivers need a kernel component for that reason. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Xfree86 drivers versus kernel drivers
Enrico Weigelt wrote: * Tim Roberts [EMAIL PROTECTED] wrote: Historically, the original Windows NT 3.x had the exact same design. Display drivers were actually user-mode DLLs that ran in a separate process (called CSRSS). Users complained about the task-switch overhead, and so Microsoft moved display drivers into the kernel in NT 4. It's still not clear this was a net win.

As far as I remember, the NT model was much more low-level than X11, so the ratio of data passed between processes to data rendered within the display server was much worse than on X11. But I'm not an NT expert ...

We're getting a bit far afield here, but I'm always willing to orate at length on arcane topics of little interest to anyone. The XFree86 approach is not all that different from the old NT approach. The Xlib application interface is similar to the GDI application interface. The XAA driver interface is similar in many ways to the GDI driver interface. The major difference is that, whereas X crosses that gap by feeding X protocol through a socket (thereby enabling client and server on different machines), NT crossed that gap by using an RPC mechanism.

BTW: could anyone point out if there are major differences between traditional xf86 and current Xorg?

In practical terms, there are no major differences. X.org branched from XFree86 at version 4.3.99, so at that point they were identical. I would judge that there has been more activity on the source base since then by X.org, but I may be out of touch.

The Linux video drivers are their own layer. They provide lots of low-level things, i.e. device-independent framebuffer access and some rendering primitives. xf86/xorg - when running on GNU/Linux - sits on top of it.

Only in a fleeting way. As I said, most 2D XFree86 drivers use the kernel video drivers to map the frame buffer into memory, and nothing else.
Last time I looked, the kernel video drivers did not expose any graphics accelerator features at all, making them useless to XFree86. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: How does the Xserver build the makefile for a driver?
vichy.kuo wrote: Dear All: I need to build the via driver within XFree86 4.3, but I found that via didn't support the driver until 4.4. I copied the via folder directly into the driver directory in 4.3, but the makefile cannot be built after typing make World. Is there any configure file I need to modify?

It isn't that easy. There were a number of changes between 4.3 and 4.4, so a fair amount of source code modification will probably be required. Have you tried just loading the 4.4 driver in 4.3? If it complains about ABI problems, you can try the -ignoreABI switch. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: How does the Xserver build the makefile for a driver?
kuo chechun wrote: I just tried what you showed me, and the server did complain: the module ABI minor version (7) is newer than the server's version (6). Would you please tell me how to use the -ignoreABI switch?

The -ignoreABI switch is a parameter to the X command that launches the server. The method you use depends on how you start the server. If you use startx from a command line, you can say startx -- -ignoreABI. If you run a version of xdm at boot time, you probably have a file called /etc/X11/Xservers that gives the command lines. You can add the switch there. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Questions about XAA
cckuo wrote: Dear X friends: I have some questions about the behavior of XAA. Below are the log messages I excerpted from the log file I attached:

(II) VIA(0): Using XFree86 Acceleration Architecture (XAA)
    8x8 color pattern filled rectangles
    Screen to Screen color expansion
    Image Writes
    Offscreen Pixmaps
    Setting up tile and stipple cache:
    6 128x128 slots
    32 8x8 color pattern slots

The messages below that were printed from XAAInitPixmapCache in xaaPCache.c by clipping the areas passed from FBManager. And here come my questions:

1) How does XAA determine the size and number of tile and stipple cache slots for a given graphics controller?

Let's say your video memory is 1024x1024, a once common size. You'll probably run your monitor at 1024x768. That leaves a 1024x256 chunk of memory available for other things, such as offscreen bitmaps and tile caches. XAA will try to carve that up into pieces as large as possible, as squares, in powers of 2. In your case, there was probably an area reserved at the end of the buffer for the cursor image that made it impossible to use all 256 rows, so it fell back to 128x128.

2) Are there any graphics controllers with fixed features, etc., built directly into XAA?

I'm not sure what you mean. Almost every driver in XFree86 has XAA handlers for the common operations, like screen-to-screen copy, text, mono-to-color expansion, memory-to-screen copy, and so on. There is no standard method of implementing these accelerations, so the only way for XAA to use them is to have code in the driver.

3) Is there any way to specifically set the slot sizes and number of slots for a given graphics controller? I mean, are there options I can set in my configuration file, or is the only way to modify the source code of the frame buffer manager?

No. There's really no point. XAA knows what it needs, and it suballocates as required. What would you hope to gain? By the way, you have the full source code available to you. You can just look this up.
-- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: how to change mouse select behavior?
Thomas Dickey wrote: On Wed, 8 Mar 2006, Jeff Chua wrote: It's xterm. I'm referring to double-clicking on 1233:03030-30303, and it'll select the whole 1233:03030-30303 instead of 1233.

In #208 that was done by changing the charClass resource in the XTerm app-defaults file. I commented out that line (near the end of the file) and added support for regular expressions in xterm's code (but did not configure it in the app-defaults file). There is a comparable example using regular expressions to select URLs at the bottom of the app-defaults file, also commented out.

This paragraph suggests to me that the OP can restore this behavior for himself by adding the appropriate charClass definition into his private Xresources. Do I misunderstand? -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: X Error : BadLength (poly request too large or Internal Xlib length Error) 16
[EMAIL PROTECTED] wrote: Hi to all. I am building a KDE application. When I pause the current pthread and invoke a dialog in another thread, the following error comes up:

X Error : BadLength (poly request too large or Internal Xlib length Error) 16
Major opcode : 18
Minor opcode : 0
Resource id : 0x2005375

This error does not always occur. Can anyone help me to solve this problem?

We just had this conversation about the problems with multithreading Xlib. Have you taken the suggestions that were already given? -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Xlib : sequence lost (0x1718e 0x71a0) in reply to 0x0!
[EMAIL PROTECTED] wrote: What do you mean by separate display connections? The case is this:

I mean a display connection returned by XOpenDisplay.

Right. I think what Carsten is saying is, if you might need to pause a thread in the middle of Xlib, then that thread needs to make its own call to XOpenDisplay, so that it has its own connection, separate from the rest of the application. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: copying with the middle mouse button
Clemens Zeidler wrote: Hi, could anybody tell me how this works? I mean, how does the x-server know which text is selected? Must every application send a message to the xserver that something is selected? Or does the xserver send a copy command to the active application when a mouse button is released?

The burden is entirely on the application to decide what gets selected and when it gets updated. Yes, the app has to send a message to the server with the selection. The clipboard process is more complicated in X than it is in Windows. If you want to read about it, chapter 10.3 of volume I of the old O'Reilly books on Xlib describes the process. You can read about XSetSelectionOwner and XConvertSelection, and the SelectionNotify event. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Xaw: Label widget class
Alexander Pohoyda wrote: There's a feature to draw a pixmap on the left of a label, but why is this done using XCopyPlane(..., 1L) and not XCopyArea() instead?

Because the original Xaw widget set is an antique, designed when even televisions were only black and white. OK, that's a joke, but the sentiment is correct. It wasn't designed for color pixmaps.

I'd very much like to have color pixmaps in labels and buttons, which I suppose is only possible with an XCopyArea() call. Please correct me if I'm wrong.

Yes, but that's not the only place it will burn you. Xaw is so yesterday. There has been some work at updating it (check for Xaw3d, XawPlus, or XawXpm), but I suspect you'd be better off moving into the 1990s with something like gtk or Qt. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Problems with xlib + ipc
chinlu chinawa wrote: Hi, hey, it worked! Thanks very much. I didn't think of opening the display again, stupid me. I understand you meant to add a StructureNotifyMask, as well as the pertinent handlers on the event queue, when you said to let my parent process handle XDestroy. I say this because I've done it: filled my xserver structure again in the child process (display, screen, fontsets, etc.), but when I try to map my windows again, a BadWindow (X_DestroyWindow) error comes up, which doesn't happen if I just destroy them and exit.

Well, that may have been a red herring on my part. All I was trying to say is that the owning process will get the events, even if another process destroys the window. If the owning process doesn't need to do any special handling when the window is destroyed, you don't have to add anything now. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Problems with xlib + ipc
chinlu chinawa wrote: I'm using linked lists within a shared memory block to store (among other things) xserver information such as the display, window ids, etc. ... This hasn't been problematic until now, when I've been about to destroy a set of windows. This is something I can do within the process that created them, but not from any child one. Although my child processes can see the memory region where the windows' ids are located, as soon as I try, a segfault comes up. I've been trying to somehow tell XCreateWindow to store window information within my memory segment, but it hasn't been possible.

That's not possible, although it shouldn't be necessary in this case. Window IDs are global to the server. As long as your child process has done an XOpenDisplay on the same display as the parent, you should be able to call XDestroyWindow with a valid Window id. After all, that's how xkill works. The parent process will have to be able to handle the messages that result from the XDestroyWindow. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: where is the XFree86 xserver opening the framebuffer device?
romel dutta wrote: Hi everyone, I want some serious help. I am trying to use the XFree86 xserver with a framebuffer, but I am unable to find the place where it opens the fb device. I want to put some of my code there in order to grab the frames. What I found from /var/log/XFree86.0.log is that it goes inside the /build/programs/Xserver/hw/xfree86 directory, but WHERE it opens the fb device is still unknown to me. I suspect it's in fbdevhw.c in build/programs/Xserver/hw/xfree86/fbdevhw, but I'm not sure. Can anyone please help me find the hook?

Which driver? Many of the drivers don't use /dev/fb at all. They access the hardware directly. You can grab frames without writing any code at all. That's what the xwd tool is for. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: xfree86 4.5 cross compilation
zhanglei wrote: Make sure the cc command is in your $PATH directories!

Marco Longhin wrote: I'm cross-compiling XFree86 (4.5.0) with TinyX, but I haven't had success. Has anybody had success with it? Could you tell me how to do it? Do you know of any documentation on how to use it? This is my work...

making imake with BOOTSTRAPCFLAGS= and CROSSCOMPILEFLAGS=-DCROSSCOMPILEDIR=/home/lon/svil/main/sigma/smp86xx_toolchain/trunk/src/build_mipsel/staging_dir/bin in config/imake
cc -o ccimake -DCROSSCOMPILEDIR=\"/home/lon/svil/main/sigma/smp86xx_toolchain/trunk/src/build_mipsel/staging_dir/bin\" -O -I../../include -I../../imports/x11/include/X11 ccimake.c
if [ -n /home/lon/svil/main/sigma/smp86xx_toolchain/trunk/src/build_mipsel/staging_dir/bin ] ; then \
    /home/lon/svil/main/sigma/smp86xx_toolchain/trunk/src/build_mipsel/staging_dir/bin/cc -E `./ccimake` \
    -DCROSSCOMPILE_CPP imakemdep.h > imakemdep_cpp.h; \
else touch imakemdep_cpp.h; fi
cc: No such file or directory

And, just in case it isn't clear, the cc in your path must be the NATIVE compiler, not the cross-compiler. The build process has to build a couple of the tools that will be used later on. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: [PATCH] trident_video.c
Jeff Chua wrote: The following patch is needed in order to compile trident_video.c with gcc-2.95.3 ...

--- xfree86/xc/programs/Xserver/hw/xfree86/drivers/trident/trident_video.c.org 2005-12-09 12:05:15 +0800
+++ xfree86/xc/programs/Xserver/hw/xfree86/drivers/trident/trident_video.c 2005-12-09 12:05:43 +0800
@@ -666,10 +666,11 @@
     OUTW(vgaIOBase + 4, ((width << 1) & 0xff00) | 0x91);
     OUTW(vgaIOBase + 4, ((offset) & 0xff) << 8 | 0x92);
     OUTW(vgaIOBase + 4, ((offset) & 0xff00) | 0x93);
-    if (pTrident->Chipset >= CYBER9397)
+    if (pTrident->Chipset >= CYBER9397) {
         OUTW(vgaIOBase + 4, ((offset) & 0x0f) << 8 | 0x94);
-    else
+    } else {
         OUTW(vgaIOBase + 4, ((offset) & 0x07) << 8 | 0x94);
+    }

Why? If the OUTW macro is generating multiple statements, then the OUTW macro should be fixed. Otherwise, this is just a nasty bug waiting to happen. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: how to attach windows' ids?
chinlu chinawa wrote: Yes, sorry, I'm talking about the window manager. Why? And why don't you think an xml configuration file is ok? I'm not a proper programmer or anything like that, but I've used xml together with a dtd, so I don't have to spend time writing code for parsing and validating a config file. That's the kind of thing xml is for, isn't it? (apart from the documentation process itself)

Oh, don't let Josip fool you. It's a religious issue. Some people think XML is overkill in many of the areas where it is being applied, but if you already have a good set of tools for manipulating XML files, there's nothing wrong with XML as a configuration manager. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Offscreenmemory copy
ayachi gherissi wrote: Hi (newbie question). I have created two areas with xf86AllocateLinearOffscreenArea.

1) How can I copy the contents of an area to the screen (if I have to use SubsequentScreenToScreenCopy, what src coordinates x, y do I use)?

ScreenToScreenCopy assumes that the source and destination have the same pitch -- the distance between scanlines. When you allocate a linear offscreen area, you're usually allocating a space that is not as wide as the screen, and linear means that the scanlines are packed as tightly as possible. So, ScreenToScreenCopy won't do the job. However, ScreenToScreenCopy is just setting up the graphics chip to do a blit. You can certainly do the same kind of blit yourself, plus whatever setup you need to define the pitch of the source.

2) How can I copy one area to another without CPU intervention?

Most modern graphics chips can blit between arbitrary offscreen areas. You just have to describe the areas (width, height, pitch). The means of doing that depends on the chip.

Anybody know why there was a set of messages from two weeks ago (including this one) that is just showing up today? -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: portrait mode how to
krish ritik wrote: I am working on X11R6.8.2, just trying to explore whether there is any possibility to put the screen in portrait mode. I know about the Nvidia driver, but I want to try it myself. Any hints on how to put the screen in 768x1024 mode (take the example of an Intel card)? I don't need icon rotation and all, I just want to set the mode as 768x1024.

For most of the Intel graphics chips, as far as I know, the driver sets the mode through the video BIOS. As such, if the mode is not present in the BIOS, it simply cannot be selected. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: How to turn off the hardware mouse rendering?
Andrew C Aitchison wrote: On Mon, 28 Nov 2005, Daniel wrote: I want to snap a desktop including the mouse pointer. However, the common tools and functions cannot capture a window image including the mouse. I think it's because the mouse is not drawn by the graphics card into the color buffer. So how do I stop the hardware acceleration? Or is there some special way to do this job?

The tools that take a desktop snapshot intentionally remove the mouse pointer from the screen, because in the vast majority of cases, you don't want it in the snapshot. If you want the pointer in the snap, you will have to add it by hand. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: How to get the topmost window through xlib programming
Karthik Ramamoorthy wrote: Is there any option to get the topmost window or the currently active window through any xlib API? Because I think through XQueryTree we can only get all the live windows (windows that are not closed). So is there any means to get the topmost window? Also, is it possible to check whether the window ID that I have is the topmost window or not?

XGetInputFocus will tell you the window that currently holds the keyboard focus. That is usually what you want when you ask for the topmost window. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Multiple Xv overlays cause blue flashing
Smoof . wrote: My plan was to do all the rendering with the same client, and I know that my overlay adaptor only has a single port for the YUV420 format that I am using. Can someone say if the following would be possible: Suppose I create a single overlay that is the size of the entire screen. Then I could track the absolute position and visibility of the individual widget windows I want to send the video streams to. I would then tile the images into the correct spots in the overlay to match the window positions. Now, if there were some way of using the alpha channel to cause only certain portions of the overlay to be seen, then that might do the trick. Or could I just manually fill the areas I want to expose with the color key? Please keep in mind that I really don't know what I'm talking about and have no idea if this is possible, but it sounds like the only way to prevent the flashing is to use a single overlay and somehow figure out how to share it among the widget windows.

Does your graphics card support OpenGL? One practical alternative is to render the movies into textures. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: TCP/IP interface code in XFree86
Kaliraj Kalaichelvan - CTD, Chennai wrote: I have downloaded XFree86 Version 4.3.0. I would like to know which part of the XFree86 implementation (code) deals with the TCP/IP connections, i.e. how the X protocol is sent via TCP/IP. I would like to know the C file/function (not at the Xlib level but at the X protocol level) that takes care of sending these X messages via TCP/IP. Hope I am clear with my doubt.

Surely it would be more efficient to use find and grep to search the code for socket calls, rather than wait for a response from an Internet mailing list... The transport code is in xc/lib/xtrans. The TCP socket code is in Xtranssock.c.

This message and any attachment(s) contained here are information that is confidential, proprietary to HCL Technologies and its customers. Contents may be privileged or otherwise protected by law. The information is solely intended for the individual or the entity it is addressed to.

Anyone remember the Lily Tomlin character Ernestine, the AT&T operator? "Two ringy-dingies... Oh, yes, snort, good morning. Have I reached the party to whom I am speaking?" -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Wire protocol for X
Eddy Hahn wrote: Hi, I'm in the process of designing a system that will translate the wire-level protocol from X Windows to RDP, so you can hook up a PC or a dumb (brick) terminal using RDP to a Linux/Unix system. For that, I need the wire protocol. Can someone help me find it somewhere?

There are a number of competent PC-based X servers today, and many of the Windows-based terminals do X as well as RDP. What you're asking is hard; I would think there are a number of much more economical solutions. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Fwd: Hooking
ramalingareddy bommalapura wrote: Can anybody please suggest how hooking can be done in the xserver functions? The XServer needs to invoke my function before the call is passed to the XServer's original function. I have found that this is the procedure in Windows for doing hooking:

X is entirely different from Windows. The same internal concepts simply do not apply. What are you REALLY trying to do? If you tell us your real task, instead of how you think you need to solve it, perhaps one of us can offer a real solution. There are two aspects to X: client and server. GUI programs (X clients) call routines in Xlib to do drawing. These are things like XDrawArc and so on. Xlib exists in the process of the GUI program. It converts the calls into messages that are transmitted over a socket to the X server, which does the actual drawing. The X server is a completely separate process. The messages are parsed, and eventually handed down to some driver that does the drawing. It is possible to insert a tee into the socket stream and split off a second copy for yourself, but that means parsing the X protocol, not handling APIs. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: tdfx and DDC2
Michael wrote: I don't see why they should be enabled - they're PC-specific and even with x86 emulation they would be pretty much useless since you're not too likely to encounter a graphics board with PC firmware in a Mac ( or other PowerPC boxes ) Wrong. No hardware manufacturer in their right mind would build a Mac-only PCI graphics board, with the possible exception of Apple. They're going to build a generic graphics board that works in a PC and by the way also works in a Mac. Such a board will have a video BIOS. I suppose you might find a board with a Mac-only SKU that does not stuff the BIOS chip. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Some doubts in Xlib
Puneet Goel wrote: 1) While debugging I am seeing that the dpy structure pointer is being passed by some calls. dpy internally contains 'buffer', 'bufptr', etc. While digging through the location pointed to by 'buffer' and 'bufptr', the data is shown as \003\005 or \002, etc. What is the meaning of these values? I was expecting buffer or bufptr to be some real data buffers being passed to the X server, but found something else. Any hints as to what these are? What makes you believe that the buffer does not represent the data being passed to the X server? What were you expecting to see? Many of the packets sent to the server are mouse position updates, which are not going to be neatly human-readable. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Query mouse events from other Windows/Screens
George Liu wrote: Hi Andrew, Thanks for your response. This was exactly my thought in method 1. I assume Window root = RootWindow(d, s); is the invisible window covering the whole screen. Is that right? Thanks. No. That's the visible window that always exists at the BACK of the window order, underneath everything else. What Andrew is saying is that you have to create a NEW window, desktop-sized, and bring it to the front. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: can we change desktop size
Karthik Ramamoorthy wrote: I used an API (XF86VidSwitchMode) to change the resolution of the monitor, but it's not changing the size of the desktop. So now I am in search of some API to change the desktop size. If anybody has some idea or knows an API to change the desktop size, please mail me. The standard X protocol simply does not allow this. The desktop size is fixed at startup time and must remain a constant. The xrandr extension in XFree86 adds this ability. That's the only way, without stopping and restarting the server. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
[XFree86] xlogin linux-PAM
I'm not sure that this is really the right place to ask this question, but if it isn't maybe someone can direct me. I have a Mandrake distribution which comes with: XFree86-4.3-29mdk.rpm This rpm includes xdm. As supplied, xdm does not use PAM, or at least it does not use the configuration file /etc/pam.d/xdm. And I cannot see what other configuration file it does use. Does anyone there know about this stuff? Is this behaviour standard, or is it some sort of Mandrake-specific thing? Does anyone know how the xdm login (via the xlogin widget?) is configured? I would like to tinker with the login as can be done via PAM. regards, Tim Johnston ___ XFree86 mailing list XFree86@XFree86.Org http://XFree86.Org/mailman/listinfo/xfree86
Re: Fatal Error --? Video driver?
[EMAIL PROTECTED] wrote: Hi, I am trying to get XFree86 running on this configuration but no success so far. It looks like something to do with the on-board video. The motherboard is ABIT VA-20 (www.abit.com). Integrated on-board Unichrome Pro graphics with 2D/3D/video controller. 64 MB of DDR RAM allocated for video. Total RAM 1 GB. The VIA Unichrome chip does not have a driver built into XFree86. VIA distributes one, but I don't know whether it plays with XFree86 4.5.0. Google for xfree86 unichrome for lots of hints. You should be able to run the vesa driver. You won't get acceleration, but it should work. -- Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: 4.4.99.902: s3 fails some of xtests
Németh Márton wrote: I've tested the two settings using xtest 4.0.10 at color depth 16. Here are my results: pXAA->ScreenToScreenCopyFlags = ROP_NEEDS_SOURCE; => the XCopyArea tests passed. pXAA->ScreenToScreenCopyFlags = NO_TRANSPARENCY; => the XCopyArea tests fail the following tests: - GXclear (6) - GXinvert (16) - GXset (21) Is there any need to set ROP_NEEDS_SOURCE on the S3 Trio64V+ and not on the other S3 chips, or will ROP_NEEDS_SOURCE work on all S3 cards? I don't know about ALL, but the Trio 32/64 family share the same graphics engine. There is certainly no danger in leaving that flag set for everyone. I believe the fallback is to use a solid fill, and that's probably the right answer in every case for those ROPs. The ViRGE and the Savage also have this problem, but as you note, they are in different drivers. (What does ROP mean, anyway?) Raster operation. That's the Windows term for a function that describes how to merge two pixel streams. (Actually, for all I know, the term might pre-date Windows. X uses the more mnemonic word function.) -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: 4.4.99.902: s3 fails some of xtests
Németh Márton wrote: Hi! I've tested 4.5.0RC2 with xtest 4.0.10; see http://bugs.xfree86.org/show_bug.cgi?id=1557 for details. I've attached a test C program which always produces bad rendering using acceleration, and never if XaaNoScreenToScreenCopy is set (= without acceleration). The results are also attached. Has anyone seen such behaviour? Does anyone have a programmer's manual for the 86c764/765 [Trio32/64/64V+] chip? I have a Trio64V+ manual. The graphics engine is basically the same as the 8514/A. I'll take a look at the source and see if anything looks funny. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: 4.4.99.902: s3 fails some of xtests
Németh Márton wrote: Hi! I've tested 4.5.0RC2 with xtest 4.0.10; see http://bugs.xfree86.org/show_bug.cgi?id=1557 for details. I've attached a test C program which always produces bad rendering using acceleration, and never if XaaNoScreenToScreenCopy is set (= without acceleration). The results are also attached. Has anyone seen such behaviour? Does anyone have a programmer's manual for the 86c764/765 [Trio32/64/64V+] chip? Is it really only GXclear, GXinvert, and GXset that fail? If so, the diagnosis is pretty easy. For those three ROPs, it's not really a screen-to-screen blit at all: the source surface is not used. Most S3 chips (Savage included) fail if you attempt to use a two-operand bitblt command when the source is not involved. That's why there is an XAA flag specifically for this case. The solution is to add pXAA->ScreenToScreenCopyFlags = ROP_NEEDS_SOURCE; to the S3AccelInitXxx function at the bottom of the file. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: help about server flags
Manikandan Thangavelu wrote: Hi All, Setting the server flags DontVTSwitch, DontZap, etc. to true in the XF86Config file and then restarting X stops the switching between the terminals using Ctrl+Alt+F(1-6). What exactly is happening in the background during this? Does this mean Ctrl+Alt+F(1-6) is handled by the X server? Is there any programmatic way of stopping this switching between terminals rather than changing the XF86Config file directly? It is a cooperative process. Ordinarily, the kernel console driver handles VT switching by itself. When XFree86 runs, it tells the kernel that it wants to handle all VT switching when it is the current process. Once a handler has been registered, the kernel is no longer involved. So, if XFree86 never releases control, no other VT can get in. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: External CRT works on 1024x768 but LCD not. Please Help!!!
Nqnsome wrote: Christian Zietz wrote: I still suppose that your BIOS only recognizes the LCD as a 800x600 one, while the CRT is recognized correctly as being able to display 1024x768. The Windows XP driver doesn't care about the BIOS but bypasses it. X on the other hand needs the BIOS to set the resolution because the information on how to do that without the BIOS is not publicly available. Sorry, but what do you mean with how to do that? What kind of information is (not!) in the BIOS that tells X how to change the resolution? A function? A memory address? Something else? Changing the resolution on a video card requires writing a number of timing and configuration registers directly to the chip hardware. The definitions, usage, interactions, and philosophy of those registers are often quite complicated, and vary wildly from manufacturer to manufacturer. The video BIOS knows how to write those registers when you make a VBE INT 10 call, because the engineers that designed the chip wrote that BIOS. The Windows driver knows how to write those registers, because a team at the chip manufacturer wrote that driver with the assistance of the design engineers. However, chip manufacturers often do not write XFree86 drivers, because it represents approximately 0% of their annual sales. Further, more and more chip manufacturers consider their chip specs to be proprietary, so they are not released without a license agreement and a set of stiff legal handcuffs. Without the chip specs, the only way a non-Windows driver can set the video mode is by asking the BIOS pretty please. If the BIOS has been crippled by not supporting certain modes, then X will be crippled in the same way. I was astounded to learn that many laptops with Intel graphics chips ship with 1400x1050 LCD panels, and a video BIOS that does not support 1400x1050 mode. That's just criminally negligent. There is nothing we can do, short of reverse engineering, which has its own set of legal issues. 
I am asking this because, if I have more information about what is possibly broken/missing in the BIOS, I can try to contact the manufacturer and ask for a fix. Without specific information it is difficult to get a useful answer from the manufacturer (COMPAL). You will find no help at COMPAL. All they do is repackage the BIOS and drivers from Intel. It is quite likely they don't even have a graphics driver writer on staff. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: x86 emulator bug
Charles Dobson wrote: I am not sure if I should post this here or on Bugzilla. While trying to get a Silicon Motion SM722 video controller working with Solaris, I have discovered a problem with the emulation of the SHLD and SHRD (double-precision shift) instructions in the x86 emulator. According to the Intel Pentium User Guide Vol 3, these instructions can shift up to 31 bits with both 16- and 32-bit operands. The emulator code will only work with shifts of up to 15 bits for 16-bit operands. The pseudo-code for those instructions in that same document also says that, when the shift count is greater than or equal to the operand size, the contents of the destination register and the flags are undefined. Thus, there is technically nothing wrong with the emulator code as-is. Your patch is right, but the existing code is also right. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Can I ask about developping *with* XFree or just *on* XFree
Adilson Oliveira wrote: Hello. I'm developing applications using Xlib functions. Can I ask my questions about it here, or is this list just about developing XFree86 itself? You can ask them, and they'll probably get answered quickly and accurately, but we'll have to make fun of you when we do. As long as you can deal with that, ask away. ;) -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: How does XSetInputFocus() generate errors?
Joel L. Breazeale wrote: On my XSetInputFocus() man page it says, XSetInputFocus() can generate BadMatch, BadValue, and BadWindow errors. In looking at the source code for XFree86 4.2.1, I see XSetInputFocus() in lib/X11/SetIFocus.c always returns 1. ... So... When I call XSetInputFocus() and get a return value of 1 what does this mean? If I may expand a bit on what Carsten said, you should remember that X was designed to work over a network, where the application (the X client) runs on a separate computer from the machine with the frame buffer (the X server). We sometimes forget that in the XFree86 world. In the network environment, there are potentially lengthy propagation delays between the time a request is issued and the time it is actually executed. For performance reasons, Xlib does not wait for a request to execute before returning. So, when XSetInputFocus() returns 1, it means your request has been successfully submitted. It does NOT mean your request has been completed. That happens later, asynchronously. When it is finally executed by the server, THAT'S when BadMatch, BadValue, and BadWindow errors can be returned to your message loop. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Documentation about drivers
[EMAIL PROTECTED] wrote: Is there any way to compile drivers without compiling the whole of XFree86? No. You must compile the whole thing at least once. The build process creates a vast quantity of symbolic links, libraries, and intermediate files for your particular environment, and they must be present for the drivers to build. However, you only have to do it once. Once the tree has been created, you can build your driver by itself. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: How can I get rid of the cursor for the Xserver?
Barry Scott wrote: I need to get rid of the cursor from the Xserver. There are a number of X client programs on screen and I cannot modify all of them to hide the cursor. What I want is a way to globally hide the cursor. If you have a number of programs on the screen, why would you want to hide the cursor? What's the use case? I can easily understand wanting to hide the cursor within a single application. In a kiosk application, for example, or a VCR or DVD app, it's easy to see the justification. However, I do NOT understand the motivation for hiding the cursor in a general desktop situation, thereby making the system unusable. Unlike Windows, where every program is required to operate without a mouse, X apps tend to rely heavily on the pointer. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Added Pseudocolor Visuals for XFree86?
Mark Vojkovich wrote: ...Some hardware support 8 bit PseudoColor overlays now but I expect this to go the way of the dodo. My impression is that a future Microsoft operating system will not support 8 bit PseudoColor modes nor will it support overlays so eventually these will disappear from the hardware, leaving emulation as the only solution. Exactly correct. Windows XP still includes support for 8-bit pseudo-color, if you know how to hack the registry, but it is not exposed in the UI, and their documentation implies that it is not supported. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Added Pseudocolor Visuals for XFree86?
Bussoletti, John E wrote: At Boeing we have a number of graphics applications that have been developed in-house, originally for various SGI platforms. These applications are used for engineering visualization. They work well on the native hardware and even display well across the network using third-party applications under Windows like Hummingbird's ExCeed 3D. However, under Linux, they fail to work properly, either natively or via remote display with the original SGI hardware acting as server, due to omissions in the available Pseudocolor Visuals. Examination of the output of xdpyinfo on the SGI machines shows that the SGI X drivers support Pseudocolor visuals at both 8 bit planes and 12 bit planes. Similar output under Linux shows support for Pseudocolor Visuals at only 8 bit planes. These applications were built to take advantage of the 12 bit plane Pseudocolor Visual under the SGI X drivers. To allow use of these graphics applications within a Linux environment, we're contemplating a port of the applications to Directcolor Visuals. But prior to initiating such an activity, I've been asked to ask whether new developments or releases of the XFree86 X drivers might be in the pipeline for future release that might offer a wider variety of Pseudocolor Visuals. Hence this note. Is there any support for 12 bit plane Pseudocolor Visuals within at least one video card and the XFree86 drivers? Will there be support for such features in the future? If so, is there an anticipated release date? The problem is not XFree86, the problem is technology. I'm not aware of ANY commodity graphics chips that support a 12-bit palettized video display mode. That's mostly because Windows doesn't handle it, and if Windows doesn't handle it, there is no business case for developing it in hardware. Assuming there was such a chip, there are no architectural barriers to supporting it in XFree86.
-- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Memory leaks when lots of graphical operations? [Qt/X11 3.1.1]
Jay Cotton wrote: What about: The X-Resource Extension was developed by Mark Vojkovich for the XFree86 project to help debug reports of excessive X server memory usage by reporting how many resources each client had asked the X server to allocate on its behalf. The original poster's complaint was that the CLIENT was leaking memory, not the SERVER. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: UseFBDev makes X-display in several wide bars, each 110 degree rotated, unusable
[EMAIL PROTECTED] wrote: (**) RADEON(0): *Mode 1400x1050: 108.0 MHz (scaled from 0.0 MHz), 64.6 kHz, 60.8 Hz (II) RADEON(0): Modeline 1400x1050 108.00 1400 34208 34320 1672 1050 1050 1053 1063 (**) RADEON(0): *Mode 1280x1024: 108.0 MHz (scaled from 0.0 MHz), 64.6 kHz, 60.8 Hz (II) RADEON(0): Modeline 1280x1024 108.00 1280 34208 34320 1672 1024 1050 1053 1063 (**) RADEON(0): *Mode 640x480: 108.0 MHz (scaled from 0.0 MHz), 64.6 kHz, 60.8 Hz (II) RADEON(0): Modeline 640x480 108.00 640 34208 34320 1672 480 1050 1053 1063 (**) RADEON(0): *Mode 800x600: 108.0 MHz (scaled from 0.0 MHz), 64.6 kHz, 60.8 Hz (II) RADEON(0): Modeline 800x600 108.00 800 34208 34320 1672 600 1050 1053 1063 (**) RADEON(0): *Mode 1024x768: 108.0 MHz (scaled from 0.0 MHz), 64.6 kHz, 60.8 Hz (II) RADEON(0): Modeline 1024x768 108.00 1024 34208 34320 1672 768 1050 1053 1063 These aren't right. Every mode in the list (including the 7 default modes I deleted) is shown as having a pixel clock of 108 MHz, horiz of 64.6 kHz, and vert of 60.8 Hz. The sync and blank numbers in the modeline are bonkers, too. Is this a general issue in the 4.4.99 release, or only for this gentleman? -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Driver (i810) ignores modeline directives in Config File
Nqnsome wrote: No replies ... No solutions ... Too sad ... :'( I'm not sure what you're complaining about. The thread you replied to, which you included in your quoted text, includes the solution to your problem. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. Nqnsome wrote: Hi, I have a Compal CY27 laptop with the 82852/855GM Integrated Graphics Device (rev 02) (lspci output). Even though the BIOS allows me to change the memory allocated to video (32M right now), XFree86 only sees two built-in modes: (**) I810(0): *Built-in mode 800x600 (**) I810(0): *Built-in mode 640x480 I already tried to replace the 800x600 mode with 1024x768 using Poirier's 855resolution, but it does not work (I lose the 800x600 mode and cannot get 1024x768, ending up with only 640x480). Does anyone know why this is happening? Windows XP and a commercial X server (Xi Graphics) can both reach 1024x768. Regards, Sergio Peter Gale wrote: On Fri, 2004-08-06 at 10:10, Erwann Thoraval wrote: Hello, I had a problem with my laptop (a DELL 510m with a 1400x1050 screen and the i855 chip). It seems that the i810 XFree86 driver can *only* use the video resolutions which are listed in the BIOS. I also have a Medion laptop with 1280x800... and Alain Poirier's 855resolution patch fixed it perfectly... Peter Gale
Re: Colors
Andrzej Popielewicz wrote: Hi, I have ported XFree86 4.4 to an old Unix-like OS, Coherent 4.2.10. Rock stable, but I have some problems with colors, for an S3 ViRGE with 1MB. It is set to 8-bit DefaultDepth in the config file. But clients see only a 4-bit root_depth, and LoadPalette loads only 16 colors. Any suggestions? I would expect 256 colors. Screenshots at http://www.staff.amu.edu.pl/~apopiele/embed.html , in the bottom part of the page. There are way more than 16 colors in those screen shots. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: Emulation of Alt+Numpad+Digits behavior
Jörg Henne wrote: That sounds really interesting. I've just read a few things about X and input methods. What irritates me about this solution is that it seems like it would need special support for each and every application. However, I'm looking for a solution which works with all or most of the existing applications. Is this goal even achievable? The Alt-NumPad thing works in MS-DOS and Windows because it's implemented in the BIOS, and because the keyboard character-set mappings are universal and strictly controlled. Alt-0227 maps to ã because that's the way it is in the standard code page. X, on the other hand, is not supposed to be defining policy. Let's assume your solution was implemented. What would you expect to happen when you do Alt-227? What symbol would it be, and in what character set? Is it the same with a German keyboard layout? Can we say unconditionally that no X application currently uses Alt-Numpad combinations? Or maybe I'm just scaring up issues where none exist. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Savage patch
It has been pointed out to me that we need a small patch to the Savage driver. In the two cases where the driver turns on rotation, where psav->rotate is set non-zero, the driver should call xf86DisableRandR(). Xrandr does not play well with the shadowFB-based rotation, and the results are not pretty. In fact, they are not usable. One of the unwashed masses has made this change to his 4.3.0 source base and rotation is now working. I do not, at the present time, have access to a machine where I can easily generate a context diff against the tips of the 4.4 tree. Thus, I am hoping someone can help me. The two spots are roughly at line 835 in savage_driver.c. The code looks like this in the older code, but I don't know if the surrounding code has changed. The added lines are marked with +.

  if ((s = xf86GetOptValString(psav->Options, OPTION_ROTATE))) {
      if (!xf86NameCmp(s, "CW")) {
          /* accel is disabled below for shadowFB */
          psav->shadowFB = TRUE;
          psav->rotate = 1;
+         xf86DisableRandR();
          xf86DrvMsg(pScrn->scrnIndex, X_CONFIG,
                     "Rotating screen clockwise - acceleration disabled\n");
      } else if (!xf86NameCmp(s, "CCW")) {
          psav->shadowFB = TRUE;
          psav->rotate = -1;
          xf86DrvMsg(pScrn->scrnIndex, X_CONFIG,
                     "Rotating screen counter clockwise - acceleration disabled\n");
+         xf86DisableRandR();
      } else {

The messages could be changed to indicate xrandr disabling as well, but that's of secondary importance. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: rotate functionality in i8xx driver?
Alex Deucher wrote: Check the i810 driver options. There may be an option for rotation; I'm not too familiar with the i810 driver. If not, adding rotation isn't too hard. Take a look at another driver that implements it in SW (shadowfb), like savage for instance. Then port the required changes to the i810 driver. Yes, I was pleasantly surprised at how easy it was to add this. It disables acceleration, of course, but it still runs quite respectably on a hot CPU. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
[XFree86] X crashes when playing movie
Hi, I have a problem: when I try to play a movie using mplayer or kaboodle X crashes. Recently the debian x* packages have been upgraded. The problem didn't occur before the upgrade. I use debian unstable, xfree86 4.3.0, kernel 2.6.6. My video card is an ATI Radeon. My XFree86 log file and XF86Config file are attached. I hope anyone can help me. Regards, Tim This is a pre-release version of XFree86, and is not supported in any way. Bugs may be reported to [EMAIL PROTECTED] and patches submitted to [EMAIL PROTECTED] Before reporting bugs in pre-release versions, please check the latest version in the XFree86 CVS repository (http://www.XFree86.Org/cvs). XFree86 Version 4.3.0.1 (Debian 4.3.0.dfsg.1-2 20040525201850 [EMAIL PROTECTED]) Release Date: 15 August 2003 X Protocol Version 11, Revision 0, Release 6.6 Build Operating System: Linux 2.4.23 i686 [ELF] Build Date: 25 May 2004 Before reporting problems, check http://www.XFree86.Org/ to make sure that you have the latest version. Module Loader present OS Kernel: Linux version 2.6.5 ([EMAIL PROTECTED]) (gcc version 3.3.3 (Debian 20040401)) #1 Wed Apr 7 14:24:04 CEST 2004 Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: /var/log/XFree86.0.log, Time: Fri May 28 21:06:46 2004 (==) Using config file: /etc/X11/XF86Config-4 (==) ServerLayout Default Layout (**) |--Screen Default Screen (0) (**) | |--Monitor Generic Monitor (**) | |--Device Generic Video Card (**) |--Input Device Generic Keyboard (**) Option XkbRules xfree86 (**) XKB: rules: xfree86 (**) Option XkbModel pc104 (**) XKB: model: pc104 (**) Option XkbLayout us (**) XKB: layout: us (==) Keyboard: CustomKeycode disabled (**) |--Input Device Configured Mouse (**) |--Input Device Generic Mouse (WW) The directory /usr/lib/X11/fonts/cyrillic does not exist. Entry deleted from font path. 
(**) FontPath set to unix/:7100,/usr/lib/X11/fonts/misc,/usr/lib/X11/fonts/100dpi/:unscaled,/usr/lib/X11/fonts/75dpi/:unscaled,/usr/lib/X11/fonts/Type1,/usr/lib/X11/fonts/Speedo,/usr/lib/X11/fonts/100dpi,/usr/lib/X11/fonts/75dpi (==) RgbPath set to /usr/X11R6/lib/X11/rgb (==) ModulePath set to /usr/X11R6/lib/modules (--) using VT number 7 (WW) Open APM failed (/dev/apm_bios) (No such file or directory) (II) Module ABI versions: XFree86 ANSI C Emulation: 0.2 XFree86 Video Driver: 0.6 XFree86 XInput driver : 0.4 XFree86 Server Extension : 0.2 XFree86 Font Renderer : 0.4 (II) Loader running on linux (II) LoadModule: bitmap (II) Loading /usr/X11R6/lib/modules/fonts/libbitmap.a (II) Module bitmap: vendor=The XFree86 Project compiled for 4.3.0.1, module version = 1.0.0 Module class: XFree86 Font Renderer ABI class: XFree86 Font Renderer, version 0.4 (II) Loading font Bitmap (II) LoadModule: pcidata (II) Loading /usr/X11R6/lib/modules/libpcidata.a (II) Module pcidata: vendor=The XFree86 Project compiled for 4.3.0.1, module version = 1.0.0 ABI class: XFree86 Video Driver, version 0.6 (II) PCI: Probing config type using method 1 (II) PCI: Config type is 1 (II) PCI: stages = 0x03, oldVal1 = 0x8060, mode1Res1 = 0x8000 (II) PCI: PCI scan (all values are in hex) (II) PCI: 00:00:0: chip 1106,0691 card , rev c4 class 06,00,00 hdr 00 (II) PCI: 00:01:0: chip 1106,8598 card , rev 00 class 06,04,00 hdr 01 (II) PCI: 00:07:0: chip 1106,0686 card 1106, rev 1b class 06,01,00 hdr 80 (II) PCI: 00:07:1: chip 1106,0571 card , rev 06 class 01,01,8a hdr 00 (II) PCI: 00:07:2: chip 1106,3038 card 0925,1234 rev 0e class 0c,03,00 hdr 00 (II) PCI: 00:07:3: chip 1106,3038 card 0925,1234 rev 0e class 0c,03,00 hdr 00 (II) PCI: 00:07:4: chip 1106,3057 card , rev 20 class 06,00,00 hdr 00 (II) PCI: 00:0e:0: chip 1011,0019 card , rev 41 class 02,00,00 hdr 00 (II) PCI: 00:0f:0: chip 1274,5880 card 1274,2000 rev 02 class 04,01,00 hdr 00 (II) PCI: 00:11:0: chip 10ec,8029 card 10ec,8029 rev 00 class 02,00,00 
hdr 00 (II) PCI: 01:00:0: chip 1002,474d card 1002,0008 rev 65 class 03,00,00 hdr 00 (II) PCI: End of PCI scan (II) Host-to-PCI bridge: (II) Bus 0: bridge is at (0:0:0), (0,0,1), BCTRL: 0x0008 (VGA_EN is set) (II) Bus 0 I/O range: [0] -1 0 0x - 0x (0x1) IX[B] (II) Bus 0 non-prefetchable memory range: [0] -1 0 0x - 0x (0x0) MX[B] (II) Bus 0 prefetchable memory range: [0] -1 0 0x - 0x (0x0) MX[B] (II) PCI-to-PCI bridge: (II) Bus 1: bridge is at (0:1:0), (0,1,1), BCTRL: 0x000c (VGA_EN is set) (II) Bus 1 I/O range: [0] -1 0 0x9000 - 0x90ff (0x100) IX[B] [1] -1 0 0x9400 - 0x94ff (0x100) IX[B] [2] -1 0 0x9800 - 0x98ff (0x100) IX[B] [3] -1 0 0x9c00 - 0x9cff (0x100) IX[B] (II) Bus 1 non-prefetchable memory range: [0] -1 0 0xd400 - 0xd7ff (0x400) MX[B] (II) PCI-to-ISA bridge: (II) Bus -1: bridge is at (0:7:0), (0,-1,-1
Re: Where do I start?
Marc Aurele La France wrote: There have been several attempts in the past to draft a TODO list, but that's usually turned out to be a project in itself.

To Do List
---
1. Create to-do list.
2. Solicit feedback from developers.
3. Wouldn't it be neat to have an on-line to-do list manager?
4. Create on-line to-do list manager.
5. I suppose I need to pick a platform. I'd like to get better with PHP.
6. Learn PHP to build on-line to-do list manager.
7. I need to get smarter about Apache 2.0.
8. Install and learn Apache 2.0.
9. It would be cool to offer this as a web service.
10. Learn XML packages for PHP.
11. Golly, I need a to-do list to keep track of all of this.

That's the way most of my spare time projects go. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: offtopic help request (?)
Iñaki Etxebarria wrote: Hi all, I am working on a system-level debugger (pICE) and I am giving universal output support to it. I use the linear framebuffer by direct writes and it works pretty well. How are you determining the address of the frame buffer? The problem is that, for now, calculations for drawing are based on a fixed X, Y, BPP (color depth) resolution, which is a bit annoying... So I'd like to detect those X, Y and BPP values each time the system debugger is fired up (hotkey pressed, breakpoint, or other interrupts triggered). I read something about using CRT regs but I don't really know how. Also, I have seen some MMIO stuff, but no idea... There is simply no generic way to do this. Every graphics card is different. If you are using the VESA BIOS calls to fetch the address of the frame buffer, the same interface can be used to fetch the current mode, but there is no guarantee that the VESA BIOS has the current information. You could try parsing /var/log/XFree86.0.log, but that's only accurate if XFree86 happens to be running and in the current VT. You could try running a process periodically to do the equivalent of xdpyinfo. That gives you resolution and depth, but not the frame buffer address. Plus, that doesn't help if a console VT happens to be visible. One important thing is that -being a system-level debugger- it can't be a call-driver or call-kernel approach; I must do direct hardware (I/O ports or memory) only, since when the debugger is active, the whole system is frozen and I can't rely on anything other than my own code or direct hardware. Now you know why the best kernel debuggers for Windows use a -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: Testing X on 2.6 Mega Hertz FPGA
Andrew E. Mileski wrote: Andrew E. Mileski wrote: Suresh Chandra Mannava wrote: Dear Friends, We are porting XFree86 to a new 32-bit RISC processor. We have a test FPGA system running at 2.6 MHz (BogoMIPS 0.16) on kernel 2.4.7. The proposed system (ASIC) runs at ~300 MHz. Ack! I read GHz there for some reason. Sorry. I don't think it will be very useful with stock XFree86. Well, the question is not whether it would be USEFUL. The question is whether it would be POSSIBLE. They're trying to test their design in an FPGA before committing it to ASIC. It's a proof-of-concept, not a marketing trial. As long as you are patient, I don't see any reason why this shouldn't work. To the original poster: what is your video device, and how is it connected? Have you implemented a kernel frame buffer device so that the XFree86 frame buffer driver will work? -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: [support@ati.com: XFree / Linux Support # 2118096]
Andreas Klemm wrote: Hi XFree86 dev team, how well does ATI support you in comparison to nVidia? I believe the answer is the same. Is it only a manpower problem that the new ATI cards based on R3xx chips are missing 3D support (I noticed that in the 4.4 release notes)? Or is it just because you don't get hardware or developer information from them easily? Yes and yes. Very few of the graphics chip manufacturers release any developer information at all, unless you are a proven OEM with a signed non-disclosure agreement. A few days ago I had to call the ATI hotline (where you have to pay $9.90 for phone support) because of some problems under XP... There I mentioned that I had the feeling that nVidia seems to support the XFree86 team more than ATI, since - drivers for up-to-date cards have been available in earlier releases than 4.4, and - there are no restrictions concerning 3D mode. I told ATI that, for me as a Windows and Unix user, this had nearly been a reason not to choose the ATI card, even though I think that ATI cards have better quality and design (256-bit RAM access, 8 parallel pixel shaders, ...). ATI seems to be interested and wants to know exactly why I think that nVidia seems to support XFree86 better. This statement is way too broad. Your letter shows that one customer service agent seems to be interested. It is NOT accurate to extrapolate that to "ATI seems to be interested." It seems to me that this question might be a door opener for you, just in case there are some difficulties. Extremely doubtful. I suspect you were seeing polite interest from a single customer service representative with no official backing from the company. He will put an appropriate note in your file and perhaps mention it at a staff meeting. Do not hope for a policy shift. For me personally, I'd love to see you get all the information you need from ATI, to make drivers of the same quality and features (2D AND 3D) as the nVidia ones.
The process of getting a full-featured driver for a new chipset is as much about good luck and coincidence as anything else. You have to have someone who (a) is an XFree86 developer, who (b) happens to acquire one of the new boards, and who (c) has the free time to invest in extending a driver to handle the new chip. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: DGA - the future?
James Wright wrote: It doesn't seem all that long ago that DGA V2 was added; why was it ever introduced if it causes grief for the driver writers? What were the original intentions of including the DGA extension in XFree86? Same as DirectDraw in Windows. Some app writers want to own the desktop and draw directly onto the bits of the frame buffer. Both DirectDraw and DGA provide that access, and both of them are a pain for driver writers. It doesn't make them evil. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: lockups
Fred Heitkamp wrote: I've complained about lockups in the past. I have uninstalled xscreensaver 4.14 and my PC has not locked up for two days so far. What worries me is that a poorly written or buggy program can lock up my machine so hard that only a hard reset will cure it. A program can't do that. A driver can, however, by feeding incorrect data to the graphics hardware. xscreensaver is one of the most stressful programs you'll ever find for graphics drivers. It does more high-speed and edge-condition 2D graphics than any other program you're likely to run. A scrolling xterm is a piece of cake compared to some of the wild hacks in xscreensaver. I am using: XFree86 Version 4.3.99.902 (4.4.0 RC 2) Release Date: 18 December 2003 X Protocol Version 11, Revision 0, Release 6.6 Build Operating System: Linux 2.6.1 i686 [ELF] Current Operating System: Linux pc1 2.6.3-rc3 #4 SMP Sun Feb 15 10:11:45 EST 2004 i686 Build Date: 23 January 2004 Interesting, but you left out the most critical piece of information: what graphics chip and driver? -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: Modifications to linuxPci.c to optimize PCI scan
Pier Paolo Glave wrote: I'm trying to optimize an embedded system based on an ARM9 CPU, which is running a cross-compiled version of XFree86 4.3.0 on Linux 2.4.18. I noticed that XFree86, at start-up, makes a complete scan of 256 possible PCI buses, looking for devices, without checking (e.g. from /proc/bus/pci) how many buses are actually present on the system. I thought that on my system, where I have one bus only, this could lead to a high startup time, so I tried to make a patch (reported below) that detects the actual number of buses by parsing /proc/bus/pci/devices. The results were not amazing, because the time saved is really small. Right. The gain is very, very small, and it comes at the cost of an additional dependency on the presence and exact format of /proc/bus/pci/devices. /proc/bus was not introduced until the 2.2 kernels, so your change would prevent XFree86 from running on 2.0.x kernels. I don't know whether there are other issues with 2.0.x kernels or not, but since the cost of a full PCI bus search is so small, it seems counterproductive to make this change. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: XVideo seems to work on NeoMagic NM2097
Karl Oberjohn wrote: ... The only problem I had was the cursor turned into a scattered mess of dots instead of a nice arrow. I noticed in XFree86.0.log that the neomagic driver detected 2 MB of video memory when in fact I only have 1152 kB. So I added one more line to my XF86Config: VideoRam 1152 And that fixed the cursor. All other functions seem to work fine. I am running 800x600 @ 16 bpp. The other warning I received in the log file was: Can not reserve 829440 bytes for overlay. Resize to 218624 bytes. But I'm still able to play full-screen videos. Is there any reason the neomagic driver shouldn't activate the video overlay on a NM2097 chipset? (Maybe it would work on even older chipsets?) It would sure be a nice feature addition for the upcoming version 4.4... The message is quite correct: at 800x600 16 bpp, there is only 200k bytes of unused video memory. That's enough for a 320x240 YUV overlay, but nothing bigger. If you're seeing full-screen MPEG videos, then they are probably being drawn without the use of the overlay. The 2090 could be made to work, but the 2070 only has 900k of video RAM. It won't even do 800x600 16bpp. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: does XFree86 need kernel framebuffer support?
Sergey Babkin wrote: Tim Roberts wrote: Andrew C Aitchison wrote: There are two cases where XFree86 does need kernel support. * Chipsets like the i810/i815/i835/... family have no framebuffer memory but use main system memory for the framebuffer. This requires agpgart support from the kernel. This doesn't actually REQUIRE agpgart support from the kernel unless you're doing bus mastering. The ProSavages are UMA chips, with their frame buffers in main memory, and as long as the BIOS has done the proper division of memory at boot time, that's all it needs. Many of the cards in the i810 family have only a very limited amount of video memory on the card. So if you want to get anything over 800x600 on them, you need agpgart. On some of them the i810 driver won't start even in 800x600 mode, so VESA is the only option. I was about to express my utter confusion at these comments, since in fact the i810 family (just like the ProSavages and other UMA solutions) has exactly zero megabytes on-chip video memory, but after doing some reading, I think I understand now. The issue here, if I understand it, is that the BIOS on i810 systems is utterly brain-dead. It will not allocate more than 1 MB of system RAM to the i810. Thus, if you want more than 1024x768x8 or 800x600x16, you do, in fact, need agpgart support to remap the addresses. This is NOT the case for ProSavage chips, nor for any of the other UMA chips I've encountered (like SiS). In those cases, the BIOS carves up the system memory, and is able to allocate 8MB or 16MB or more to the graphics chip. In that case, agpgart is not necessary. So, I guess I learned something today. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: does XFree86 need kernel framebuffer support?
Andrew C Aitchison wrote: XFree86 does not in general need kernel framebuffer support for hardware which is supported by an XFree86 driver, as it has its own framebuffer interface. There are two cases where XFree86 does need kernel support. * Chipsets like the i810/i815/i835/... family have no framebuffer memory but use main system memory for the framebuffer. This requires agpgart support from the kernel. This doesn't actually REQUIRE agpgart support from the kernel unless you're doing bus mastering. The ProSavages are UMA chips, with their frame buffers in main memory, and as long as the BIOS has done the proper division of memory at boot time, that's all it needs. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: Manufacturers who fully disclosed specifications for agp cards?
Brad Hards wrote: Is it possible to insert a shim in the Windows video call chain? We have something like that for USB (http://sourceforge.net/projects/usbsnoop/) and it works pretty well. Do you mean between a Windows display driver and the PCI bus, so that you can snaggle all the I/O port and memory writes? No, that is not possible, at least not in the NT-based systems. Windows display drivers run in kernel mode, where I/O trapping is not possible. Display drivers are supposed to use a set of macros to access memory-mapped I/O space, so in theory you could recompile the driver with those macros redefined (if you have source), but many drivers violate that rule and use ordinary pointer access. Alternatively, are there tools (even for pay) that can monitor certain addresses/IO ports under Windows? My needs are not so extravagant (I just want to be able turn on dual head mirror on my i830 based laptop, without rebooting), and that would likely be enough to get the missing info. In Win9X, this is possible, because display drivers are user-mode. You could install a VXD to trap I/O ports and handle page faults for the memory-mapped space, but it would be a huge pain in the butt. It is not possible in NT/2K/XP. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: Manufacturers who fully disclosed specifications for agp cards?
Knut J Bjuland wrote: Is it possible to gain the specs for a chip, e.g. an R300 or NV30 chip, by dissection with the right tools, like an electron microscope? Absolutely not. That is comparable to figuring out how Windows 2000 works given nothing more than a printed binary dump (and I mean nothing but 011010111010110111) of a memory image. You could probably figure out the number of bits in the datapath between the graphics engine and the onboard RAM that way, but there is no way you could deduce that the lower 4 bits of the register at offset 000123C4 control the RAM clock slew. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: Manufacturers who fully disclosed specifications for agp cards?
Ryan Underwood wrote: I agree. To illuminate that point, a friend of mine applied for a NDA with Xerox to develop a Ghostscript filter for some of their inkjet printers. Upon receiving the NDA document, he realized that it specifically barred him from releasing any source code that he would develop based on their documentation. Xerox would not negotiate, so he gave up on it and bought a new printer from a different company. It is possible that NDAs that do not explicitly outline terms under which you can release source code could be even more dangerous than NDAs that come right out and say you can't publish source. The latter stops you from going any further right at the beginning, but the former could waste a lot of time and money and ruin your day if the company got the wrong attitude down the road. When S3 contracted with me to do the Savage driver way back when (3.3.4!), I put explicit language in the proposal stating that the resulting driver would be open source, and would be released to the XFree86 team on a periodic basis. I mentioned it several times during the negotiation process, just to make sure everyone understood what I was saying. It raises an interesting question, since you can actually glean more information from the source than you can from the confidential chip specs (which are extremely terse), but they had no problem with it. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: CVS XFree (savage driver) xsuite failures
David Dawes wrote: On Tue, Jan 20, 2004 at 10:32:12PM +0100, Nicolas Joly wrote: On Tue, Jan 20, 2004 at 10:28:16AM -0800, Mark Vojkovich wrote: On Tue, 20 Jan 2004, Nicolas Joly wrote: If your lines are correct, you should be able to run: http://www.xfree86.org/~mvojkovi/linetest.c without artifacts. The lines seem wrong, as I do see artifacts when running the program with zero-width lines (it works fine for w > 0). If you add to the Section "Device" of the XF86Config file: Option "XaaNoSolidTwoPointLine" that will force XAA to use only the driver's Bresenham line function. Does that change the behavior? Yes! I no longer see the problem with the linetest program. Will check (in a day or two) with the testsuite, and report. So it looks like the attached patch should be committed? Yes, I would concur with this. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc.
Re: CVS XFree (savage driver) xsuite failures
Mark Vojkovich wrote: On Mon, 19 Jan 2004, David Dawes wrote: On Mon, Jan 19, 2004 at 10:11:53PM +0100, Nicolas Joly wrote: Tests for XDrawLine Test 52: FAIL Tests for XDrawLines Test 57: FAIL Tests for XDrawSegments Test 53: FAIL Those three are real. Does the Savage hardware really have a Bresenham line interface? If so, why is it providing the TwoPointLine interface? Probably inexperience on the part of the author. When I was working on the driver, it wasn't obvious to me when one would be preferable to the other, so I added both. The TwoPointLine routine just converts to Bresenham. If XAA does that for me, then it's silly to support both. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
xprt 902 eats up all cpu cycles
Hi, I am currently testing the 4.3.99.902 release on a Radeon 9200 (Debian Sid). It is much more stable than the 901 release for me, and I have had only one lockup so far (and I wasn't able to trace the error back to XFree86, since Linux 2.6.0 also had some problems). It was a hard lockup, so I couldn't do anything to find the error. Is there any way to debug a running process without having started it in gdb? Then I could get some more useful information about what goes wrong when xprt starts sucking up all CPU again. In other news, I can't start OpenGL apps anymore. The screen goes black, and only stopping the application brings the screen back (which is kind of difficult with a black screen...). My monitor, at 1600x1200, is connected via DVI-D to the gfx card. But I think this problem is known but still not solved currently? Tim ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: Driver for CT69030 for rendering YUV data.
On Thu, 8 Jan 2004 09:37:42 +0530, Karthikeyan Somanathan [EMAIL PROTECTED] wrote: Hi, I'm writing a driver for the CT69030 VGA controller to render YUV data. I'm not sure of the format in which YUV data should be written onto the framebuffer. And what should the bits-per-pixel setting be? Can anybody help me out on this? Only you can answer that question, by looking it up in the 69030 data book. Whatever overlay formats it supports, that's what you'll advertise. The most common are YUY2 and UYVY, both of which are 4:2:2 formats and have 12 bits per pixel. YUV is supported as an overlay, not as a frame buffer format. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: Driver for CT69030 for rendering YUV data.
On Thu, 8 Jan 2004 09:46:34 -0600, Billy Biggs [EMAIL PROTECTED] wrote: Tim Roberts ([EMAIL PROTECTED]): The most common are YUY2 and UYVY, both of which are 4:2:2 formats and have 12 bits per pixel. You mean 16 bits per pixel ;-) Doh, of course I do. Thanks. I spend too much time at Intel, where their favorite format is a planar YUV 4:2:0 format called I420, which DOES have 12 bits per pixel. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
[XFree86] Monitor
Hi, I am installing Red Hat 9 onto a laptop, but once installed it comes up saying no screens found. I have selected the laptop panel, and the resolution is correct. Could you please help? Regards, Tim Rose, Annual Solutions Ltd
RE: [XFree86] Re: can't get direct 3D with Radeon 9200
On Thu, 04 Dec 2003 21:02:44 +0100 (CET), [EMAIL PROTECTED] wrote: You found that quickly. I don't know what the chipset is. How can I figure that out? I am researching the kernel. Their ads say it has a VIA chipset. So I tried just the via-agp module and got the same problem. I looked again, and the motherboard should have a VIA ProSavage KN400 chipset. I can't imagine what the marketeers at VIA are smoking. The graphics engine they are now labelling ProSavage KN400 is NOT, in fact, a Savage. It is a CastleRock. There is a CastleRock driver available from VIA, but it is completely separate from their Savage driver. This is going to cause no end of grief. I've always said that the XFree86 Savage driver supports every graphics chip with Savage in the name. Now, that is no longer true. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: patch for a crash in x86 emulator
On Thu, 04 Dec 2003 20:35:02 -0500, Sergey Babkin wrote: I've been running XFree86 4.3.0.1 on a machine with a particularly weird video card, and I've got the VESA driver crashing. ... A little investigation has shown that it crashes in x86emuOp_mov_word_SR_RM() ... This happens because destreg is 0, because it's returned that way from decode_rm_seg_register(). rh is 4, and that's a value that decode_rm_seg_register() in decode.c (also linked from extras) does not understand. I've looked it up in the manual, and actually the value 4 is for FS and the value 5 is for GS. So, the conclusion here is that the Intel 815 BIOS uses FS and GS? That is both surprising and disturbing. The BIOS runs in real/v86 mode, and is not generally allowed to make any assumptions about the addressing in the machine. XFree86 doesn't map in any low-memory sections other than the segments at 0, A, B, and C. Given that, it is hard to imagine a scenario where DS and ES are not completely sufficient to do the job. Do you know what call is being made at the time of the crash? -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: Test report of cvs head 1. nov
Hi, thanks for your replies. 1. Sometimes all signals from the gfx card go missing. The screen turns black. One time it was possible to revive the output by pressing ctrl-alt-f1 and then ctrl-alt-f7, i.e. switching to the console and back. I haven't figured out yet if this is a whole lockup or just the X server. That would be interesting; can you log into the machine over the network? Yesterday I compiled the snapshot of 20 Nov, and while running it and writing a much longer answer... it happened again :( and no, I cannot log in over the network. I have the feeling that this is a kernel (or more precisely: sound driver/preemption) related bug and not XFree86. But that is just a guess, since it is hard to get any information from a hard-locked machine. Another user has a similar problem: his DVI port turns off when he runs an OpenGL app. Closing the app brings the display back, as I recall. I haven't been able to reproduce it on my 9200. Hui is also looking into it. Do you have display problems with DRI disabled? With the older 4.3 I had such problems; that's why I switched to CVS. Most of the display problems seem to come from the DVI-D output of the Gigabyte cards. My monitor is running 1600x1200 on DVI-D, and there are a lot of pixel errors in the image, and the monitor loses sync now and then. That's why I switched to a PCI card (Club3D 9200), which has a really crappy analog output, but the digital output seems to be OK! (The Gigabyte card shows the same problems under another widespread OS, so it seems to be hardware-related.) The mail would have been a little longer... but typing everything twice is too bothersome. Tim ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: [XFree86] Multihead System with 2 Radeon 9100s
Yup, that took care of the problem. On Wednesday 03 December 2003 10:29 am, Alex Deucher wrote: You might want to try with the radeon driver from cvs. some fixes for problems with multiple cards went in a while back. Alex ___ XFree86 mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/xfree86
Re: setpriority() call on server fork
On Mon, 1 Dec 2003 21:44:28 +1100, Andrew Bevitt wrote: I'm looking into why the X server starts with niceness -1 when started as root. I've tracked the occurrence down to these lines in xinit.c:

#ifdef PRIO_PROCESS
setpriority( PRIO_PROCESS, serverpid, -1 );
#endif

PRIO_PROCESS comes from a kernel header and is defined unconditionally, so the setpriority() call is made whenever the X server is initialised. If this is done by a normal user, niceness will become 0, as a normal user cannot set niceness below 0. But as root, well, as you can see: -1... The X server always runs as root, even when launched by a user. It's required for I/O access. What I can't figure out is why this is done... ? GUI responsiveness is critically important. Nothing makes a system feel sluggish more than poor mouse response, even if everything else is blazingly fast. I think you know this, but just in case: -1 is a HIGHER priority than 0. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
[XFree86] Multihead System with 2 Radeon 9100s
Hello, I've been having trouble getting the stock XFree86 radeon drivers to work with multiple cards. I'm using 2 Radeon 9100 PCI video cards, one monitor each for an Xinerama setup, or at least I'm trying to. The driver works fine as long as X is only dealing with one card, but as soon as I try to get it to use two, it aborts. As you can see from the log, I actually have 4 of these cards (all PCI) that I want to use ultimately in a 4-head display. The config is set up for two right now. I suspect that it's probably a build option I'm missing. The other two cards, incidentally, are also on bus 7, devices 6 and 7. I can use any of the four cards as long as references to the other three are commented out. Thanks, Tim Mathews

Other info that may be of interest: I compiled with gcc-2.95.3, binutils-2.14 and glibc-2.3.2. I've also tried a clean build with gcc-3.3.2 (the kernel modules were still built with 2.95.3 though).

The relevant sections from my XF86Config:

Section "Device"
    Identifier "Card0"
    Driver "radeon"
    VendorName "ATI Technologies Inc"
    BoardName "Radeon R200 QM [Radeon 9100]"
    BusID "PCI:3:1:0"
EndSection

Section "Device"
    Identifier "Card1"
    Driver "radeon"
    VendorName "ATI Technologies Inc"
    BoardName "Radeon R200 QM [Radeon 9100]"
    BusID "PCI:7:5:0"
EndSection

The output in /var/log/XFree86.0.log: XFree86 Version 4.3.0 Release Date: 27 February 2003 X Protocol Version 11, Revision 0, Release 6.6 Build Operating System: Linux 2.4.23 i686 [ELF] Build Date: 01 December 2003 Before reporting problems, check http://www.XFree86.Org/ to make sure that you have the latest version. Module Loader present Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: /var/log/XFree86.0.log, Time: Mon Dec 1 15:32:08 2003 (++) Using config file: /root/XF86Config.new (==) ServerLayout XFree86 Configured (**) |--Screen Screen0 (0) (**) | |--Monitor Monitor0 (**) | |--Device Card0 (**) |--Screen Screen1 (1) (**) | |--Monitor Monitor1 (**) | |--Device Card1 (**) |--Input Device Mouse0 (**) |--Input Device Keyboard0 (==) Keyboard: CustomKeycode disabled (**) FontPath set to /usr/X11R6/lib/X11/fonts/misc/,/usr/X11R6/lib/X11/fonts/Speedo/,/usr/X11R6/lib/X11/fonts/Type1/,/usr/X11R6/lib/X11/fonts/CID/,/usr/X11R6/lib/X11/fonts/75dpi/,/usr/X11R6/lib/X11/fonts/100dpi/ (**) RgbPath set to /usr/X11R6/lib/X11/rgb (**) ModulePath set to /usr/X11R6/lib/modules (--) using VT number 7 (WW) Open APM failed (/dev/apm_bios) (No such file or directory) (II) Module ABI versions: XFree86 ANSI C Emulation: 0.2 XFree86 Video Driver: 0.6 XFree86 XInput driver : 0.4 XFree86 Server Extension : 0.2 XFree86 Font Renderer : 0.4 (II) Loader running on linux (II) LoadModule: bitmap (II) Loading /usr/X11R6/lib/modules/fonts/libbitmap.a (II) Module bitmap: vendor=The XFree86 Project compiled for 4.3.0, module version = 1.0.0 Module class: XFree86 Font Renderer ABI class: XFree86 Font Renderer, version 0.4 (II) Loading font Bitmap (II) LoadModule: pcidata (II) Loading /usr/X11R6/lib/modules/libpcidata.a (II) Module pcidata: vendor=The XFree86 Project compiled for 4.3.0, module version = 1.0.0 ABI class: XFree86 Video Driver, version 0.6 (II) PCI: Probing config type using method 1 (II) PCI: Config type is 1 (II) PCI: stages = 0x03, oldVal1 = 0x, mode1Res1 = 0x8000 (II) PCI: PCI scan (all values are in hex) (II) PCI: 00:00:0: chip 8086,2550 card 15d9,4280 rev 03 class 06,00,00 hdr 80 (II) PCI: 00:00:1: chip 8086,2551 card 15d9,4280 rev 03 class ff,00,00 hdr 00 (II) PCI: 00:01:0: chip 8086,2552 card , rev 03 class 06,04,00 hdr 01 (II) PCI: 00:02:0: chip 8086,2553 card , rev 03 class 06,04,00 hdr 01 (II) PCI: 00:1d:0: chip 8086,24c2 card 15d9,4280 rev 02 
class 0c,03,00 hdr 80 (II) PCI: 00:1d:1: chip 8086,24c4 card 15d9,4280 rev 02 class 0c,03,00 hdr 00 (II) PCI: 00:1d:2: chip 8086,24c7 card 15d9,4280 rev 02 class 0c,03,00 hdr 00 (II) PCI: 00:1d:7: chip 8086,24cd card 15d9,4280 rev 02 class 0c,03,20 hdr 00 (II) PCI: 00:1e:0: chip 8086,244e card , rev 82 class 06,04,00 hdr 01 (II) PCI: 00:1f:0: chip 8086,24c0 card , rev 02 class 06,01,00 hdr 80 (II) PCI: 00:1f:1: chip 8086,24cb card 15d9,4280 rev 02 class 01,01,8a hdr 00 (II) PCI: 00:1f:3: chip 8086,24c3 card 15d9,4280 rev 02 class 0c,05,00 hdr 00 (II) PCI: 02:1c:0: chip 8086,1461 card 15d9,4280 rev 04 class 08,00,20 hdr 00 (II) PCI: 02:1d:0: chip 8086,1460 card , rev 04 class 06,04,00 hdr 01 (II) PCI: 02:1e:0: chip 8086,1461 card 15d9,4280 rev 04 class 08,00,20 hdr 00 (II) PCI: 02:1f:0: chip 8086,1460 card , rev 04 class 06,04,00 hdr 01 (II) PCI: 03:01:0: chip 1002,514d
Re: Could the VESA BIOS be of assistance? (ID 20311056 ignore this filter)
On Mon, 24 Nov 2003 21:46:09 +, Raymond Jennings wrote: Could you fudge it so that the VESA driver just sets standard modes and passes custom modes and all other requests to the default driver? That way you wouldn't miss out on any acceleration. I was talking about only using VESA to set the video modes, and use standard drivers for the rest. The Savage driver currently does exactly this. For 98% of the users, it works perfectly well. However, there are some downsides. By using the BIOS, the driver is forced to choose the BIOS mode and refresh rate that most closely matches the user's request. For most folks, that is perfectly acceptable. However, some users want to have COMPLETE control over their video timing, down to the last microsecond of horizontal sync, as was done in older XFree86 versions. Maybe they have a 1080x804 monitor, maybe they want to reduce the margins, maybe they need exactly 77.8 Hz refresh. Whatever the reason, they should be allowed to do exactly that. Further, at least in VBE 2.0, the extensions are not standardized. S3 has a BIOS extension for specifying the preferred refresh rate for a given mode. They apparently changed their extension at one point, because the Savage driver sets the wrong refresh rate on some ProSavage-DDR boards. Because of those issues, I am forced to keep both the BIOS method and the old register-pounding method alive. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
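The "complete control" style of configuration referred to above is the classic hand-built Modeline, where the user specifies the pixel clock and every sync position explicitly rather than letting the BIOS pick a mode. The numbers below are purely illustrative, not a tested timing for any real monitor:

```
Section "Monitor"
    Identifier  "ExactTiming"
    HorizSync   30-110
    VertRefresh 50-160
    # pixel clock (MHz), then hdisp hsyncstart hsyncend htotal,
    # then vdisp vsyncstart vsyncend vtotal
    Modeline "1080x804" 95.0  1080 1128 1240 1400  804 805 808 831
EndSection
```

A driver that honours this must program the CRTC registers directly, which is precisely the register-pounding path the post says has to stay alive alongside the BIOS path.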
RE: XvShmPutImage with XDraw commands
On Thu, 13 Nov 2003 08:51:25 -0500, Steve Thrash wrote:

> Also - is XvMC the right solution if I want to blend two images to an arbitrary percentage transparency at 30 FPS using the graphics hardware?

Unless I have missed a staff meeting somewhere, XvMC is specifically designed to allow hardware acceleration of MPEG motion compensation vectors. It is VERY specific to the MPEG protocol, and is likely of no use in any other context.

The xrender extension can do alpha blending, although driver support is still lacking.

--
- Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
Re: XvShmPutImage with XDraw commands
---Original Message Text---
From: Steve Thrash
Date: Wed, 12 Nov 2003 13:09:01 -0500

> I am using XvShmPutImage to draw video via a YUV overlay into a window. Then I am using XDraw commands to draw "overlays" directly to the window (lines, arcs, text, etc.). When I do this the video image appears correctly, but the overlays do not update properly. Each time the overlays move, the old XDraw data remains along with the new, until I do something to force an exposure on the drawable - such as drag a window over it. Then the window is redrawn with only the new XDraw data drawn over the video image ("new" meaning from XDraw calls made since the last XvShmPutImage call), which is what I wanted. I don't seem to be able to force the X server to re-expose the window via software, or I would find that an acceptable workaround. Is there something I am not understanding? I have seen this same behavior both on a Matrox G550 and an nVidia Quadro 4 card, so I don't believe it has anything to do with the particular graphics card drivers.

I'm not sure why this should be a surprise to you. When you use an overlay, the on-screen window gets solid-filled with a chromakey color. If you draw something over the top of the chromakey fill, those somethings will be visible until either you erase them, by restoring the chromakey, or you force an expose event, which tells Xvideo to fill with the chromakey again.

This is working by design. If the chromakey refilled with every overlay update, it would be impossible to draw anything over a running movie.

--
- Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
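The "re-expose via software" the poster wanted is in fact possible from Xlib: XClearArea with width and height of 0 clears to the window's edges, and passing True for the final parameter asks the server to generate Expose events — which is exactly what triggers the chromakey repaint described above. A minimal sketch, assuming dpy and win are the poster's existing display connection and video window (this is a fragment, not a complete program):

```c
#include <X11/Xlib.h>

/* Force the server to send Expose events for the whole window, so the
   Xv chromakey gets repainted and stale XDraw overlays disappear. */
static void force_expose(Display *dpy, Window win)
{
    XClearArea(dpy, win, 0, 0, 0, 0, True);  /* True => generate Expose */
    XFlush(dpy);                             /* push the request out now */
}
```

Calling this after each batch of XDraw updates would give the same effect as dragging a window across the drawable.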
Re: Nvidia driver relation to XFree
On Mon, 03 Nov 2003 17:51:53 +0100, Gerhard W. Gruber wrote:

> I'm working on a kernel debugger which is similar to SoftICE on Windows. I would like to take advantage of the graphics mode when a user activates the debugger under X, and so I was investigating how to solve this. When I use normal VGA mode it doesn't work on my card when I have X running, while a fellow coder has a similar card, also with the nvidia driver, and it works for him (more or less). Now I wonder what is the relation of the video driver to X.

The video driver is part of XFree86.

> What is happening when, e.g., the user changes to a console? X must save the current state of the VGA card (i.e. resolution, frequency, etc.) and switch to a suitable console mode.

Basically, yes. It isn't usually necessary to save the graphics state; the driver put the card in a graphics state initially, and since it still knows the parameters requested in the XF86Config file, it can put the card back into that state whenever it wishes. However, the driver does have to save the INITIAL state of the graphics card when the driver starts up, so it can restore that before switching back to a console. The console driver does NOT know how to switch back to text mode, so XFree86 must ensure that the card is in its original condition before allowing the VT switch to go on.

> Similarly, when the user switches back to X, this has to be reversed and the state restored. Now I wonder how exactly this is going to happen. Since nvidia doesn't open its code I can't look at it, but there has to be some interface so that X can do this stuff without knowing the details of the driver.

The driver is part of XFree86. Each driver has functions called <driver>EnterVT and <driver>LeaveVT (where <driver> depends on the driver name) that implement the switch to and from a console (VT = virtual terminal). Go look through some of the drivers and you can see how it is done.

> Can I use this interface in kernel modules as well? I think it should be possible.

Absolutely not.
XFree86 drivers are just user-mode shared object files loaded by the XFree86 executable. They use a custom API and call a bunch of user-mode functions. They will not work in kernel mode.

> Another thing is that I would like to take advantage of the current display and draw my window directly into the framebuffer. Is this possible?

Haven't we had this conversation before?

> I know that I can install the framebuffer support for the kernel and then I would have an interface to do this, but I wonder how X is doing that.

The XFree86 driver has INTIMATE knowledge of the specific graphics card. Typically, the driver goes out and reads the PCI configuration registers (or allows XFree86 to do it) to get the physical addresses assigned to the board. Because the driver knows the card, it knows which of the addresses is the frame buffer and which has the memory-mapped registers. It maps that space into user-mode address space, and starts writing.

> Can I get the pointer to the framebuffer and use it, or is there some way to do this via an interface through the driver? It is not exactly clear to me how X and nvidia.o are related to each other.

nvidia.o is part of XFree86. It is a dynamically-loaded library that implements the same interface as all other XFree86 video drivers, and it calls back in to the XFree86 executable and its other drivers for most of its services.

> Of course the same holds true for other cards, so the best solution for my purpose would be to find some way that is more or less independent of the card, but I guess this is too much to ask for. :)

DGA is one way for you to take ownership of the frame buffer, but like all of XFree86, it is a user-mode service.

--
- Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
Re: Nvidia driver relation to XFree
On Mon, 03 Nov 2003 19:53:56 +0100, Gerhard W. Gruber wrote:

>> On Mon, 03 Nov 2003 10:08:37 -0800, Tim Roberts [EMAIL PROTECTED] wrote:
>> The video driver is part of XFree86.
> I don't think that this is necessarily true, or is it?

Yes, it is.

> I don't know how it is for other cards, but in the case of NVidia you have this kernel module nvidia.o which you need to load, and in the Device section I specify as driver nvidia.

nvidia.o is not a kernel module. It is just a dynamically loaded object file that gets loaded by the XFree86 dynamic loader and called entirely in user mode. It could have been done as an ordinary .so DLL, but the design objective was to have these work regardless of operating system. That's why there is a loader built in to XFree86.

> Of course it could be that this is just a name which coincides with the kernel module name. I assumed that it is the name of the module that is the driver, but I realize that this is not necessarily so.

Some drivers DO have kernel modules, to handle the DMA transfers that are necessary for adequate 3D operation. However, kernel drivers are loaded with insmod. If you specify a driver name in the Device section of an XF86Config file, it is NOT a kernel driver. It is a user-mode library.

> Do you mean that this driver also has some part within X itself?

ALL XFree86 graphics drivers are part of XFree86. The interface they use is specified by and used by the XFree86 executable ONLY. Have you never looked at the XFree86 source code? You need to do so. Really. Much of this would be cleared up.

> It would make sense, I guess. But then where is this particular module? Is this also some closed source stuff from nVidia? At least I only had to download the kernel module and nothing else, because Suse is not allowed to bundle it with their distribution. I suppose that this would be true for a module within X as well.

If the driver lives in /usr/X11R6/lib/modules/drivers and is named in XF86Config, it is NOT a kernel module.
It is a dynamic library that is an integral part of XFree86. If you have a kernel module that you load with insmod, there still needs to be an XFree86 board-specific driver that can talk to that kernel driver.

>> Basically, yes. It isn't usually necessary to save the graphics state; the driver put the card in a graphics state initially, and since it still knows the parameters requested in the XF86Config file, it can put the card back into that state whenever it wishes.
> Well, at least it needs to know which mode it was, because I can configure several modes, and the switch should restore the one that has been active when I switched console.

The driver put it into that mode originally. It has a data structure that tells it exactly what timing parameters it set. All it has to do is do that again.

>> The driver is part of XFree86. Each driver has functions called <driver>EnterVT and <driver>LeaveVT (where <driver> depends on the driver name) that implement the switch to and from a console (VT = virtual terminal). Go look through some of the drivers and you can see how it is done.
> Is this the name of the driver mentioned in the XF86Config Device section? In my case this would then be called nvidiaLeaveVT?

Yes, the name of the driver file is in the XF86Config Device section. If you say:

    Driver "nvidia"

then XFree86 will load /usr/X11R6/lib/modules/drivers/nvidia_drv.o. The name of the EnterVT entry point is up to the driver, but it will usually be based on the driver name, just like you said.

> But the call of the EnterVT/LeaveVT has to end up in the kernel module somewhere, so I guess it should be possible to trace that and see what is called.

NO, NO, NO! EnterVT/LeaveVT do NOT end up in a kernel module! The user-mode driver that is part of XFree86 does ALL of the register manipulation needed to change the video mode in and out of graphics. It's ALL done in the user-mode driver.
For those drivers that DO have kernel components, the kernel sections are doing little more than DMA memory management, which cannot be done in user mode. Register I/O and mode switching is STILL in user mode.

>> ... Because the driver knows the card, it knows which of the addresses is the frame buffer and which has the memory-mapped registers. It maps that space into user-mode address space, and starts writing.
> And where can I find that code which interacts with the driver? I think these EnterVT/LeaveVT functions are only a small part of this. Is most of that card-dependent stuff in there as well?

It doesn't INTERACT with the driver. It IS the driver. Every driver in the XFree86 source code (which you really need to read) includes EnterVT and LeaveVT entry points that do whatever needs to be done to switch the board into and out of graphics mode. For many of the drivers, EnterVT and LeaveVT look the same; they just call into other functions within that driver.

>> DGA is one way for you to take ownership of the frame buffer, but like all of XFree86, it is a user-mode service.
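To make the shape of those entry points concrete, here is a compilable sketch of the EnterVT/LeaveVT pattern. The (int scrnIndex, int flags) signature matches the XFree86 4.x driver interface, but everything else — the Demo prefix, the stand-in state variables, and the helper names — is invented for illustration; a real driver uses the xf86 headers and pokes actual hardware registers:

```c
#include <assert.h>

typedef int Bool;          /* stand-in for the real xf86 type */
#define TRUE 1

/* Toy "hardware" state; a real driver tracks saved CRTC/DAC registers. */
static int in_graphics_mode = 0;

/* Hypothetical helpers; a real driver programs the chip here. */
static void SetModeFromConfig(void)   { in_graphics_mode = 1; }
static void RestoreInitialState(void) { in_graphics_mode = 0; }

/* Called when the user switches from a text console back to X. */
static Bool DemoEnterVT(int scrnIndex, int flags)
{
    (void)scrnIndex; (void)flags;
    SetModeFromConfig();     /* re-apply the mode computed at startup */
    return TRUE;
}

/* Called before XFree86 permits a switch away to a text console. */
static void DemoLeaveVT(int scrnIndex, int flags)
{
    (void)scrnIndex; (void)flags;
    RestoreInitialState();   /* card back to its original (text) state */
}
```

Both functions run entirely in user mode, exactly as described above; the only kernel involvement in a real driver would be DMA bookkeeping, not this mode switching.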
[XFree86] triple head config problems (cyberblade,sis + tseng)
Hi

For a couple of years I've been running a triple-headed X setup on my workstation, and X has been completely stable. Unfortunately the workstation itself recently died, and I've replaced it with a new machine, an AMD Athlon on an MSI 6378L motherboard. The board has an onboard Trident CyberBlade of some sort (identified by X as a Trident CyberBlade generic). My other two cards are pretty old PCI cards, a Tseng ET6000 2Mb RAM and an SiS 6326 8Mb RAM. The SiS has some problems and needs to be run with a few features disabled.

Anyway, I thought I had it all set up properly, but I'm getting really weird intermittent crashes, where my X session just goes 'bang' and I get dropped out to the console. There are no meaningful messages in /var/log/messages or /var/log/XFree86.0.log, but it's starting to get out of control. It's impossible to reproduce the crash; it seems to basically happen randomly around once every hour or two.

The remainder is some details on the setup, and more are available if you need em.

OS: Red Hat 8
X: XFree86-4.2.0-72
kernel: 2.4.20-20.8 athlon

Cards are:
pci bus 0x cardnum 0x09 function 0x00: vendor 0x1039 device 0x6326 SiS 6326
pci bus 0x cardnum 0x0a function 0x00: vendor 0x100c device 0x3208 Tseng Labs ET6000/6100
pci bus 0x0001 cardnum 0x00 function 0x00: vendor 0x1023 device 0x8500 Trident CyberBlade/i1

cat /proc/mtrr
reg00: base=0x ( 0MB), size=1024MB: write-back, count=1
reg01: base=0x3f80 (1016MB), size= 8MB: uncachable, count=1
reg02: base=0xd580 (3416MB), size= 8MB: write-combining, count=1
reg03: base=0xd900 (3472MB), size= 8MB: write-combining, count=1
reg05: base=0xd000 (3328MB), size= 64MB: write-combining, count=1

If anyone out there has some suggestions of things to try, please advise.

Cheers
Tim

___
XFree86 mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xfree86
Re: turnning lcd on side for a display
On Tue, 28 Oct 2003 15:07:17 -0500, [EMAIL PROTECTED] wrote:

> I am trying to make X display as a landscape so I can turn a LCD monitor on its side. Is this possible, and if so could you point me to documentation of how to do so?

Landscape is the normal orientation (wide). What you want is portrait (tall). With many drivers, you can use Option "Rotate" "CW" in the XF86Config file. Note, however, that this often involves turning off the hardware acceleration, since most graphics chips do not support hardware rotation like that.

--
- Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
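In XF86Config terms, the option goes in the Device section. A minimal sketch — the Identifier and Driver values are placeholders, and you should check your driver's man page, since not every driver implements Rotate:

```
Section "Device"
    Identifier "Card0"          # placeholder name
    Driver     "savage"         # example; must be a driver that supports rotation
    Option     "Rotate" "CW"    # clockwise 90 degrees; many drivers also accept "CCW"
EndSection
```

With this in place, the desktop is rendered rotated, typically by falling back to an unaccelerated shadow framebuffer path as noted above.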
Re: C+T 69030 driver on powerpc
On Thu, 23 Oct 2003 17:16:19 +0100, Rob Taylor wrote:

> Has anyone successfully run the chips 69030 driver on powerpc based systems? I'm trying to get it running on a custom 7410-based board with two 69030's on, and I'm seeing some odd things: ... Also, does anyone know the reason for the addition of 0x80 to the base address in the big-endian case?

Many graphics chips include two separate views of the frame buffer: one that swaps bytes, one that does not. This makes it easy to handle endian mismatches. I'm guessing that's the case here.

--
- Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
[XFree86] Setting up a virtual display larger than video memory?
I'm running Debian Linux on an old machine with a video card with one meg on it. I'd like to keep running it in 16-bit color, but the best I can do with that limited memory is 800x600. If I try to add Virtual 1024 768 (or higher dimensions) it gripes that:

(--) SVGA: chipset: tgui9680
(--) SVGA: videoram: 1024k
(**) SVGA: using 16 bpp, Depth 16; Color weight 565
SVGA: too little memory for resolution 1024 768

Is there some way to make use of the more copious system memory to hold a virtual screen, and just make the 640x480x16bpp or 800x600x16bpp a virtual window into this? (My monitor's refresh rate's not so hot when using the 8x6x16, flickering worse than a silent film, so 640x480x16bpp would be easier on the eyes.) I'm using VESA rather than the Trident driver, FWIW, because of some trouble with the Trident driver. It's running XFree86 version 3.3.6a (though I can get 4 if it would help matters).

Any pointers?

Thanks,
-tim
VIA's Savage Drivers
Some months ago, VIA released an XFree86 Savage driver in source form that included, among other things, a DRI driver and XvMC support. Has that code been integrated into the XFree86 source tree? Will it make XFree86 4.4? Or is it still waiting in limbo for someone to do the integration?

--
- Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
Re: Kernel Module? On second thought...
On Wed, 15 Oct 2003 20:38:44 +, Raymond Jennings wrote:

> Oh well, I hope it was at least worth brainstorming.

Brainstorming is (almost) never a bad idea.

> XFree86 *might* wish to consider a modulette to cover things that userland CAN'T do, like AGP, DMA, IRQ, and so on.

AGP stuff can be done in user mode. Most of the DRI drivers DO include a kernel module for handling DMA and interrupts. The idea of a generic DMA/IRQ handler is somewhat attractive, but the various graphics chips are so very different that doing anything generically is quite difficult.

> Or maybe the modulette could grant I/O privileges on behalf of an X server that opens it (thus the X server doesn't require root privileges)?

You get the same spoofing issue here. If an unprivileged XFree86 server can gain access to the kernel module, then any arbitrary unprivileged application can do so as well. You really need some way to identify the XFree86 server as trusted. In Linux today, the only mechanism for doing that is suid root.

> Does the notion of a kernel module have ANY merit at all? Or was the idea complete garbage?

As we have said, many of the drivers DO have kernel modules for implementing OpenGL acceleration. However, there is a tradeoff. You're getting additional functionality, in exchange for an operating system dependency and the inherent stability risks in moving stuff to the kernel. There is clearly a threshold beyond which the tradeoff makes good sense. My key point is that the threshold needs to be set rather high. It's not that the kernel idea is unconditionally bad. It's just that, for the typical 2D driver, the gain isn't worth the pain.

--
- Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
Re: You suggest an upgrade, eh?
On Wed, 15 Oct 2003 00:19:58 +0300, Alexander Shopov wrote: I'm not quite convinced that that is an objective comparison however. Was Quake 3 running in both operating systems with the exact same 3D settings? Of course not! ;-) I am as skeptical as you are regarding similar tests. However - a demonstration like this helps me when proving that X is not slow. It does no such thing. It demonstrates that OpenGL on Linux is not slow, but to run those applications you essentially shut down X. You've demonstrated nothing about X's performance. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: What about a kernel module?
On Wed, 08 Oct 2003 19:12:56 +, Raymond Jennings wrote:

> I'd like to suggest that you implement device-specific code as a kernel module. Have something like /dev/videocard or /dev/framebuffer, and a kernel module to control it. Cause reads and writes to access video memory, and have IOCTLs for everything else (including sync frequencies, video modes, palettes, resolutions, hardware cursors, accelerator functions, anything besides video memory).

The key problem with this is that kernel modules are Linux-specific, and further often need to be kernel-version specific. XFree86 runs quite well in many non-Linux environments today. Further, for reliability reasons, the goal of minimizing kernel code is one that is healthy in ANY environment. Never go to kernel mode if you don't have to. Remember, each transition to kernel mode costs cycles; if I can do the same operations in user mode, performance will be better.

> Implementing a kernel module might give access to more resources, like tighter console control, asynchronous accelerations,

No, I don't think any of that is true.

> and it would allow unprivileged SVGAlib programs to run because the kernel module would do the dirty work, and a process wouldn't need root privileges to access SVGA.

For security reasons, wouldn't you want to restrict access to the kernel module to root programs anyway? You don't want arbitrary code accessing your video card and changing the mode.

> Also I have Red Hat 7.0 and when I drag a window, it is SLOW. Scrolling with xpdf is also very slow. Could you somehow accelerate window movement and scrolling? I see no difference in blitting from an offscreen pixmap to a window, and blitting the window from the old position to the new one. In fact, the window movement ought to be FASTER because BOTH pixmaps are in video memory.

Depends on your video card. There are certainly acceleration hooks for screen-to-screen blitting, but each driver implements different accelerations.
What video card and driver are you using? -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: Starting XFree86 without an XF86Config file
On Fri, 03 Oct 2003 15:55:47 -0500, Bryan W. Headley wrote:

> It's 3 curves of 256 datapoints. Floating point or integer. What you have to assume is that every point on the curve is grabbable, either through a spline curve widget, or something like
>
>     datapoint [123]^
>     red   [ 45]
>     green [ 23]
>     blue  [ 52]
>
> With the premise being, you scroll to whichever element you want with the datapoint wheel widget; the values for red/green/blue are actually what you'd call red[123], green[123], blue[123] (only because in the example above, we're at the 123rd element).

This discussion needs an infusion of reality. I fully realize there are many graphics cards for which the color curves can be set exactly as you describe, as 3 arrays of 256 elements. The S3 Savages do it that way. However, the UI you describe is just silly. There is NO real-world reason to have a configuration widget that allows gamma setting on a point-by-point basis. For gamma, a single exponent (perhaps one exponent per primary) is the only thing that a UI needs to provide.

Sure, there are specific applications that might need peculiar curves, or even non-curve mappings. Those applications can go talk to the API. The lesson here is that a configuration UI needs to expose the things that need adjusting; it does NOT have to expose every feature of the hardware.

--
- Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
RE: VideoRAM option
On Thu, 2 Oct 2003 11:18:11 -0700, Sottek, Matthew J wrote: The Intel hardware used shared memory architecture and therefore use the VideoRam option as a tunable parameter. Is this right? The XFree86 Intel driver is able to reconfigure the system RAM partitioning on the fly? Color me surprised. I thought the amount of system RAM was fixed at boot time. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. ___ Devel mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/devel
Re: embedded (GPL)Xfree86
On 26 Sep 2003 11:05:02 -, jassi brar wrote:

> Dear all, I am working on porting X11 onto an embedded environment. The regular XFree86 is obviously MUCH MORE than needed.

Why do you think so? If you just take the core server, the drivers you need, and the fonts you need, you get a very compact solution. If all you need is the SERVER, you can throw out all the utilities and most of the libraries.

> I need some trimmed version of XFree86 for embedded linux (mizi), and that under GPL (I can't buy any commercial s/w). Could you please suggest a link where I can find the same? I have extensively googled without much luck :-(

I'm surprised by that statement, because there are a number of excellent choices for this, and there are several websites that have surveys of the available choices. Besides the TinyX/kdrive option that was already mentioned, you should also look at nanoX, which is part of Microwindows.

--
- Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
[XFree86] Mapping a frame-buffer for a larger virtual display into a 1meg card?
Greetings, I've got a Trident 96xx series card and an older Multisync][ monitor which seems to work fine for basic purposes (in either 800x600x8 or 640x480x16). The trouble is that it looks like it's only got 1 meg of video memory on the card, and many apps want at least 800x600x16. I've got plenty of system memory, and am wondering if there's a way to convince XF86 to use a frame buffer for virtual resolution (at, say 1024x768x16) and then map that into the 640x480x16 display area. I've tried playing with the Virtual option (as this has worked on other systems with more video RAM but the same monitor limitations), but when I push it to these resolutions, it gripes about not having enough memory on the card to do it. I've seen something about a ShadowFB option, but I'm neither sure whether it will solve the problem, nor how (if it would solve matters) to go about configuring it to do what I want. Any assistance would be most appreciated! Thanks, -tim ___ XFree86 mailing list [EMAIL PROTECTED] http://XFree86.Org/mailman/listinfo/xfree86
Re: BitBlt for big Endian
On Wed, 10 Sep 2003 09:14:35 +0530, Nitin Mahajan wrote:

> Hi! I have written the following bit blt function for the little-endian byte sex, but the same is not working for the big endian after doing some changes. I am using a CT 69030 card. Can anyone please tell how to convert it so that the same function works for both little and big endian? The commented statements were for little endian.

The commented statements have exactly the same effect as the statements they replace. All you've done is replace shifts with multiplications and divisions. Endianness does not matter in arithmetic.

The tricky part will be in moveDWORDS. If the image in memory has a different endianness from the frame buffer, you will have to swap every DWORD as you write it. You can't just do this:

    *puiDest = *puiSrc;

You have to swap the bytes, something like this:

    tmp = *puiSrc;
    *puiDest = ((tmp & 0x000000ff) << 24) |
               ((tmp & 0x0000ff00) <<  8) |
               ((tmp & 0x00ff0000) >>  8) |
               ((tmp & 0xff000000) >> 24);

Some graphics chips have both a little-endian and a big-endian view of their frame buffers. You might check the CT 69030 docs to see if it does.

--
- Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
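Here is the same byte swap as a self-contained, testable fragment. The names swap32 and moveDWORDSSwapped are illustrative; the original post's moveDWORDS would simply gain the per-DWORD swap on its store:

```c
#include <stddef.h>
#include <stdint.h>

/* Reverse the byte order of one 32-bit value: AA BB CC DD -> DD CC BB AA. */
static uint32_t swap32(uint32_t x)
{
    return ((x & 0x000000ffu) << 24) |
           ((x & 0x0000ff00u) <<  8) |
           ((x & 0x00ff0000u) >>  8) |
           ((x & 0xff000000u) >> 24);
}

/* Copy n DWORDs, reversing the byte order of each one, as needed when
   the source image and the frame buffer disagree on endianness. */
static void moveDWORDSSwapped(uint32_t *dst, const uint32_t *src, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        dst[i] = swap32(src[i]);
}
```

Note that the swap is its own inverse, so the same routine works in both directions (little-endian image to big-endian frame buffer, or vice versa).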
Re: ansifying xwininfo.c
On Tue, 9 Sep 2003 21:57:23 +0200, Matthieu Herrb wrote:

>> Thomas Dickey wrote (in a message from Tuesday 9):
>> On Tue, 9 Sep 2003, Warren Turkal wrote:
>>> -#if NeedFunctionPrototypes
>>> -extern void scale_init(void);
>>> -extern char *nscale(int, int, int, char *);
>> while I'm perfectly aware that extern is redundant, there are two things to be said in favor of keeping it: a) it's easy to grep for; b) some compilers silently ignore conflicts with a static definition of the prototype, but can be persuaded to warn if the extern is explicit. (gcc does this, making it unsuitable as the only compiler to use for testing). ... and a 3rd reason is that 'extern' is not optional for variables.
> Wrong. Most traditional Unix linkers will allow the same variable to be declared without 'extern' in multiple object files and merge them into only one, but this behaviour is not a feature one should rely on. And in fact, at least the Darwin linker treats this as an error.

If so, then the Darwin linker is defective. ISO Standard C variables do NOT require the extern modifier in order to have external linkage. A variable at file scope without an initializer is automatically extern.

file_1.c:

    #include <stdio.h>

    int xxx;

    int main()
    {
        printf("xxx is %d\n", xxx);
        return 0;
    }

file_2.c:

    int xxx = 8;

Those two files comprise an ANSI/ISO compliant C program which produces a well-defined result. A compiler which fails to compile this is not ISO compliant.

--
- Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.