Re: User space background process blocking
On Wed, 22 Jan 2003, Fabio Alemagna wrote: Not at all! AROS does it, AmigaOS does it, exactly that way! Screens have buttons on the top right that let them be put to back/front, and applications NEVER notice it: they continue to run seamlessly, and when their screen is put to front again its content is just like it should be. Now, that demonstrates that it's possible to do it, and that's the reason I got so involved in this discussion: I know it can be done - the OS I code for can do it - so I see no reason why ggi/kgi shouldn't do it. Yeah, but if you think about this carefully, you would see that it doesn't work in the general case. And that is what I / we have been trying to tell you the entire time. I assume your OS ignores the ratio between video memory and main memory, or lacks consoles. In the general case an application must be able to deal with the situation where it can't draw, because there will always be a user with a configuration where background buffers just don't work. Which means that it had better deal with this situation gracefully, or else it will definitely be stopped. Now, since it must deal with this situation gracefully anyway, there is no need to make KGI a memory hog. And yes, most of the dealing must be done by ggi (the kgi target), but the application must be able to choose the behaviour. Question here: why the hell should an application be able to draw on? It is in the background, the user doesn't see anything, and the ability to redraw is needed for other targets anyway. True, sometimes it will cost a few seconds to redraw very complex scenes, but that is IMHO the price the user pays for console switching on slow machines. The user doesn't have to switch, it is a feature. Jos
Re: User space background process blocking
On Wed, 22 Jan 2003, Christoph Egger wrote: uhmm... I guess you mean: The user doesn't _need_ to switch... :-) Amen. English and early mornings don't go together... Jos
Re: User space background process blocking
On Wed, 22 Jan 2003, Fabio Alemagna wrote: On Wed, 22 Jan 2003, Jos Hulzink wrote: Yeah, but if you think about this carefully, you would see that it doesn't work in the general case. Sorry, you're just wrong there: it has worked in the general case for ages, since AmigaOS was born. Yeah yeah. AmigaOS, the greatest OS of them all. AmigaOS was not a general case, it was an OS for a very specific hardware platform, with very strict rules about the hardware. AmigaOS had no backbuffers for 64 graphical consoles. The Amigas had one big advantage: they had a hell of a lot of memory for that era, and a damn good graphics processor. And that is what I / we have been trying to tell you the entire time. I assume your OS ignores the ratio between video memory and main memory, or lacks consoles. In the general case an application must be able to deal with the situation where it can't draw, because there will always be a user with a configuration where background buffers just don't work. Please, just give some examples. Easy: all cases where the user doesn't have a lot of memory. Say I have 64 MB RAM, and we allow the kernel and some software to exist in main memory, so we have 32 MB left. Now I have a good old ViRGE with 4 MB memory, running 1024x768 @ 32 bpp. Nothing special, right? With 8 backbuffers I eat 32 MB, or half my main memory. KGI allows up to 64 graphical consoles, plus 64 text consoles. Which means that it had better deal with this situation gracefully, or else it will definitely be stopped. There is only one case in which the app should be stopped, and I'll explain it in another email; in all other cases the default should be to let the application run, you choose whether with backing store or not. That's what I say: it is allowed to run on, as long as it behaves. And we let nothing but the application itself decide whether it should be stopped or not. Question here: why the hell should an application be able to draw on?
It is in the background, the user doesn't see anything, and the ability to redraw is needed for other targets anyway. True, sometimes it will cost a few seconds to redraw very complex scenes, but that is IMHO the price the user pays for console switching on slow machines. The user doesn't have to switch, it is a feature. What?! The user doesn't have to switch? Are you next going to say that the OS doesn't have to multitask, it's just a feature, and if it wants to multitask then it has to pay the price of slow multitasking? Indeed. If a user wants to run 30 apps at the same time, that's his choice. He shouldn't blame ME for the fact that his computer is losing speed. Oh, and for smooth multitasking you need to switch tasks a few hundred times a second, so milliseconds (and thus optimization) are crucial there. If multitasking cost megabytes per task, it wouldn't be there in your Windows / *BSD / Linux. A normal user doesn't switch consoles every second, and it does cost megabytes to use backbuffers. Besides, you might actually lose a lot of speed when using your feature: your kernel has to start swapping a lot earlier. What annoys me most is that you don't answer my question; you never really have an answer to the arguments against backbuffering in KGI. Or is this your answer: backbuffering is a MUST because the user must be able to switch graphical consoles 20 times a second. Bah... Yeah, bah. A freak who doesn't think beyond AmigaOS and thinks every OS should use backbuffers because AmigaOS did it. So far I have never heard a well-argued reason from you why implementing backbuffers is worth the loss of memory, the loss of CPU power, and the trouble of implementing it. "It is nice" doesn't count for me. But I'm done with you. Arguments against backbuffering in KGI seem not to reach your mind. Jos
Re: User space background process blocking
On Wed, 22 Jan 2003, Rodolphe Ortalo wrote: I only now enter this heated discussion. Apparently, there are several subjects: 1) whether generic backbuffering is desirable or not in general; 2) whether it is possible to implement such a backbuffering mechanism in a safe way on current hardware. As to 1), I guess this is primarily an application point of view. It seems to me that the applications worth a backbuffer are those that do not have an easy means to redraw their screen entirely. These are probably: quickly written small apps, and apps that do a lot of computation to produce the pixels (a raytracer for example). A good raytracer doesn't raytrace directly to video memory, and I'd translate "quickly written small apps" as "buggy garbage". I can imagine that the raytracer requests a non-VRAM area to draw into, but definitely not that we give any app that is switched away a backbuffer. The raytracer draws in the requested area (and maybe, when in foreground, to the graphical console too), can handle the copy-to-screen itself rather easily, and no ugly tricks have to be done to get the stuff mapped away at a console switch. If it gets focus again, the code does a copy to screen and continues like nothing happened. (Maybe a PutPixel should also go to the screen, instead of the buffer only.) This is not the fastest way, but programs too slow to redraw don't care about that. Other applications (GUIs, games) apparently have enough internal information to recompute a display. Furthermore, these applications will probably want to stop doing CPU-expensive drawing while they are not visible. Apart from whether the application wants that, it is the current foreground task that definitely wants that. There is no reason why a game or GUI has to draw while it is not visible. And if it does so, it destroys the performance (and thus framerate) of your current foreground game.
Concerning 2), I'd say that from the KGI perspective, such a backbuffer system should, if possible, be managed by a userspace library. It seems to me that it is much safer if userspace takes care of moving/saving fb-related memory. This is simply because there are usually no MMU-like features (page tables, memory protection) for the graphics board memory, so nearly none of the VM mechanisms used for main memory are applicable. Of course, KGI could provide a systematic save of fb memory into main memory on context switch, but, as many of you have outlined already, given the trend in memory size on graphics boards, this is probably not realistic. Plus, it seems to me that if LibGGI imposes a few restrictions on the application that wants a backbuffer (like a prohibition on using DirectBuffer), it can be done pretty easily, and probably transparently to the application. (I noted that it may not be so easy to replace the functions of one display with those of the memory display, but can't these problems be solved?) Don't forget that when you let GGI do it, GGI must be fully aware of multithreaded applications. I step back as a volunteer to get that fixed. A possible impact of this discussion is that, from the KGI point of view, we should really take care to identify the resources that are preserved transparently across a VT switch and those that are not. (Pointer position comes to my mind for example; in the future, AGP-capable memory, or in-main-memory graphics buffers.) We should do that in the near future anyway; the resource stuff is also suboptimal for dual-head cards. Well, in fact, from the KGI perspective, I'd like to say that I am extremely reluctant to make provision to send all of VRAM to swap on VT switch. In fact, I would be more favorable to using some of the VRAM as a swap area... :-) Which has been done before: I already used 3 MB of my ViRGE memory as swap space back in the kgicon days.
I really hope this day is not to be remembered as the day GGI became BIG BLOATWARE [tm] Jos
Re: User space background process blocking
On Wed, 22 Jan 2003, Rodolphe Ortalo wrote: On Tue, 21 Jan 2003, Jos Hulzink wrote: And, let's face it: for which programs would a background framebuffer be useful and worth the trouble? For programs that can't refresh within #include fuzzylogic.h GGI_SMALL_AMOUNT_OF_TIME. I'd rather say the impatient user. :-) I really like Andreas's idea: let the application take its time to handle a switch, until the user hits the vt switch keys a second time... Smart applications will display Bookkeeping, pleeese, give me 1 second... :-) As I stated before: I don't know which is slower: a program that has to redraw its screen, or a kernel that is constantly swapping because it is out of main memory. Personally I'm more irritated by my performance going down the drain because the kernel found the swap space than by having to wait a second after a console switch. Jos
Re: User space background process blocking
On Mon, 20 Jan 2003, Fabio Alemagna wrote: On Mon, 20 Jan 2003, Jos Hulzink wrote: Why should you back up the whole gfx board's memory? Isn't there any way to back up only the area actually used by the application? You know, Amigas have dealt with full screen graphics and swappable screens perfectly since they were born, even with gfx cards. Good old Amiga... Yes you can, but as you see, memory is really growing too fast. Say I run Unreal Tournament 2003 on both console 1 and 2. I run 1280x1024 true colour with a 32 bit Z buffer, the rest of the memory is filled with textures, and if I had 64 MB instead of 32, it would have filled those too. UT 2003 isn't that small itself, so it takes 64 MB of main memory already. What should I do when I switch console? Out of memory error: you are not allowed to switch console until you quit this program? Swap Unreal to disk? I'd bet Amigas didn't save the framebuffer unless the amount of memory needed was much smaller than the amount of memory available. Besides, software mapped to the background will still consume CPU power needed for the foreground task, and backbuffering only helps applications that use unaccelerated framebuffer access. Other apps will be blocked immediately by the accelerator. Think about the amount of software that will be blocked. Best regards, Jos
Re: User space background process blocking
On Tue, 21 Jan 2003, Christoph Egger wrote: How about using the memory target within libggi's kgi-target when the application runs in the background? This background mode can be done by first waiting until the accel is idle, then copying the framebuffer content into the userspace memory obtained by the memory target. Then the application can continue reading/writing from/to the framebuffer in the background. So an application that is switched to the background is allowed to eat all CPU cycles away from the foreground task while emulating what should be done by the accelerator? I assume that is a feature nobody will like. Jos
User space background process blocking
Hi, KGI used to set a task to sleep when it was mapped away from a display. This didn't always work, and with some help from #kernelnewbies, I came to the conclusion that it isn't even what you want. A task doesn't need to sleep, it only has to stop drawing. I came to a very simple, but (imho) ingenious solution: send the task a signal. When reading the docs, I noticed that POSIX even defines a signal specially made for this, SIGTTOU. This signal is sent to a background process that tries to write to its controlling tty. (*) The default behaviour is that the process is stopped. This looks a little ugly on its controlling text console (you see the same as when pressing ^Z), and with a fg the task starts writing again (without setting the mode back), but it is a start. LibGGI can catch this signal and behave accordingly. It can block all drawing (though if a program requested a DirectBuffer, sleeping really is the only solution). When LibGGI catches this signal, the fg issue is over immediately. I'd like to know whether you like this idea, or whether I have to find another solution. Jos (*) SIGTTOU confirmed available on Linux, (Free,Open,Net)BSD, Solaris, MacOS X; other platforms untested.
Re: User space background process blocking
On Sun, 19 Jan 2003, Christoph Egger wrote: On Sun, 19 Jan 2003, Jos Hulzink wrote: Hi, KGI used to set a task to sleep when it was mapped away from a display. This didn't always work, and with some help from #kernelnewbies, I came to the conclusion that it isn't even what you want. A task doesn't need to sleep, it only has to stop drawing. I came to a very simple, but (imho) ingenious solution: send the task a signal. When reading the docs, I noticed that POSIX even defines a signal specially made for this, SIGTTOU. This signal is sent to a background process that tries to write to its controlling tty. (*) Question: what happens when a background process wants to READ from its controlling tty? IMO, a SIGTTIN signal should be sent to the task then. Correct. The default behaviour is that the process is stopped. This looks a little ugly on its controlling text console (you see the same as when pressing ^Z), and with a fg the task starts writing again (without setting the mode back), but it is a start. LibGGI can catch this signal and behave accordingly. It can block all drawing (though if a program requested a DirectBuffer, sleeping really is the only solution). When LibGGI catches this signal, the fg issue is over immediately. I'd like to know whether you like this idea, or whether I have to find another solution. Well, signal handling is up to the targets. So the kgi target needs to install signal handlers for SIGTTOU and SIGTTIN. The question is what the signal handlers should do then. There must be a way to get informed when the kgi target can continue reading/writing to its console. I think the best way is to wait for a second SIGTTOU/SIGTTIN signal, which informs the kgi target to continue. Wrong. SIGCONT is meant for this. It seems it is already sent to the tasks (since they come up again when they are switched to front). Unchecked though. Jos
Re: Bug in libggi's tele-target
On Sun, 5 Jan 2003, Andreas Beck wrote: Well, I have a general idea, though nothing specific: you were using 32 bit modes, right? In that case, you might have BGRX vs. XRGB in the color schemes. This leads to sending B=ff G=ff R=ff X=00 for white, which gets interpreted as X=ff R=ff G=ff B=00, which is yellow. This problem is common in the entire communication world. The solution is to define the communication channel as X-endian and convert all data that is sent over that channel to X-endian. I use X here because it doesn't matter whether it is big endian, little endian, byte swapped, whatever. All you have to take into account is that you might want the group of machines that can handle the conversion as a no-op to be as big as possible, so using little endian (Intel, Alpha) means no speed loss for most of the target machines. For examples of this, see the networking layer. #include <netinet/in.h> htonl (host to network long), ntohl (...), htons (host to network short), ntohs (...). htonl converts a long in host byte order to network byte order. In any decent networking code you'll see these used a lot. Jos
Re: World-Domination
On Tue, 17 Dec 2002, Christoph Egger wrote: What we need is something others do NOT have. libggigl and XGGI are good examples. See Brian's mail and my next one for more details. What you need is what you have got __finished__. The focus should not be on creating new things, but on finishing what there is. The KGI target is a very important thing for both GGI and KGI, but it doesn't work. (Major issue: the KGI target simply ignores the existence of devfs.) Some work has been done on the GGIGL stuff, but it was never finished. Features are nice, but only when the foundation is rock solid. Jos
Re: libgii082rc2 can't build w/o installed libgii
On Tue, 26 Nov 2002, Martin Albert wrote: Hi, folks! Sorry, I sent a mail yesterday with a broken header; should be fixed. Acknowledged, thanks... Jos
HELP / Leightweight structures/lowlevel tie-ins.
On Sat, 13 Oct 2001, Brian S. Julin wrote: Heya, just throwing this to the gallery for comments. We have three new lowlevel libraries which (along with LibGGIMISC) will be forming the core of the intermediate API for the GGI Project: LibBuf, LibBlt, and LibOvl. Great... Congratulations... What is GGI about? Getting as many different libs as possible? Anyway, I have been talking with some programmers lately, trying to promote GGI / KGI, and had to admit they have good reasons not to use GGI. I'm just passing them on, don't take them personally from me: 1) Lack of 3D support. There are many libs available that can do this at the moment. Indeed they are not as flexible as GGI is, but they are cross-platform and use acceleration. Most programmers just want a normal window (might be fullscreen) to draw to, with acceleration. (See 3.) 2) Drowning in libs. Their complaint was that GGI doesn't look like a solid system, but like a heap of many small stand-alone thingies. Documentation about them doesn't seem up to date. 3) Wrong priorities. To quote one of the guys I spoke to: Nice to see someone has been hacking GGI to run on that Compaq handheld (with my personal compliments). Nice to see GGI running on a cube. Nice to see gdoom running in 16 separate windows, nice to see it running on aalib. Where is the 3D support??? 4) They are working on everything at the same time, instead of trying to get something really finished. This was said to be a reason not to join GGI development: they simply didn't know where to start, because there was a lot of half-finished code, causing a lot of confusion. -- Personally, I think especially the lack of 3D will kill GGI sooner or later. I have been thinking about how to do it, and was thinking we shouldn't reinvent the wheel, but create a LibGGGL (GL compatibility layer), using the OpenGL syntax. I don't have time right now to explain, but please feel free to fire your bullets at me already. Jos
Re: dga still not working for 4.0.0.2
On Wed, 27 Jun 2001, Andreas Beck wrote: Could someone verify whether this is related to the XF86 version or to the graphics driver, as all machines I have access to use Matrox G200/G400? Ha - I found one with an ATI chip. The ATI driver does _not_ exhibit said strange behaviour. I suppose the mga cards are in some weird planar mode or something... As a matter of fact, they are. XFree sets the Matrox cards in a tiled mode, because this improves acceleration. Normally the user shouldn't notice this. I'll check things out. Jos
Re: Sourceforge
On Tue, 13 Mar 2001, Andreas Beck wrote: Umm - I get Error creating user object when trying - could you check your account? I logged in normally this morning. Don't see what goes wrong... (Dunno if SourceForge is case sensitive; I now see my user name is foske instead of Foske. I can log in with Foske, so I guess not.) Jos
Re: Galloc and the temple of Cheng
On Sun, 11 Mar 2001, Brian S. Julin wrote: What's in this release: 1) Docs, lots of them, and you can even read them in HTML at: http://mojo.calyx.net/~bri/projects/GGI/galloc/ Cool... there goes my rest tonight :) 2) An API we felt was stable enough to write docs for. Really. We're not going to have to change it anymore. Honest. :-) Honest as in "yeah, right?" :) 3) Christoph is using the X target to demonstrate what a full blown target will look like -- some of it even works, too! The X target is being used to probe the full extent that a target can go to, so there may be a few things in there that will be moved to helper libraries later. Hmmm... my compiler can't wait :) This isn't a BETA release; there are bugs, and you shouldn't expect to actually use this code for anything quite yet. The BETA release will contain hints on how to use LibGalloc in an extension; that's when the rubber hits the road. Help needed: 1) Christoph has lots of questions for anyone who knows anything and everything about X programming. Reading them, figuring them out... 2) If there are SGML nuts out there who can go through the docs and mark up the terms/filenames/etc. and do cross links to external GGI docs where appropriate, that would be super peachy. 3) Whoever is in charge of, or can volunteer to help fix, CVS, DNS, SourceForge, etc.: we will probably be wanting to commit this soon, and we need CVS at least working and a checklist of what should be filled out at SourceForge. I used to maintain some CVS thingies once... What about DNS? We must get the ggi-project entry back? SourceForge must be set up completely, I guess? Offering my help... 4) How's KGI doing these days? Anyone got it on their TODO list to write the new LibGGI target? If that's you, we definitely want you to check Galloc out and offer at least your comments, if not code. Looking at the CVS, it seems dead. The last update was 8 weeks ago: updating some Makefiles. The last code update was 5 months ago. I hope I am not right. Jos
Re: Galloc and the temple of Cheng
On Mon, 12 Mar 2001, Christoph Egger wrote: Let us know when you want to contribute... :) Just started :) Reading the API and the man-pages doesn't answer the questions I have. For example, it is nowhere mentioned whether... 1. ... X supports hw-sprites 2. ... X supports hw-buffers (Z-buffer, alpha-buffer, etc...) As I said, working on that. Got more books on X here than is good for me... I used to maintain some CVS thingies once... What about DNS? We must get the ggi-project entry back? SourceForge must be set up completely, I guess? Offering my help... Ah... Good! I asked Stefan if he wants to create a module for each extension and import the extensions into CVS there, as no one seems to do that. Maybe he will come up and help you later. I must get back into the project and find out the current extensions, but I'll do the best I can. PS: ggi.sourceforge.net doesn't seem to exist. Is that right? Jos
Re: Again KGI-ViRGE update
On Sun, 16 Jan 2000, Christoph Egger wrote: I agree. I'm subscribed to both lists too, and I don't like it when I get the same post twice... A procmail filter prevents that, but the point is clear now. I only wonder why I didn't get complaints when sending KGICON patches... Jos
Re: GGI/KGI workshop (was: Re: Bye,)
Okay, now something serious. I want to know if there are enough people seriously interested in a meeting (i.e. thinking about coming) for me to organise something. Please let me know if you are interested in coming (maybe with some comments on time, date, place, etc.) so I can consider what to do. [EMAIL PROTECTED] My ideas so far: - Date: Friday April 21st - Monday April 24th (it's a long trip for most people, so it is nice to have some time). This is Easter weekend, so most people will be free then. Longer is no problem for me; maybe we can add some days after. - Place: Eindhoven. I have a location in the center of town where we can drop computers, with access to much beer and other drinks. Local network facilities: 10 / 100 Mbit UTP. Internet: 56K6 modem (still looking for a way to get 100 Mbit there). Railway station with very good connections to Germany (Koln) and rather good connections to Belgium (Liege or Antwerp) available. Airport for charter flights available, good international highway connection. I'm still thinking about where I can let people sleep; it depends on the number of subscribers. There are many hotels within 500 m of the location above, but of course I'll try to arrange something free. - Program: still to find out. Ideas: GGI demo / software demonstration, programming courses, availability to program / discuss programming problems, a good night out in Eindhoven, movies, anything that you want and doesn't remove Eindhoven from the world map :) ---
Re: GGI/KGI workshop (was: Re: Bye,)
On Fri, 14 Jan 2000, Steffen Seeger wrote: Mhmm... one day after my birthday, so why not... The 23rd is mine... So I'm afraid Sunday afternoon I'll not be available, but I still have to figure this out. Don't worry :) I will try to keep this a spare weekend. Steffen
GGI/KGI workshop (was: Re: Bye,)
Rodolphe PS: I realize I owe many beers to many people on this list. A KGI/GGI workshop is starting to be really necessary... Where, when... I propose Eindhoven, the Netherlands, the long Easter weekend (around April 23rd, 2000). (Why Eindhoven? Because I live there, and it is close for the Germans, French and English (sorry for the Americans) :) (If anyone knows a better location...) I can arrange computer facilities, beer and very likely some places to sleep.
ViRGE KGI patch
Hi, Please find attached the source that makes the KGI ViRGE driver compile. Don't expect anything, but to prevent many people doing the same, I release it anyway. Jos
Re: ViRGE KGI patch
On Sun, 9 Jan 2000, Stefan Mars wrote: On Sun, 9 Jan 2000, Andreas Beck wrote: Please find attached the source that makes the KGI ViRGE driver compile. Don't expect anything, but to prevent many people doing the same, I release it anyway. Forgot to attach ? He _did_ say "don't expect anything" after all :-) I always keep my word :) If everything went right, there should be a second mail in your mailbox now. Auf Wiedersehen, Jos
Re: Massive KGI-0.9 update
On Sun, 2 Jan 2000, Jon M. Taylor wrote: B. S3 ViRGE a. Detects and maps card and regions b. Sets/mmap()s VGA textmodes c. All registers #defined and struct{}'ed d. VGA registers read from card e. Accel driver skeleton but no code yet f. No MMIO-only mode yet I'm working on it now, so please contact me if you want to help with the ViRGE. It would be very nice if we could use CVS for the KGI tree. Steffen, Jon, can we please go back into CVS? Current state of my code: clock driver done. Chipset driver contains almost all code, only checking must be done (still figuring out how to determine the mode; this is quite different from the kgicon drivers...). Ramdac driver inserted into the chipset driver due to lack of register access in the ramdac driver; the ramdac driver is an empty (but compiling) skeleton. Is the problem that the KGI card must be the second card solved already? A ViRGE can't be a second card (at least not my DX). Jos
Re: Speaking of bug reports
On Mon, 22 Nov 1999, Peter Amstutz wrote: Just read the bug reports :) It's actually a bug in the Debian packaging system. It doesn't install the appropriate libggi targets by default along with the core libggi, so a program starts up, tries to get SOMETHING to display to, and dies because it can't find anything. Not my fault, not your fault, it's Debian's fault :) I remember an issue with Debian: they included an mp3 player in their distribution without notifying the author, removed the check on little-endianness (which the author had explicitly inserted), distributed the software, and tried to sue the author because the software didn't work in a DEC Alpha distribution. Debian is finished, IMHO. Jos
Re: Speaking of bug reports
On Mon, 22 Nov 1999, Brian S. Julin wrote: This I really doubt. The very concept of Debian trying to sue an author of freeware for non-functionality is absurd on many levels, especially considering the software is distributed "without warranty [...] of suitability for a particular purpose". Please provide some sort of reference. Asking my friend (the author) for the e-mails that I personally only read goes a little too far for me. And yes, absurd was also my idea about it. Besides, I'm not on this mailing list to start a distribution war. All I want to say is that I have also seen Debian doing strange things to software. Discussion closed. If you use Debian, fine with me. Jos
Re: default/fbdev/s3/virge.so
On Tue, 9 Nov 1999, Niklas Höglund wrote: How do I get libggi to use the accelerated functions in the virge kgicon driver? It does already. Jos
Re: gicon and low-level I/O
On 2 Nov 1999, Marcus Sundberg wrote: Basically you never want to cache anything that sits on the other side of the PCI bus. MTRR write combining should always be turned on for memory, but never for registers. That's about it. AGP is of course another story... Eh... Enabling MTRR write combining for the video memory of some chips causes crashes. Yes, it's me again with my outdated ViRGE :) And I'd like to hear your AGP story, because it seems S3 has managed to get a ViRGE listening to an AGP bus... (AGP 0.0001X or something...) Jos
Re: tested KGI 19991017 snapshot
On Tue, 26 Oct 1999 [EMAIL PROTECTED] wrote: For information, I am also working on re-porting my attempts at a Gx00 driver to the new KGI, but am still trying to learn more of the interface. I noticed two excellent summaries on Seeger's webpage that explain most of the theory, but I'm currently knee-deep in header files :) Only knee-deep? How tall are you? With my 1.97 m (don't know how many feet :(, but it must be many) I'm in them up to my nose... Can someone call a doctor before I'm killed by header-file overflow? Jos
Re: tested KGI 19991017 snapshot
On Wed, 20 Oct 1999, Steffen Seeger wrote: When the drivers are operational again. The Permedia2 driver works quite reasonably, at least setting modes and exporting the framebuffers. However, there is still some work to be done so that it can take over a running board safely. The ViRGE driver is coming up too, though I have to redesign the whole chipset driver (it had to be done anyway, but now I'm forced to). Is there a KGI CVS tree? Or can it come back into the main GGI tree? IIRC the KGI stuff in the GGI tree is outdated? Jos
Re: KGIcon (Was KGIcon on linux 2.3)
On Sun, 24 Oct 1999, Marcus Sundberg wrote: Martin Lexa wrote: 2) libggi: Why is that line with virge in display/fbdev/fbdev.conf.in? I think it shouldn't be there. It doesn't do anything at all, so... Well... it used to do something, but now it's obsolete. I'll remove it. Besides, is the ACCEL_GETSUGGEST stuff working again for kgicon? We need it to make ggiMesa aware of the specific chipset (ViRGE in my case). Or am I wrong here? Jos
Re: KGI_COMMANDS, was: Re: Doesn't need vertical retrace!
On Tue, 5 Oct 1999, Andreas Beck wrote: struct kgi_3dtriangle { int x0,y0,z0, x1,y1,z1, x2,y2,z2; }; Comments please! I don't like this kind of 3dtriangle at all; it needs 9 copies of data to draw a triangle. Maybe that's insignificant since you must call an ioctl afterwards anyway, which surely eats up more CPU, but when you implement multiple commands (one call per triangle is very slow), it will be more significant. I would propose this alternative: struct kgi_3dvertex { int x,y,z; }; struct kgi_3dtriangle { kgi_3dvertex *v0, *v1, *v2; }; The passing of pointers is undefined for KGI. It is not possible to transparently pass pointers across protection ring boundaries. Something similar would be possible, though, by allowing a kind of "upload" of a vertex array that is accessed by (numeric) indexes later on. However, I suppose the above call might be enough for the simple "common ioctl" layer. If you want to be really fast, you will need a card-specific communications layer anyway. Which call? :) And... int? Aren't there cards that use float? Good point. I do not know of cards that use float. That should be pretty rare, as floats are very expensive to handle and rarely needed, unless the card has an internal geometry processor. The ViRGE uses some fixed-point 16 bit value, that's all I know... The format of this fixed-point value can be modified, though (if anyone can tell me the use of this...), so you could call it floating point :) However, most cards allow for fixed point, as this is how they work internally. Would 16.16 fixed point be OK? Well... all I can say is that on my ViRGE I'd have to drop the fractional part of the Z value and create a signed fixed-point value of the integer part... Still thinking whether this would have consequences besides loss of resolution. I know... I should buy another video card... :) Jos