Re: Gnuplot GGI driver
> Hi! I'm almost finished with a simple GGI driver for Gnuplot (which
> will make Octave work with GGI, which is what I want in the end).

Cool.

> * How can I get the resolution of the visual?

This one is simple: ggiGetMode().

> * How can I tell if I'm using the X11 device or not?

Umm - usually you can't. Why do you want to know? There is a way of doing this using the wmh-extension; however, it will basically tell you whether you are running under a windowing system (of which only X is supported yet). Just call any wmh function, and it will fail if it cannot complete the request (say, to set a window title), which effectively tells you whether you are running in a windowed environment or not.

CU, ANdy
-- Andreas Beck | Email : [EMAIL PROTECTED]
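For the resolution question, a minimal sketch of what "use ggiGetMode()" means in practice. This is a hedged example, not the actual gnuplot driver code: it assumes the standard LibGGI entry points (`ggiInit`, `ggiOpen`, `ggiSetSimpleMode`, `ggiGetMode`) and the `ggi_mode` fields `visible` and `virt`; check the LibGGI headers and man pages for the exact signatures on your version.

```c
#include <stdio.h>
#include <ggi/ggi.h>

int main(void)
{
    ggi_visual_t vis;
    ggi_mode mode;

    if (ggiInit() != 0)
        return 1;
    vis = ggiOpen(NULL);          /* default target: X, fbdev, ... */
    if (vis == NULL) {
        ggiExit();
        return 1;
    }

    /* Let LibGGI pick any mode, then query what we actually got. */
    if (ggiSetSimpleMode(vis, GGI_AUTO, GGI_AUTO, GGI_AUTO, GT_AUTO) != 0) {
        ggiClose(vis);
        ggiExit();
        return 1;
    }
    ggiGetMode(vis, &mode);

    printf("visible %dx%d, virtual %dx%d\n",
           mode.visible.x, mode.visible.y,
           mode.virt.x, mode.virt.y);

    ggiClose(vis);
    ggiExit();
    return 0;
}
```

The driver would use `mode.visible.x`/`mode.visible.y` as the plot area dimensions.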
Hah! Avoid thread programming!
mutex locking time is... *heh* around 26000 microseconds... Longer than poll(). Go figure. And you can't do interthread communication without -some- form of locking, be it kernel or userspace. (socket I/O locking is handled in the kernel)

Anyway, for future reference: streams are faster than messages, unless you have a -really- speedy lock-acquisition system...

How does this apply to GGI? Easy. Don't do lock acquisition in commands/etc unless you -have- to, regardless of how thread-friendly you want it. Graphics programming isn't particularly multithread-friendly anyway. You don't really want to have multiple command streams going to most video cards, and even worse, mixing rendering with accels can be fatal on some.

Now on the flip side, it'd be very handy to have a libGGIthreads library or something to give one a threaded interface to -all- of GGI.

Sorry, just playing with massively threaded proggies and finding delays :)

G'day, eh? :)
- Teunis

incidentally, mandrake 6.0 + upgraded kernel + compiler
Re: Hah! Avoid thread programming!
teunis wrote:
> mutex locking time is... *heh* around 26000 microseconds... Longer than
> poll() Go figure.

So much? Well, it must be implemented via a system call then... With threads managed by the kernel it is not so astonishing, but you can surely do much better if you only want to lock in-memory data structures... Which library do you use for threads/mutexes, linuxthreads? POSIX?

Oh, by the way, while I'm thinking: how did you do your benchmark? An empty lock/unlock, I guess? Not with two threads on a uniprocessor, I hope (that setup implies scheduling, so... there is your implicit system call). Well, anyway, I'd be interested in seeing your benchmark if it's a simple test program. So much time for locking seems astounding to me. (With Chorus on a 386 in 1994, it was much faster...)

> Now on the flip side, it'd be very handy to have a libGGIthreads library
> or something to give one a threaded interface to -all- of GGI.

Hmmm. With additional delays, I don't know if it would be worth the work (especially as everyone would try to use it for hype and then go back to complaining about the performance)... Nevertheless, your application is very interesting with respect to this issue of real-graphic-world MT programming!

Rodolphe
Re: Windowmanager protocols
> In what sense? I.e. GGI is not a windowing system, so I don't quite see
> what to manage.

GGI should draw the window frames, buttons, icons or the background (root window).

> Or do you mean something like writing an X windowmanager that does its
> rendering of decorations and stuff via LibGGI?

The basic idea is to start from the concepts of known smart/small window managers like blackbox or sapphire. That means it should take care of placing windows within the screen, resizing, closing, iconifying, fullscreen windows and the visibility hierarchy.

> Is it basically like opening additional windows cleverly placed around
> the application window, or drawing to the root window (I doubt that) or
> what?

Placing windows within an application window should be realized by a graphics toolkit, which is an interesting subject of course and maybe should be thought over together with the window manager issue. I have spent too little time with Berlin to know if there is some development in that direction integrated. But you doubt rightly; it should do what a "standard" X window manager does.

> I mean: If we can tweak the X target to render to such stuff, we could
> do way cool stuff like program icons running your favorite GGI program
> or titlebars and menus being interactive.

Yeah, why not. But why should X render it?

> That wouldn't be exactly what GGI is designed for, but looking at the
> popularity of the screamingly colorful windowmanagers like WindowMaker
> or Enlightenment, it could give GGI quite a boost in interest.

That was one intention of mine. Of course X Window programs should run too, as well as programs with graphics libs based on X Window. I guess this is the jumping-off point: if we have an application using e.g. Qt, it will need a running X server. My imaginary window manager would run preferably with kgicon, so we'll get in trouble here. But a way out of it is to drop the X server from the system completely and link Qt, gtk+ or whatever to XGGI.
> > XGGI would need to map the XSetWM*, XMoveWindow, etc. to GGI functions
> > or maybe to an external library.
>
> No, because GGI does not have such functionality. We had considered
> something like it for the resize stuff at least, but AFAIK it is not
> implemented.

Hm, bad.

> However there is libwmh that can be used to "remote-control" a
> windowmanager, if one exists. That is, a LibGGI application can ask the
> windowmanager for placement, size, setting window titles, z-ordering and
> iconifying stuff.

I haven't checked this out yet; I'm still collecting wisdom.

> Moreover, if you want to do what I propose above, that does not require
> any such functionality. It would basically mean taking any existing
> windowmanager and ripping out the widget rendering parts and replacing
> them with calls to ggi applications.

This would require a running X server, which I don't want.

> Might be a bit resource-intensive as it will eventually spawn off lots
> of processes, but most of them should sleep in the normal case anyway ...

This is exactly what I don't want. A major reason why I want to use GGI for this window manager is that X is a resource killer. I want a smart, fast, stable system. All the instabilities in the Linux boxes I'm using come from X; I would therefore really like to get rid of that beast.

gr. matthias
Re: Hah! Avoid thread programming!
On Fri, 28 Jan 2000, Rodolphe Ortalo wrote:
> teunis wrote:
> > mutex locking time is... *heh* around 26000 microseconds... Longer
> > than poll() Go figure.
>
> So much? Well, it must be implemented via a system call then... With
> threads managed by the kernel it is not so astonishing, but you can
> surely do much better if you only want to lock in-memory data
> structures... Which library do you use for threads/mutexes,
> linuxthreads? POSIX?

linuxthreads - i.e. the threading that comes with glibc-2.1.

> Oh, by the way, while I'm thinking: how did you do your benchmark? An
> empty lock/unlock, I guess? Not with two threads on a uniprocessor, I
> hope (that setup implies scheduling, so... there is your implicit
> system call)

gettimeofday(). Yeah, I know, there are better ways. If anyone can tell me - please! I'm guessing at this at every step. All I know now is that with -all- the I/O handling removed from the movie handler except for movie decoding, the thing finally plays fluidly on my computer. Most of the time. I think my code's pretty poor actually for movie playing, but I don't know how to do it better.

> > Now on the flip side, it'd be very handy to have a libGGIthreads
> > library or something to give one a threaded interface to -all- of GGI.
>
> Hmmm. With additional delays, I don't know if it would be worth the work
> (especially as everyone would try to use it for hype and then go back to
> complaining about the performance)... Nevertheless, your application is
> very interesting with respect to this issue of real-graphic-world MT
> programming!

Thanks :) [code at http://www.geocities.com/winterlion... it's the 'ggiplay' proggy :] It plays Linux quicktimes. And mp3's *giggle*. And mpeg-2 multimedia, though the renderer's broken. Gonna write my own soon.

G'day, eh? :)
- Teunis
Re: fastest output
> > what about a memory visual using video memory ?
>
> This can be done by using e.g. the "sub target" on a main visual with a
> larger virtual area

> Which reminds me of a question I wanted to ask on the list here: Are
> there any plans to create a visual-like structure whose only use is for
> caching, a la double buffering or backing graphics?

Yes. This one goes with the blob and sprite extensions we have been talking about a while ago. I even have a demo implementation of an extension that should handle both, but we decided to make them separate entities instead, and never got around to implementing it. Got to pick that one up as well ... I really need those 127-hour days ...

> quite extensively but as I heard, the visual structure itself is too
> complex to be used for that.

Depends. For backing store of whole windows and such, I see no problem. But for a few dozen sprites it's a bit heavy. However, when you stuff the sprites onto a single visual, I'd say that's o.k. as well.

> We really don't want event support or anything in this structure,

Well, it has event support, but only in the sense that the hooks are there ...

CU, ANdy
-- = Andreas Beck | Email : [EMAIL PROTECTED] =
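One way LibGGI already exposes the double-buffering idea discussed above is through multiple frames on one visual: draw into an invisible frame, then flip it to the display. A hedged sketch, assuming the standard LibGGI frame calls (`ggiSetWriteFrame`, `ggiSetDisplayFrame`) behave as on targets that support more than one frame; verify against your LibGGI version:

```c
#include <ggi/ggi.h>

int main(void)
{
    ggi_visual_t vis;
    ggi_color red = { 0xffff, 0x0000, 0x0000, 0x0000 };
    int frame = 0, i;

    if (ggiInit() != 0)
        return 1;
    vis = ggiOpen(NULL);
    if (vis == NULL) {
        ggiExit();
        return 1;
    }

    /* Ask for two frames of the same mode (double buffering). */
    if (ggiSetSimpleMode(vis, GGI_AUTO, GGI_AUTO, 2, GT_AUTO) != 0)
        return 1;

    for (i = 0; i < 100; i++) {
        int back = 1 - frame;
        ggiSetWriteFrame(vis, back);      /* render off-screen */
        ggiSetGCForeground(vis, ggiMapColor(vis, &red));
        ggiFillscreen(vis);
        ggiSetDisplayFrame(vis, back);    /* flip */
        ggiFlush(vis);
        frame = back;
    }

    ggiClose(vis);
    ggiExit();
    return 0;
}
```

A dedicated caching/sprite structure, as asked for above, would be lighter than a full visual; frames only cover the whole-screen backing case.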
Re: Gnuplot GGI driver
> > > * How can I tell if I'm using the X11 device or not?
>
> If on console, I must wait for a key to release the console. If in a
> windowing system, I must not, since the plot will be in another
> window... And waiting for a key is causing problems in Octave.

I see.

> > There is a way of doing this using the wmh-extension; however, it will
> > basically tell you if you are running under a windowing system (of
> > which only X is supported yet).
>
> Is this a GGI thing?

Yes. Look at libwmh in the lib subdir. If you link the application with it, initialize it, and attach the wmh extension to a visual, extra commands become available, like setting window titles on GGI visuals. There is a demo in there to show you how it is done.

> If not, it won't work, because GNUplot is being compiled without any X
> support, so *compiling* will fail...

It is a GGI thing, so no worries. Just like the GGI target will basically make the internal X support of gnuplot redundant. It works by hooking into the dynamic loading mechanism of LibGGI. When it sees the X renderer being loaded, it will load the X-specific override functions for the WMH functions as well, which will magically make them work on X (or other windowed environments), but fail on others - which should be precisely what you want to know.

> I would like to be able to run in X even if at compile time there was no
> X-related stuff...

Sure. Should work.

> Btw, the Octave ggiGetc problem persists... If someone wants the GGI
> driver to test with it, I can send it (it's very simple).

That's not an easy thing to solve ... You might want to try setting GGI_NEWVT.

CU, ANdy
-- = Andreas Beck | Email : [EMAIL PROTECTED] =
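The windowed-environment test described above ("call any wmh function and see if it fails") might look roughly like this. This is a hedged sketch from the description in the thread, not code from the libwmh demo: the exact function names (`ggiWmhInit`, `ggiWmhAttach`, `ggiWmhSetTitle`, `ggiWmhDetach`, `ggiWmhExit`) and their signatures should be checked against the libwmh headers and the demo in the lib subdir.

```c
#include <ggi/ggi.h>
#include <ggi/wmh.h>   /* libwmh extension header (name assumed) */

/* Returns nonzero if `vis` appears to live in a windowed environment.
 * The trick: wmh requests only succeed when a windowing system (so far
 * only X) is underneath; on the console they fail. */
static int in_windowed_env(ggi_visual_t vis)
{
    int windowed;

    if (ggiWmhInit() != 0)
        return 0;
    if (ggiWmhAttach(vis) < 0) {
        ggiWmhExit();
        return 0;
    }

    /* Any wmh request will do; setting a title is harmless. */
    windowed = (ggiWmhSetTitle(vis, "gnuplot") == 0);

    ggiWmhDetach(vis);
    ggiWmhExit();
    return windowed;
}
```

The gnuplot driver could then wait for a keypress before releasing the console only when `in_windowed_env()` returns 0.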
RE: fastest output
Writing to video memory is usually slower than writing to system memory, so if you use video memory for drawing operations you have to pay for it. For example, software alpha-blending in video memory is usually slower than the same operation in system memory, but a software blit from system memory takes longer than page flipping or a video-to-video copy. So there is no single true answer for all combinations of drawing operations. We also have to take into account the use of accelerated functions - but often those functions are slower than their software equivalents.

Dmitry Semenov

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Saturday, January 29, 2000 03:11
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: fastest output

Andreas Beck wrote:
> what about a memory visual using video memory ? This can be done by
> using e.g. the "sub target" on a main visual with a larger virtual area

I don't understand. You ask me to allocate a larger visual and then use the bytes currently not visible for my other purposes? I think that is a bad idea. First of all, I don't know how many extra buffers I need. This is all dynamic. If a client connects to the server and asks for a graphic he can run his quicktime movie in, I want to back up everything behind and in front of it so I don't need to traverse the scene graph each time a new frame is rendered into this video graphic. In other words, the number and size of the drawables I might want to allocate are completely arbitrary and only known at run time. Therefore I want a 'drawable manager' which is responsible for (V)RAM allocation and deallocation on demand. This might be more in line with what you refer to as an extension. I just want to remind you that we are (will be) quite keen on this.

Stefan
___
Stefan Seefeld
Departement de Physique
Universite de Montreal
email: [EMAIL PROTECTED]
___
...ich hab' noch einen Koffer in Berlin...