On Mon, 21 Aug 2006 16:27:30 -0400, Grzegorz Adam Hankiewicz <[EMAIL PROTECTED]> wrote:

I don't think the former is a problem under Linux. For the
latter, there are other real reasons why things don't progress.
For instance, do you know why it is impossible to make flicker free
page flipping applications with the kernel's framebuffer at least
in 2.2/2.4 (and I believe this won't ever be solved in 2.6 or later)?

Yes, I know. I've done graphics in Linux, as I've said. That's why I know it sucks.

It is hackishly possible in 2.6 kernels to sync to vertical retrace. The new HZ=1000 allows a timer to be set to signal the program shortly before the vertical retrace will occur, and "realtime" priority nearly guarantees that the process runs at that moment, leaving the graphics software to sit in a busy loop waiting out the remainder of the time until the retrace occurs. That wastes about 6% of the CPU (regardless of CPU speed, because the waste comes from the timer's inaccuracy), but it is ultimately better than the situation with 2.4 kernels, where HZ=100 made it impossible to do any better than burning all available CPU time.

The problem as I see it is that Linux uses fixed-interval scheduling by configuring the timer chip to a fixed frequency. It would work much better if Linux used the timer in one-shot mode, simply reprogramming it every time the interrupt calls the scheduler so that it times out the interval until the next time the scheduler needs to run, but that seems to be yet another idea that has only ever occurred to me.

Framebuffer allows a user application without priviledges to
draw graphics.

No it doesn't. Have a look at how it works. /dev/fb0 gives direct access to the real framebuffer, regardless of whether the current console is active or not. Because of that, you still need to be a privileged user, or at least the permissions on /dev/fb0 should require you to be. The other /dev/fb* devices aren't for each console like /dev/tty* are; they're for the additional video cards you don't have. Linux's framebuffer is as fucked up as any other graphics solution it has ever had.

But to provide smooth page flipping you have to
actually wait for a vertical retrace and change video content at
the right time. Now, you are doing this from user space, so your
process might not even get a CPU quantum in half a second.

Actually, I've already figured out how to make this work, a year ago no less, when I wished to display video in 320x240x8 and found that it didn't work if the changing of the color palette wasn't synchronized with the changing of the image data. Worse than tearing, it just about looked as if the screen was full of static when one image's pixel data was shown with another image's palette. So I looked for a solution, found that once again what I wanted to do was one of those things that you're just not supposed to do in Linux, so I did what I always do: I told everyone to fuck off and then I came up with my own solution:

http://www.upbeatlyrics.com/temporary/vsync.tgz

You have to run it as root, as it makes use of both "realtime" priority and direct I/O. It requires a 2.6 kernel with HZ=1000 as well as a few special system calls (you'll get unintelligible errors if your kernel is too old), but meet those requirements and it will figure out your vertical retrace rate and sync to it, all the while keeping statistics on how well it is doing:

Calculating your vertical refresh rate...
Your vertical refresh rate is 60 frames / second.
The correct itimer delay for your FPS is 15000 microseconds.
Retrace sync success to failure ratio: 1:0  Wasted CPU time: 1008 us
Retrace sync success to failure ratio: 2:0  Wasted CPU time: 687 us
Retrace sync success to failure ratio: 3:0  Wasted CPU time: 1343 us
Retrace sync success to failure ratio: 4:0  Wasted CPU time: 1000 us
Retrace sync success to failure ratio: 5:0  Wasted CPU time: 661 us
Retrace sync success to failure ratio: 6:0  Wasted CPU time: 1317 us
Retrace sync success to failure ratio: 7:0  Wasted CPU time: 970 us

...one, two, skip a few, ninety-nine, one hundred...

Retrace sync success to failure ratio: 1866:0  Wasted CPU time: 1257 us
Retrace sync success to failure ratio: 1867:0  Wasted CPU time: 923 us
Retrace sync success to failure ratio: 1868:0  Wasted CPU time: 1581 us
Retrace sync success to failure ratio: 1869:0  Wasted CPU time: 1238 us
Retrace sync success to failure ratio: 1870:0  Wasted CPU time: 897 us
Retrace sync success to failure ratio: 1871:0  Wasted CPU time: 1552 us
Retrace sync success to failure ratio: 1872:0  Wasted CPU time: 1213 us
I'd better stop now before I have so much fun I vomit!

It isn't the best solution, as it wastes 6% or so of your CPU time, but it is entirely possible right now, entirely in user space. And if the kernel's scheduler used a one-shot timer, not only could it run in user space without wasting 6% of the CPU, but it could do the same in the kernel, have its own API, and so root privileges wouldn't be necessary.

So, I don't know, maybe that counts as evidence that I just might know what I'm talking about. Linux can have good graphics, it's just that everyone, like yourself, has a lot of ignorant reasons why they believe it isn't possible.

Cool, but most available video hardware doesn't provide an interrupt
when vsync is going to happen, because they are commodity hardware
not oriented towards games. So you are left with active polling
as the lowest common denominator. Nice. Now any user application
calling that ioctl is going to be able to perform a CPU DOS attack
at kernel level on the machine.

Yes, if you do it a stupid way, you'll get awful results. That's why you don't do it that way. If Linux used the 8254 PIT in one-shot mode, it could get interrupts whenever it needed them, to a certain degree, and it could easily enough use the chip to time interrupts to sync with the vertical retrace while timing interrupts for other things simultaneously, given a suitable algorithm. Getting multiple simultaneous interrupt frequencies out of a one-shot timer is a piece of cake; the slightly difficult part would be setting up a priority system so that when two applications' signals coincide (either exactly or nearly), the kernel knows which one is more important. It wouldn't be a perfect solution, just magnitudes better than what we have now.

Moral: Linux is just not what YOU want it to be, and other people
may actually care more about stability/security than smooth graphics.

I don't know... I guess I'm just a genius. I don't know how else to explain the lack of flexible scheduling, vertical syncing, and a graphics API. I guess I have solutions to all sorts of problems to which everyone else believes there is no solution. Maybe I need to help humanity by creating a web site to document everything that I believe to be common sense as apparently much of what I believe to be obvious isn't.

3.  Create a VESA BIOS driver that uses this interface, and maybe
even a "Basic VGA" driver. [...]

Others have already replied this doesn't make sense on anything
other than x86. Again you are making generalisations which don't
apply to others' hardware.

I'm not making generalizations, I just don't give a shit about anyone else's inability to use a VESA driver. A VESA driver would work for me, so I want one. Next you'll tell me that I can't have a driver for a PC keyboard because that driver wouldn't work on a Mac. Who gives a fuck? I'll use drivers that work for me, you use drivers that work for you, everyone's happy.

I guess it's just a matter that I'm not allowed to have an easy solution to my graphics problems if you can't have an easy solution to yours. Saying that I can't have a VESA driver because not everyone can execute the VESA BIOS makes about as much sense as saying that I can't have a VESA driver because not every video card has a VESA BIOS. That's effectively what an x86 card with a VESA BIOS is in a non-x86 system. How does something as common sense as this manage to go so far over everyone's head? Yes, I forgot, I have no sense of what common sense is.

You seem to despise a lot Linux' graphic support. Yet with
framebuffers a correctly written end user application can run on
any architecture you can think of.

So what? The same could happen with all kinds of other APIs. Portability is nothing but a matter of never accessing anything directly, instead using generic functions, and then creating one of those generic functions for every architecture you want to be portable to. Any piece of shit API can make applications portable, so the fact that Linux's framebuffer API helps make applications portable doesn't mean that it isn't a piece of shit.

Why are you under the impression that I want a non-portable API anyway? I'm not saying that applications should call the VESA BIOS themselves. That's what they have to do now and it sucks. What I want is an API that can work for everything, and I want one of those everythings to be a VESA driver. Software is written for the API, and via that API it can set graphics modes without knowing or caring whether the graphics modes are ultimately set by a VESA driver, a closed source driver, an open source driver, or a magical fairy.

I fucking hate it when people mention portability. All anyone ever said when I was writing my graphics system was "you're not supposed to do direct access, it isn't portable!" SVGAlib does direct access, and X11 does direct access, but apparently I'm not allowed to. No one understands that the reason SVGAlib and X11 applications are portable is that SVGAlib and X11 are written in an architecture-specific way: they access graphics in whatever way they have to, and then they're modified extensively for each architecture that wants them. The applications which use SVGAlib and X11 are portable because SVGAlib and X11 do all of the non-portable stuff on their behalf.

So there's no problem at all with my graphics system doing direct access. It has an API for software to use. I write the x86 version, someone with a Mac writes the Mac version, other people write other versions, and when we're all done, any software which uses my API is portable to any of those systems, because it can just use whichever version of my graphics system is available there; they all expose the same API.

That's how portability works, but I can't get that into anyone's head. They all think portability is something that I can only understand to be some sort of magic, because apparently everything can be written in a portable way, and so anything non-portable, like a VESA driver, is automatically a bad idea because it isn't portable. Don't people stop and wonder about how stuff works, or do they just spend all day memorizing what they read on Slashdot so that they can repeat it all hoping to sound knowledgeable?

I already see many office applications under Linux. With regards
to games, they are special applications which from the beginning
would prefer to not even deal with a multitasking OS, since they
are greedy and do "nasty" things to maintain 60fps.

Well, 30 FPS is absolutely sufficient, and whether it works depends on whether they can get their shit done in 1/30 of a second. Their greed is based on the assumption that that is what the user wants, or perhaps simple ignorance, who knows. It's possible to make a game with a configurable FPS. One could set a game to 10 FPS if they wanted a lot of free CPU time, and "realtime" priority makes it possible for a game to be certain it can have as much CPU as it wants while still releasing what it doesn't need to the rest of the system. That's how my vsync program above works. It uses "realtime" priority so that no other process has priority over it when it comes time for it to receive its timer signal, but it still spends most of its time in a sleep call, so it only uses 6% of the CPU, and like I said, it wouldn't even use that much if Linux had flexible scheduling.

[...] If you write a game for Windows, your game can request
a 640x480x256 graphics mode and it gets it, not just on your
Windows system but on every Windows system on the planet. [...]

You are oversimplifying. Linux is capable of doing that too.

No, Linux isn't capable of that. It may work on your system, but I said every system, and it doesn't work on a significant number of Linux systems. I'm not talking about old Linux systems either; I'm talking about ones that were installed last week. Many Linux systems are in framebuffer mode on text consoles, making mode changes impossible, and X11's ability to change modes without restarting depends on which driver you are using; in particular, it cannot change modes if it is using the framebuffer driver. Some distributions use the framebuffer driver because it is a simple way to guarantee that X11 will work without any problems. So on many systems, games are stuck with whatever resolution and color depth the display happens to be in, even though it is capable of many others, simply because Linux lacks a well-designed graphical system, forcing people to fall back on hacks like Linux's framebuffer just to guarantee ease of installation.

What you are thinking is:

I know what I'm thinking!

I'm not thinking of 3D. I'm thinking of ordinary graphics modes with no acceleration. I couldn't care less about 3D. I'm pissed that Linux screws up everything related to graphics, even the simple stuff. When the simple stuff works, then I'll consider being pissed that the more complex stuff doesn't work.

So much for your ease of programming. If it was so easy from
the beginning, why would Windows games always provide a list of
supported video hardware, plus nearly always ship with READMEs full
of "Graphic card xxx doesn't work unless you download this or that
driver" and "Graphic card xxx is simply buggy and you have to wait
for the hardware vendor to fix his driver"?

Because that's accelerated 3D. Find a game that runs in a simple graphics mode, either 2D or one of the older games that did all of the 3D in software. They don't require anything, they just tell Windows "I want to run full screen in 640x480x8" and it happens, no questions asked. Linux could do that too if it simply had a graphics API.

[...] So why doesn't Linux have games?  Well, simply, it's because
Linux sucks. [...]

Earlier you mentioned business make formulas about profit. Don't
they deal with the fact that no matter how great Linux, MacOS or any
other OS are, they don't have the market share to justify say 20%
of the development cost to support each additional platform?

If you had thought about what I was saying, you would realize my point: graphics in Linux shouldn't be so difficult that supporting Linux costs 20% of the development budget.

[...] The result is that game developers have every reason in
the world to stay away from Linux. [...]

Even if Linux provided all the graphic drivers you are asking
for, it still wouldn't matter, and companies wouldn't flee to
support Linux. Which distro? Which version of libc? Such things
make simple programs like web browsers require several different
packages/binaries for every popular distro out there.

OK, so you do understand why they don't want to support Linux. That's exactly what I'm trying to say, it shouldn't be so difficult.

As far as distributions and different versions of libc go, neither one matters if you do things correctly. Just create executables that aren't dependent on external resources and you'll have a binary that runs on every x86 Linux system. There's a binary in the vsync70.tgz linked above; you'll find it just works. That's because it doesn't use libc, so it makes no difference at all what version of libc you have. All it uses is the kernel, and as long as your kernel is new enough to support all of the system calls it makes, you're good to go. Thankfully, the system call API is something the kernel developers don't change between every release.

Every platform would require yet another costly QA certification
process, more developers with knowledge about how stuff is done on
that distro to make it work and fix bugs, more technical support
specialised for the distro, etc.

Yes, you see, it shouldn't be that way, but it is. It is because every distribution uses a different set-up for graphics because every one of the set-ups sucks ass and so there's no clear winner.

And if/when this group manages to produce the open video card,
I will buy those, because I know they will be even better in terms
of reliability and support, thanks to being open, and I won't have
to hunt ebay to find them (or at least I hope so).

I'll at least want one, but I suspect the price will be too high for my tastes. I'm not a fan of 3D, so the cheapest video card I can find is what I go for. I'd pay more for a card with open documentation, but probably not as much as this card will cost. It would be nice if there were also a low-end card with no acceleration so that it was simple and inexpensive.
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
