Just a side note: This conversation is getting huge, and it's potentially off-topic. We should quickly trim it back and discuss some of the issues off-list. I shouldn't let my personal fascination with the topic cause me to behave unfairly to the rest of the members on the list.
Anyhow.... On 8/20/06, Richard Cooper <[EMAIL PROTECTED]> wrote:
On Sun, 20 Aug 2006 00:27:48 -0400, Timothy Miller <[EMAIL PROTECTED]> wrote: > You start off very hostile, and that's going to be a barrier for some > people reading what you wrote. Well, believe it or not, you're the first person _ever_ who has had a positive response to what I have said.
How did you approach them? Was your hostility in your first post to this list borne entirely out of past frustration from people not listening to you? What was it about what you said that put people off?
> You've just joined a mailing list full > of people who want revolution. Maybe for different areas, but we > nevertheless understand your drive and appreciate it. I just saw the two mentions of Slashdot on the mailing list page and figured I had joined a mailing list full of Slashdotters. I'm probably alone, but I wouldn't consider being mentioned on Slashdot to be a compliment, especially for a project like this. Slashdot generally isn't into anything that isn't just one big misunderstanding.
Slashdot has consistently snubbed this whole project for a long time now. Perhaps they're doing it for our own good?
> We like designing graphics cards... and hardware in general. Some people > like golf. Some people like hacking OS kernels. Others like designing > chips. What's wrong with that? Nothing at all, I just thought you were only doing it because you believed it was necessary. Doing it because you want to is completely different and in particular nearly always a valid reason. I like hardware. I have a Z80 computer that I built myself...I'll have to take a picture... http://www.upbeatlyrics.com/temporary/z80_front.jpg http://www.upbeatlyrics.com/temporary/z80_back.jpg
Neat. I was fortunate enough to get to work with a few classmates on a 68000-based project as an undergrad. One thing that drove me nuts about it was that a number of other groups in the class bought mostly pre-built microprocessor boards and just added their own circuitry on attached perf boards. We hand-wired everything on breadboards and weren't able to get it working right in time. I still got an A in the class, so I think the professor must have taken that into account.
Rather than deal with burning ROM images, I built a serial port interface
[snip]
source code to execution in the Z80 takes only 15 seconds, much faster than using an EEPROM.
I've pondered this problem myself before. I like your solution.
I wrote the assembler for it as well, as I couldn't find any Z80 assembler that I even remotely liked: http://www.upbeatlyrics.com/electronics/z80/sarcasm/ That may well be the only assembler in existence which allows more than one instruction on a line. For some reason, any time I mention multiple instructions on a line to an assembly programmer, they freak out and insist I'm not programming in assembly anymore. Apparently being able to work with poorly structured code is a badge of honor and they're offended that I'm attempting to claim it while using a tool which doesn't make it so difficult, or something like that.
Interesting. It seems like every hacker's invented his own programming language. I wonder why they would be bothered by what you did (rather than impressed). I mean, how is your approach any weirder than using macros?
Two weeks ago I used the Z80 to create a drum machine. Here's a video: YouTube: http://www.youtube.com/watch?v=4cB1J8mjvgI The original .AVI file: http://tinyurl.com/qzjcz I don't think the video is that interesting myself, but everyone else seems to love it.
People love to see results.
Anyway, I definitely understand your love for hardware. I would actually be quite interested in an open video card myself, just
[snip]
cells, and a graphics mode of some sort. Over time, I eventually came to the point where I would settle for anything at all, but there simply isn't anything. I've discovered two video chips, but documentation for neither, and so I can't use either one.
I would suggest you get into FPGA design. With your experience in breadboarding and coding, it should come naturally to you. Then you can design whatever kind of video controller you need, as well as put any kind of CPU design you want also on the same chip. The only disappointing thing is that one FPGA looks much less impressive than 10 square feet of breadboard. :)
I did look into making my own television monitor output once. I got as far as setting up some 74LS123 chips to create all of the timing signals necessary to create a black screen with a white box in the middle, but then I was a bit stuck trying to figure out how to implement dual access
I think one of the approaches was to interleave video and CPU by giving each alternate cycles. Evens for CPU, odds for video. It's completely deterministic, and if you're not going for ultimate performance, it won't hurt much to have the CPU slowed by only accessing memory every other cycle.
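That even/odd split can be expressed in a few lines. Here's a purely illustrative C sketch (none of these names come from a real design) showing the deterministic arbitration rule and the bounded CPU wait it implies:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: on even cycles the CPU owns the video RAM bus,
 * on odd cycles the scan-out logic owns it.  Ownership is a pure
 * function of the cycle counter, so the timing is fully deterministic. */
enum owner { CPU, VIDEO };

static enum owner bus_owner(uint32_t cycle)
{
    return (cycle & 1u) == 0 ? CPU : VIDEO;
}

/* Worst-case CPU stall: a request landing on an odd cycle waits exactly
 * one cycle for the next even one; a request on an even cycle waits zero. */
static uint32_t cpu_wait_cycles(uint32_t request_cycle)
{
    return request_cycle & 1u;
}
```

The nice property is that the video side can never be starved, which is what keeps the display stable even when the CPU is hammering the memory.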
I once had a discussion group where several people were discussing with me a design for my OS. After a month we were making good progress, but then someone began talking about hot keys, which was completely irrelevant as we were discussing kernel design, and I guess it just became too boring for everyone. I even went as far as writing half of an assembler in assembly language, so that I would have something I could later port to my new OS, but then I lost interest as well. I start all kinds of projects and quit halfway through.
It can be a challenge to maintain focus. I've had people unsubscribe from this list because I am so resistant to feature and scope creep on this project. I listen to alternate ideas all of the time, but no one's given me a solid enough argument to completely deviate from what I know would work, at least reasonably well. There is inertia HERE, but that is inertia against getting so lost in our fantasies that we never build anything tangible.
either. I don't become as depressed as bipolar people usually become and my headaches aren't nearly as severe as most migraine sufferers', so it's a complete mystery.
Have you considered other things like infections, immune dysfunction, or food allergies? I've recently learned that I have an allergy to soy, and that's been causing havoc with my immune system for a VERY long time.
> After reading the rest of your argument, I don't believe open source > vs. closed source is a central issue here. No, not at all. The only open/closed issue is the kernel developers being so anti closed source. I personally think it's pathetic nit-picking, I
I'm one of the proponents of open source drivers because I see immensely important practical and philosophical reasons for it. I understand the drawbacks to that philosophy, but no one way solves all problems.
Like I said, I think it's just the kernel developers wanting people to bend over backwards for them. The hardware vendors don't want to supply source code and they don't want to supply documentation. I don't understand their position, and I'll agree that it's just silly, but they work fine with Windows without such things and I can also understand why they might not play well with having their policies forced by an OS with a smaller market share. The kernel developers should be happy that anyone is willing to help out with Linux at all, even if they're only willing to do so with binaries.
The philosophy that created Linux in the first place and caused it to grow to what it is now is antithetical to the idea of having closed-source drivers. They are wholly incompatible philosophies. Linux without open source philosophy (and open source drivers) isn't Linux. It's something else. Perhaps what you want is not Linux.
> On the other hand, visible source code is > part of what makes Linux an amazing thing. As you start hiding bits, > you start taking away its ability to grow. Much of it's already hidden. Search the internet and see how long it takes you to find information on how to create a graphical system for Linux such as X11 or SVGAlib.
The information for these things isn't hidden. It's just completely absent. These things weren't designed. They were tossed together haphazardly. THAT is the problem. No one sat down and actually THOUGHT about a right way to do things. You might want to learn C just for the utilitarian aspects of it. It's not the holy grail or anything. It's just another programming language.
enough that the mechanism isn't well enough documented for even people closer to Linux development than myself to understand the proper way to do things.
Linux has always been documentation-poor. It's getting better, but there are a lot of things like this that trip people up all of the time.
> The very fact that it's open source is what makes it possible for > people with ideas like yours to make fundamental changes to the way we > do things. Well, it isn't possible for me, as like I've said, I hate C. Even if I did like C, I still don't believe I would get far. You are the only person I've ever talked to who has agreed with me. (aside from my friends, of course)
You don't have to be a C programmer to be an architect. If you design something well, C programmers will find it interesting and help you implement it!
Well, I just might not unsubscribe from the list after all. I had planned to leave in a few days, after making sure I hadn't, by dumb luck, received a reply such as yours. I still can't believe someone agrees with me. Everyone has always argued with every point I've tried to make.
We have our own problems here, I'm sure. I think part of it is that we don't have a status quo yet. Another part is that I enjoy debating (a little too much), but I hate pointless debate. If you have an argument, I'm going to address the points you make. I almost never completely disagree with anyone. Everyone has good points to make. If you keep that in mind, you can learn a lot. A trend I've seen among a lot of "experts", though, is that they feel threatened when someone wants to change the way they've been doing things. Some experts understand their stuff well and can adapt to new ideas. Some fought hard to learn what they know, but their knowledge is crystallized. You may be running up against that.
>> I've given up. > > No, you haven't. If you had, you wouldn't be posting here. Well, I had no intention of replying short of a reply like yours, but I guess I would have unsubscribed from the list immediately if I had no hope of seeing one. > the fact that Linux graphics architecture needs to be completely redone > is a no-brainer. Honestly... I'm in disbelief that someone else typed that sentence.
On the other hand, I think LOTS of things need to be redone completely, and I'm not sure Linux Graphics is at the top of the list. How about intellectual property law?! :)
> I didn't understand this until I read what you'd written below. I > don't think we want to remove it from the kernel completely. But I do > agree that it should behave in many ways like a user process. The > main problem is that we need to have it running very early in boot. > This is also one of those things that is so core and fundamental that > I really don't want to remove it from the kernel and make it rely on a > userspace app. I'm not a fan of scrolling text. I'd just as much like to see the kernel's startup information displayed in a nice menu system or something much better formatted. Forget the VT-100 emulation, just load the video drivers at boot and have the kernel display a nice boot logo with some text at the bottom about what exactly it is doing at the moment, and if something important comes up it can display more information then.
So you want to do like the Mac... never have a text console. Everything's graphical from the beginning. If you never have a teletype to begin with, you won't become dependent on it. But with that, many other aspects of Linux, like the command line, may have to go as well. I won't say that I'd miss it much. :)
ways. To help slow modem communications, there are macros as well, which allow you to send one or two byte codes which result in up to 256 bytes of codes being inserted into the display stream.
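That macro scheme is easy to picture in code. A minimal C sketch, with an entirely made-up macro table (the real codes and contents would be whatever the protocol defines):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the macro scheme described above: a one-byte
 * macro code indexes a table of pre-defined sequences (up to 256 bytes
 * each) that get inserted into the display stream in its place, so a
 * slow modem link only has to carry the short code. */
#define MACRO_COUNT 2

static const char *macro_table[MACRO_COUNT] = {
    "\x1b[2J\x1b[H",   /* macro 0: clear screen, home cursor (7 bytes) */
    "== status ==\n",  /* macro 1: a canned header line (13 bytes)     */
};

/* Expand macro `code` into `out`; returns bytes written, 0 if the code
 * is unknown or the output buffer is too small. */
static size_t expand_macro(unsigned char code, char *out, size_t outsz)
{
    if (code >= MACRO_COUNT)
        return 0;
    size_t n = strlen(macro_table[code]);
    if (n > outsz)
        return 0;
    memcpy(out, macro_table[code], n);
    return n;
}
```

One byte over the wire, up to 256 bytes into the stream: a decent compression ratio for a 2400-baud link.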
I like your ideas about doing away with ASCII keyboards and dropping to scancodes. X11 does this and also provides facilities to translate to ASCII. Are you saying that you want the graphics API to be a byte-stream protocol rather than a function-call interface? There are tradeoffs either way.
up. Which is what I've been saying: If things are easy, people won't be able to pass up the opportunity to do them. Make Linux programming easy and people will do all sorts of wonderful things.
Well, perhaps you could get started on what you think should go into the graphics API, and we'll see where it goes.
As an example, my software works by running as a separate process reading and writing these codes to anonymous pipes attached to another program's STDIN and STDOUT, and so it is possible to use it to write Perl scripts with graphics support.
It's like how X11 is network transparent. Nice. I might suggest a layered protocol. The lower level is a local function-call interface that has low-overhead. On top of that, you can use a serializer. You can have an 8-bit serializer optimized for speed. And you can have an ASCII serializer that is more human-readable. And moreover, you can invert the layering so that a program written to the function-call interface gets its commands converted to bytestreams.
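To make that layering concrete, here's a tiny C sketch. Everything in it is invented for illustration (the opcode, the struct, the 16x16 framebuffer); the point is just the shape: a local function-call layer, a serializer on top, and a dispatcher that inverts the layering by replaying wire records onto the function calls:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical two-layer sketch of the idea above. */

enum { OP_PIXEL = 1 };   /* made-up opcode for "set pixel" */

/* Layer 1: the low-overhead in-process function-call interface. */
struct framebuffer { uint8_t px[16][16]; };

static void fb_set_pixel(struct framebuffer *fb, uint8_t x, uint8_t y,
                         uint8_t color)
{
    fb->px[y][x] = color;
}

/* Layer 2: serialize the same command into a compact 4-byte wire
 * record, suitable for a pipe or socket. */
static size_t ser_set_pixel(uint8_t *buf, uint8_t x, uint8_t y,
                            uint8_t color)
{
    buf[0] = OP_PIXEL; buf[1] = x; buf[2] = y; buf[3] = color;
    return 4;
}

/* Inverted layering: a dispatcher on the receiving end replays wire
 * records onto the real function-call API.  Returns bytes consumed. */
static size_t deser_exec(struct framebuffer *fb, const uint8_t *buf)
{
    if (buf[0] != OP_PIXEL)
        return 0;
    fb_set_pixel(fb, buf[1], buf[2], buf[3]);
    return 4;
}
```

A local client links against layer 1 directly and pays no serialization cost; a remote client goes through layer 2 and gets network transparency for free.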
Where was I going with this?... Oh yes, the kernel using the VT-100 at boot. It doesn't need it, there are better ways to do things, and it couldn't hurt Linux at all to be forced out of the 1980s.
I think it was Jon Smirl (someone I think you would get along with) that suggested a trivial ASCII-only interface in the kernel just for boot messages. No virtual terminals. Then a daemon takes over later, providing that functionality.
I'm sure everyone would like to see a nice boot graphic anyway, definitely something better than that stupid penguin you see in framebuffer mode which not only doesn't fit but apparently no one could be bothered to so much as center it. I honestly just want to beat my head into a wall every time I see it.
Well, there's always the memory space necessary to hold the graphic....
> Lots of people have made this suggestion before. I'm one of them. > The only problem is that we'd have to define some sort of pcode > language for the drivers to be written in. Otherwise, you won't be > able to use the same drivers for x86, x86-64, PowerPC, Sparc, etc. > Architecture-specific binaries are out of the question, because > they're not portable. Why do they have to be portable? Is it a matter that if graphics don't work for PowerPC then they're not allowed to work for regular PC either? Do these other architectures even have PCI slots? I hate portability. I may be forced into going into why by the end of this.
Linux's portability has contributed to its growth in a number of different ways. The obvious bit is that it's on more machines. But portability forced Linux to generalize and clean up a lot of facilities that sucked a lot more before than they do now. Portability is part of the driving force behind the development of generalized and sensible infrastructures such as your proposal for centralized Linux graphics drivers. Embrace the _whole_ philosophy. If you're going to generalize, don't stop short.
> Perhaps a pcode-to-native compiler could be part of the installation Well, I suppose if that would make everyone happy, though I think that is overcomplicated and unnecessary.
For the most part, it's either open source or pcode. Cutting out portability would be like cutting out open source. It removes from Linux the things that have made it great so far.
I've got a box full of 480x64 LCD displays, and if I had a PC with no spare monitor, I might be tempted to code up a graphics driver for one. It would essentially just have to take commands from the kernel and shove them out a parallel port. That might not be possible with this pcode, or possible at all if the kernel demands a memory-mapped interface, unless I could just point it to shared memory. But more importantly, I'd like to write the driver in whatever language I choose to write it in.
That's part of why it's better to have open source drivers. Developing the API _will_ impose preconceived notions and limitations. But the pcode idea will be even more restrictive. With a proper framework like what you want but also open source drivers, then no one has to anticipate that the graphics driver might want to talk to the parallel port driver. You can just use the facilities there to do it.
practically, I think the closest we can come to that is to supply a standard API that they can use from any programming language and from binary only code. Forcing them to learn a new programming language and rewrite all of their driver code for this new language isn't making things easy at all.
We could provide a GCC back-end that allows the programmer to compile any language to the pcode. And then a GCC front-end that compiles pcode to native. Just like Java and C#. We'd just want to simplify our VM model because it has to work well in a kernel, which Java and C# are not suited for.
I'm sure it's a good idea from the perspective of getting the portability that everyone seems to be obsessed with, but portability really needs to be dropped like a bad habit when it gets in the way, which is to say just about always.
The portability is the reason Linux has made a relatively seamless transition to 64-bit, while Windows hasn't done it yet. No, I think portability is an underlying cause of many of the things that you and everyone else like so much about Linux.
> Is this what you had in mind? Let people with a PowerPC get PowerPC cards from a PowerPC vendor with a PowerPC driver. That will fix the problem easily.
I don't know about you, but I like interchangeable parts. I'd like it very much if the same graphics card that worked in a PC also worked in a Mac. You want a modular OS... don't get it at the expense of modular hardware.
> But there's no room for Free Software > developers to improve the drivers, taking away part of the biggest > benefit of Free Software. Why cripple that philosophy? Well, it doesn't work as well as everyone would like to believe it does, and I don't like other people's philosophy getting in the way of me being able to enjoy my computer. If people want open-source drivers, let them buy video cards that come with open-source drivers. Let those of us who don't give a shit buy whatever we want. It's not fair to pull for your cause by removing everyone else's option to not care about your cause.
Linux is what it is because of the philosophy of having open source everything. If that philosophy were not there, Linux wouldn't be what it is. It would be Windows or something else. You cannot separate something from its origins. Aside from graphics infrastructure, do you like Linux? Do you like the way it is? It's not perfect, but it's very good, and it's very good because it was built under an open source philosophy. Take away that philosophy, and you don't have Linux anymore.
> Before long, > every sort of driver will have an API defined for it, relegating the > open source parts of the kernel to only rudimentary pieces. I would love to see that myself.
Then you don't want Linux. I'm not saying that alternatives to Linux are bad. You just don't like Linux. Have you looked into SkyOS? How about BeOS or whatever they're calling it now? There are a handful of others that have learned engineering lessons from Windows, Linux, and many others and have developed solid infrastructures around all OS facilities. Rather than changing Linux into something incompatible with what brought it about, how about investing in something that already conforms to your expectations? Mind you, Linux really does need some change, but those changes needn't contradict its basic driving force.
The kernel is overcomplicated. None of those drivers should be in there, but instead there should be a driver API. In particular, I do have the knowledge to create my own ISA cards, but unless I learn C, I can't make Linux support them. It should be possible for someone who writes a driver to write that driver in whatever programming language they like.
Well, we don't need to get into microkernel arguments, which is where you're headed. Suffice it to say that Linux really needs some extensive internal reorganization.
I also don't like that there isn't a means for user-space applications to create APIs compatible with kernel APIs. For example, I can't write a piece of software that connects to my sound card and creates a dozen fake sound cards and simply mixes all of their input because no other software would know how to connect to it because I couldn't create the same API that the kernel creates.
Arguments about in vs. out kernel aside, I think all driver interfaces should be virtualized. I think disk storage should be virtualized. I think EVERYTHING should be virtualized. Mind you, beyond a certain point, too many levels of abstraction can become a performance problem. Good generalization will speed things up. But done wrong, you'll just make things slow.
That was one of the features of my own OS design. The entire kernel API was inter-process communication. A process could connect to something named "sound card" or it could connect to something named "fake sound card" created by a user-space process and use exactly the same API over the IPC. It's pure modularity, and it's where my ideas of how to do graphics in Linux come from. In my OS you'd have a video driver which provided video access through an IPC API, and a "console multiplexer"
There are lots of microkernel OSes that are like this. And have you looked into Microsoft's Singularity project? That's some neat stuff. There's a major push in computer science towards more and more modularity. It's better for stability and security. But it's not always better for speed.
program that accepts IPC connections and passes the data over a TCP stream to another PC where a similar program turns it back into IPC communications, and then programs that were never intended to send their video over the network can do it without having any idea they're doing anything differently. I'd like to see that kind of an OS, but like I said, I couldn't get people to stop arguing over hot keys to concentrate on what was really important, which was figuring out how to make that IPC as generic as possible so that it never had to be modified.
This also reminds me a lot of Plan 9. You'd like aspects of that OS too.
> After a > while, so little of the kernel will be open source that there's little > point in it being open source, and the development stagnates. Again, that's something I'd like to see happen.
You'd like to see development stagnate? Seriously, what you're after is NOT Linux. Linux is how it is because of the open source development process. Both the good and the bad. What YOU want is an entirely different OS.
With my OS, once the kernel was done, it would be done. No further work would be necessary because drivers would be drivers, software would be software, (and really drivers would be software), all the kernel would do is manage the CPU and memory, hardware such as DMA and IRQs, and implement the IPC. I'm just not a fan of monolithic kernels, and if a driver API caused Linux to turn into more of an average kernel, I wouldn't be disappointed at all.
Also, I think GNU Hurd has a lot of these ideas too. Basically, you're just taking the microkernel idea to an extreme, which a number of academic OSes have already done. Research them. You're more than likely to find an existing one that has exactly the structure you need so you can start building on it. Linux developers may not think like you, but lots of C.S. professors do.
> This > defeats the whole idea behind Free Software and giving one full > control over one's computer. You won't be able to look under the hood > anymore, making Linux no better than Windows. Again, I think my OS idea would do that far better than Linux. As for looking under the hood, how about a program that maps out all of the IPC API connections for you? You can see how all of the pieces are interconnected, take them apart, join them in different ways, all from a GUI application. If you want to stream your favorite radio station to a friend over the internet, you use a command line like this: mp3_compress 'sound:44100:16S:stereo' 'tcp:192.203.118.9:8000'
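A sketch of how such a target string might be split before being handed to the IPC layer. The struct, function name, and size limits here are all hypothetical; the example target strings come from the command line above:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: split a target spec like
 * "tcp:192.203.118.9:8000" into the named service and its options.
 * The service name is everything before the first ':'; the rest is
 * passed through uninterpreted for the service itself to parse. */
struct target {
    char service[16];
    char options[48];
};

/* Returns 0 on success, -1 if the spec is empty or the name too long. */
static int parse_target(const char *spec, struct target *t)
{
    const char *colon = strchr(spec, ':');
    size_t n = colon ? (size_t)(colon - spec) : strlen(spec);
    if (n == 0 || n >= sizeof t->service)
        return -1;
    memcpy(t->service, spec, n);
    t->service[n] = '\0';
    snprintf(t->options, sizeof t->options, "%s", colon ? colon + 1 : "");
    return 0;
}
```

The kernel (or namespace daemon) only resolves the service name; everything after the first colon is opaque, which is what lets a user-space "fake sound card" accept exactly the same specs as the real one.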
Again, I refer you to Hans Reiser's discussion on namespaces. This is exactly that, and it's incredibly powerful. Sadly, one of the things that makes Linux "great" is that it's popular and has lots of software available for it. Linux struggles against Windows due to install base, not any sort of technological edge. Your OS may be a CS prof's wet dream, but unless you can get all the old apps to work with it, few people will use it.
the kernel it wants access to something called 'tcp' with the options '192.203.118.9' and '8000' and that it requires write abilities and
One of the things that makes UNIX powerful is the unification of device and file namespaces. You're trying to go in that direction even more. Look at Plan 9.
the IPC calls to a filesystem driver there. Because everything uses the same IPC, it's as easy as building something out of Lego bricks. If you don't like where a particular block is, you just pull it out and put it where you want it, and it just fits because all of the pieces have the same connections.
You have the right idea. Reiser4 does this by implementing metafiles under files when they're treated as directories. Since it uses the same filename syntax as before (generally), programs that were written earlier can take advantage of the new functionality seamlessly.
learning how it works. A system like mine would allow you to simply learn what you cared about and ignore the rest.
Good documentation goes a long way to helping with this.
better than Linux, so it would be a pointless waste of effort. If I already had the Windows equivalent of my Linux knowledge, I'd certainly switch to Windows just to be done with Linux as I think that Windows would be just a hair more fun to use.
Like I say, YOU don't REALLY want Linux. You're just resistant to change like many other people. :)
And what Linux does is called innovation? It seems to me like they're simply constantly figuring out new and unique ways to screw everything up.
Heh. Reminds me of some other popular software vendors. :)
In any event, I didn't mean to say that it couldn't ever change, but that it couldn't ever become incompatible with itself. It needs to be the case that if someone writes a driver today for their video card, then that driver will work with Linux for at least the next seven years without any modification.
That isn't true for ANY OS that is under active development. Nor do many peripheral devices tend to get used for that long anymore either.
It can't be the case that a vendor makes a driver, puts it in a box, it sits on a shelf for a year when it is finally purchased, and the driver no longer works, and the vendor gets bitched at for claiming their card supports Linux when it doesn't. They should be able to write the driver and be done with it, not have to constantly update it. So new features aren't bad, it just has to be that drivers written to the previous standards still work just fine.
Linux is also popular because of the wide range of devices it supports out of the box... mostly because they're part of the mainline kernel and get upgraded along with the kernel. You don't like Linux largely because your values are completely different from the values of those who work on it. It isn't wrong to have different values. But it does seem counterproductive to fight the core philosophy of Linux rather than looking for something with a different core philosophy.
Typically, whenever I find a project which hasn't been active for a few years, the software no longer works. It's usually because GCC insists on being portable to every system except the one from two years ago, and so the code simply doesn't compile anymore because those jackasses have once again changed their opinion of how everyone should write their code, but sometimes it is because something about the kernel has changed and it simply doesn't work anymore. This is especially true for graphical applications (and I don't mean GUI applications) as a result of Linux graphics being in a constant state of being fucked up in a different way every year. That's a mess no hardware vendor wants to get involved in, and that's what I think we need to avoid. People need to be able to just write a driver and then forget about it.
Microsoft prides itself on being able to run really old apps on their latest OS. But they also frequently lament how it bloats their OS and impedes innovation. And Windows has never tried to maintain a consistent driver interface from one major release to the next. At Tech Source, we needed to support Windows 2000 for our medical cards. Then we needed to support XP. A lot of code could be copied. A lot had to be done from scratch.
> What I think we need are standardized interfaces that are scalable, in > the sense that they can grow and adapt to innovation without requiring > excessive work to adapt to. Some of those innovations may require > deep changes in drivers, unfortunately. With open source drivers, > those making the changes to the framework can also fix up the drivers > at the same time. No one ever fixed my code when they modified the kernel in such a way as to break it.
If that's so then they're not doing their job.
how the person who said that could be so sure. To make use of a WinModem, the kernel would have to make up for its shortcomings. Even with documentation they would still refuse to do it simply because of philosophical reasons.
Those philosophical reasons are critical to what makes Linux Linux. You cannot do away with them without turning it into a completely different OS.
The kernel would have to work with the driver to try to end up with a working video system, but that doesn't seem to be how things happen now, which is what concerns me. I can easily see the kernel developers changing the API a little and simply insisting that it isn't a big deal for everyone to update their drivers, and so the kernel will no longer work with the old drivers because keeping the compatibility code wouldn't be 'elegant' or some shit like that. If you want 'elegant' then you need to make the API right the first time, not change it every six months and insist that hundreds of people make up for one person's mistake.
In the ideal, I agree with you, but there also needs to be room for supporting new innovations. This is no easy task.
bits for the fractional portion of the degrees. That's sufficient to specify a point on the earth to within 2.5 cm, far better than a GPS will ever manage to do, and in a format far less complicated than floating point.
It's sad when you find that people didn't stop to consider something simple like this, isn't it? I also find it to be sad when I make very similar sorts of mistakes. :)
In the same way, there may be many advantages to using this pcode, whatever it is (I have no idea), but if they aren't advantages that some particular driver developer is interested in having, it isn't going to make them any happier that you had their best interests in mind when you overcomplicated the interface.
The pcode isn't something they would ever see or generally have to think about. We'd provide them with docs, a toolkit, and a compiler that converts C (and other languages) to pcode. When installing the driver, the installer would convert from pcode to native. The driver developer would only know about this to the extent that they have some debug options they can play with.
I would say that if it can be done without anything fancy, then do it without anything fancy.
Unfortunately, the portability issue is paramount.
Quote: "If you represent a GPS manufacturer interested in qualifying your device for use with Linux and other open-source operating systems, we are your contact point. We'll need (1) on-line access to interface documentation, (2) a few (as in, no more than three) eval units, and (3) an engineering contact at your firm."

So much for only needing documentation. I don't know whether ESR wrote that or someone else did, but ESR apparently wrote most of gpsd, and from that it seems the gpsd developers really think that the hardware manufacturers are interested in kissing ass. This is GPS data. Documentation is sufficient. One test unit might be nice just to make sure your code is correct, but three test units is simply asking for free toys.
Perhaps. Of course since many FOSS developers are doing the work for free, it helps when they don't have to pay for the device.
In any case, it's obvious that the documentation isn't coming, so it's time to stop expecting it to and start simply making things work in whatever way is possible. Either that, or forever have an OS which has little graphics support. If that's what some people's ideology leads them to want, then let them simply not make use of the closed source drivers, but don't punish the rest of us for your personal philosophy.
Linux is a different way of looking at things. People want Linux, and the response is, "If you want to use Linux, then you should play by our rules. If not, use another OS." Many Linux users think it's heresy to suggest using a different OS. But by their words, they tend to say just that all the time. Anyhow, I like Linux in large part because it's open source. If Linux were to compromise and start making closed-source binary drivers standard, I would become very disillusioned and probably switch to FreeBSD or something.
> The only cost for supporting Linux, though, is releasing docs. And any vendor worth their weight in salt has good internal documentation on their hardware. Actually, many of them don't, but that's their failing.

Well, it isn't happening, which means one of four things:
1. Releasing docs isn't the only cost of supporting Linux.
2. Releasing docs is a higher cost than everyone believes.
3. There is no documentation.
4. Hardware vendors refuse to do open-source drivers for ideological reasons.
To some extent, those are all true. Also, from a business point of view, I don't blame them for taking the stance they do. That presents them with a huge dilemma when it comes to Linux, which makes things very interesting.
As far as #4 goes, if the kernel developers can be stupid about things, why can't the hardware vendors be stupid about things as well?
Well, they are, aren't they! Here's perhaps a key disconnect some people make: Closed-source drivers aren't inherently stupid. But closed-source drivers in LINUX _are_ inherently stupid. Square peg, round hole. I have other reasons for disliking non-free software. I myself have been bitten plenty of times by relying on an expensive piece of software that I couldn't get to function because the vendor no longer existed or refused to support us. If I (or my employer on my behalf) pay $100,000 for a piece of software, they damn-well better support it forever, or else give me the source code!
Whatever the reason, it isn't happening, and it's stupid to sit around and wait for it to happen when we could just as easily come up with another solution.
Open graphics is coming up with ONE solution. ATI and nVidia aren't wrong for not wanting to write open source drivers. Fine. Let someone who DOES want to write open source drivers do it instead! Problem solved! (Well, one of them anyhow.) There is no reason why people should HAVE to use ATI cards with Linux. That's as much of a choice as the decision to use Linux instead of Windows. If someone is so gung-ho about using Linux, then they can put up with the limitations and deal only with vendors who support open source.
Well, I was thinking of two APIs. I think you figured that out. An API for the driver to communicate with the kernel and an API for applications to communicate with the kernel. I can't think of any way for the applications to talk directly to the driver when the driver isn't part of the kernel, but I think that would work as well if someone can figure out a way to do it.
I have a pretty solid picture in my head of how it should go.
> BTW, as far as APIs go, XAA is quite nice, and I've heard that DRI is > very good too. They're both very well standardized. But like > anything, they can be improved. That's the problem, of course. You > can't develop an API that's perfect. Well, I don't know anything about 3D graphics, so I'm just looking at this from a 2D perspective. People can screw up the 3D API all they want for all I care. I doubt I'd ever use it.
Hey, don't stop short of your goal here. To have a really good API, it has to be COMPLETELY generalized. Past a certain point, there isn't much difference between 2D and 3D. Just a few extra parameters here and there. (Ok, I'm oversimplifying, but there's no reason the APIs cannot be unified in a sensible way.) Lots of the differences would be hidden behind graphics contexts. This is something X11 does well but OpenGL does poorly (or not at all). A given rendering operation is a combination of two things: what to do and how to do it. For instance, what to do might be "draw a rectangle", while how to do it might be "red, with alpha blending". When you create a new GC, you get default settings, so your API looks very simple, and you don't have to think about the advanced stuff. But when you want to do something advanced, it's as simple as tweaking a GC attribute.
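The graphics-context separation can be sketched like this. All names here are invented for illustration; the point is just that "what to do" (the fill call) is a different axis from "how to do it" (the GC attributes), and that a fresh GC carries sensible defaults so simple code stays simple.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical graphics context: the "how to do it" half of a request. */
typedef struct {
    uint32_t fg_color;      /* default: black */
    bool     alpha_blend;   /* default: off   */
} GC;

GC gc_default(void)
{
    GC gc = { 0x000000u, false };
    return gc;
}

/* The "what to do" half: fill a rectangle. A tiny 4x4 framebuffer stands
 * in for real video memory; alpha blending is elided in this sketch. */
void fill_rect(uint32_t fb[4][4], const GC *gc, int x, int y, int w, int h)
{
    for (int j = y; j < y + h; j++)
        for (int i = x; i < x + w; i++)
            fb[j][i] = gc->fg_color;
}
```

Advanced behavior is then a one-line tweak (`gc.alpha_blend = true;`) rather than a different entry point.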
> And that brings us back to the problem of unchangeable APIs. What > if we forgot something? What do we do? Well, I hope you forgot something 3D, because if you forgot something 2D then I'm just going to have to be very upset.
You can't seriously think of doing one without the other. Your goal seems to be a truly universal API. Let's not do half the job.
> Here's an example: Many drawing engines have an "I am completely > done" interrupt. DMA hardware and software use that to set up the > transfer for the next block of commands. In the mean time, the GPU is > IDLE. Just sitting there wasting time. What if we wanted an > interrupt that said "I'm nearly done; go ahead and set me up for some > more commands"? The only drawing engine I know of that does this is > the one I designed for Tech Source. If you were to base your API > around existing GPU designs, you would be forever unable to take > advantage of this unusual feature, and you'd lose out on a potentially > significant performance boost. Is there any harm in sending an "I am completely done" interrupt when you aren't completely done?
Then it wouldn't be an "I am completely done" interrupt. The other cards send the interrupt when they're idle. That's just how they're engineered. Anyhow, I like Jon Smirl's idea of just pushing an interrupt command in as a DMA packet.
I just see it as a card with a buffer, and the interrupt as a "you can send more data now" interrupt. However, does the API need to even care?
Not in the slightest, although it should be designed to facilitate certain aspects of how things would be done by most hardware (like queueing things up and sending in bulk).
Keep in mind I don't know anything about 3D, but I would think the application just sends the commands to the driver via the API, and the driver has a long list of commands that need to be sent, and in particular, it's sufficient that the driver know what the interrupt means. The driver can simply check an I/O port or whatever to determine what the interrupt meant, and then either send more data or tell the application that the card is completely done. I would hope that the API doesn't insist that no more data be sent until the card is able to receive it.
Exactly.
There is a problem with designing APIs around how you intend them to be used, but to be honest, and I know it sounds a bit like "I'm the smartest person in the world!", I simply don't know how people screw them up.
I'm just calling for bulk processing. That's always a win.
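The "I'm nearly done" interrupt discussed above can be modeled as a command ring with a low watermark: instead of signaling only when empty (idle), the hardware signals when the queue drops to a threshold, so the host refills before the GPU starves. Everything here (sizes, names, the threshold) is invented for illustration.

```c
#include <stdbool.h>

/* Hypothetical command ring with a "nearly done" low-watermark signal. */
#define RING_SIZE 8
#define LOW_WATER 2   /* signal the host when this many commands remain */

typedef struct {
    int cmds[RING_SIZE];
    int head, tail, count;
} Ring;

/* Host side: enqueue a command; returns false if the ring is full. */
bool ring_push(Ring *r, int cmd)
{
    if (r->count == RING_SIZE)
        return false;
    r->cmds[r->tail] = cmd;
    r->tail = (r->tail + 1) % RING_SIZE;
    r->count++;
    return true;
}

/* "GPU" side: consume one command (caller must ensure count > 0).
 * Returns true when the refill interrupt should fire -- before idle. */
bool ring_pop(Ring *r, int *cmd)
{
    *cmd = r->cmds[r->head];
    r->head = (r->head + 1) % RING_SIZE;
    r->count--;
    return r->count == LOW_WATER;
}
```

With LOW_WATER at 0 this degrades to the usual "I am completely done" interrupt; the whole point of the feature is that the threshold is above zero.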
The manual for gpsd says that zero values indicate unavailable data, when shortly before I had been reading ESR complaining about how NMEA is bad because the only way to indicate unavailable data is by specifying zero, which is also a perfectly valid value. When reading VESA documentation I also found several API items which were obviously unworkable but somehow became part of the standard anyway. Reading the API for Menuet was just bad design after bad design, and you would think that the guy who wrote Menuet's second sound API would have done it well enough to not require a third, but even the third guy only designed the API to do what he planned to do with it. I just don't understand how other people think.
Committees. :)
So maybe I'm being foolish, but I don't believe it's impossible to get the API close enough to perfect that any issues that come up won't be significant. Even with your card, the problem would at most result in less than perfect performance. No matter how strange the card is, it should be possible for the driver to at least emulate a normal card, and even if that results in not every feature being available, it still results in a card that very much works.
Adding new calls to the API and new attributes and having a generalized way to refer to those attributes is one way to make a scalable API.
However, again, what I meant was that a driver that works today needs to still work seven years from now. Unless someone completely screws up the API, even a completely new way of doing graphics would only require a supplemental API.
I suppose we could think of ways to modularize the API. I don't like how X11 has "core" and "extensions". If we do away with that and define "API modules" so that nothing is a second-class citizen (and suffers from the added indirection), then we can make the API grow. Windows GDI has some interesting ways of doing things that involve fallbacks. You install whatever driver functions you have. GDI then uses what you provide. If you provide a specific function for something, it'll use it. If not, it'll try to use other functions to emulate it. If it can't, it quietly falls back to software rendering.
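The GDI-style fallback scheme described above boils down to a dispatch table: the driver fills in the operations it accelerates, and the framework quietly falls back to software for anything left NULL. This is a minimal sketch with invented names, not the actual GDI interface.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical driver operation table: NULL means "not accelerated". */
typedef struct {
    int (*fill_rect)(uint32_t color);
} DriverOps;

/* Software fallback; returns 0 to mark the software path in this sketch. */
int sw_fill_rect(uint32_t color)
{
    (void)color;
    return 0;
}

/* A driver-provided accelerated version; returns 1 for the hardware path. */
int hw_fill_rect(uint32_t color)
{
    (void)color;
    return 1;
}

/* Framework entry point: use the driver's function if present,
 * otherwise fall back to software without the caller ever noticing. */
int do_fill_rect(const DriverOps *ops, uint32_t color)
{
    if (ops->fill_rect)
        return ops->fill_rect(color);
    return sw_fill_rect(color);
}
```

Because the fallback lives behind the framework entry point, nothing is a second-class citizen: an "API module" that a card lacks simply resolves to the software implementation.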
If the driver sucks, just don't use it. Don't implement stuff to fix the shitty driver for your video card that punishes everyone else who bought a card from a vendor who wrote a good driver. In particular, it doesn't do any good to prevent drivers from doing harm if the prevention methods complicate things to the point that no one wants to write drivers. The requirement to learn a new programming language is just one more reason not to make a Linux driver.
C is not by any means a new programming language. But in any case, part of the reason for poor closed-source Linux drivers is because the infrastructure is poor and poorly documented. Part of the problem is vendors who don't care to learn how to do things correctly.
I say just make a simple interface for binary drivers. Those who want open source drivers can simply refuse to use the drivers which aren't open source. Those whose vendors supplied bad drivers can either use the bad drivers or simply buy new video cards and remember never to buy that vendor's shit ever again. Those whose vendors supplied good drivers can use those drivers and tell all of their friends how stable they are.
There are some who would say that there are no good closed-source drivers. Not that there couldn't be. It's just that the vendors do a lousy job. This is another argument people make in favor of open source drivers.... many eyes and all that.
it off anyway. Applications generally aren't built from lines and solid fills anymore, and so there isn't much to be accelerated. Drawing the window border is accelerated, drawing everything inside the window proceeds at ordinary speed because it's all shiny images. So what's the point? Just get a framebuffer and draw into it.
Actually, a lot of these things are "accelerated" by pre-rendering them as pixmaps and using bitblt to display them. Bitblt is probably the most important function to implement. By no means would we want our API to give direct access to the real framebuffer. Different framebuffers have different formats.
However, the usual way of dealing with a slow API call is the same here as well: buffer up as many of them as you can and send them all at once.
Then we definitely see eye to eye on that one. Bulk processing. Minimize overhead.
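The buffering strategy above (queue API calls locally, flush them to the driver in one go) can be sketched as follows. The names and the flush mechanism are invented; a real implementation would IPC or DMA the batch, but the shape is the same: many cheap appends, one expensive crossing.

```c
/* Hypothetical client-side command batch: API calls append locally, and
 * one flush stands in for the expensive syscall/IPC to the driver. */
#define BATCH_MAX 16

typedef struct {
    int cmds[BATCH_MAX];
    int n;
    int flushes;   /* counts the expensive crossings, for illustration */
} Batch;

void batch_flush(Batch *b)
{
    if (b->n == 0)
        return;
    /* Real code would hand cmds[0..n) to the driver process here. */
    b->flushes++;
    b->n = 0;
}

void batch_emit(Batch *b, int cmd)
{
    if (b->n == BATCH_MAX)
        batch_flush(b);   /* buffer full: pay the crossing once */
    b->cmds[b->n++] = cmd;
}
```

Twenty drawing calls cost two crossings instead of twenty, which is the whole argument for bulk processing (and why X11's asynchronous protocol performs better than people expect).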
If possible, you should do the same for 2D video memory. When you're a game or something else which simply redraws the screen over and over again, the CPU time wasted slowly copying that data to video RAM frame after frame is a significant portion of time which could be better spent doing other things.
Most of the work is rendering, but a lot of it is image copies. Some are copied from off-screen graphics memory (fast). Image uploads from the host to the GPU would be done via DMA.
> Plus, no end user would ever notice that the driver is not a native > binary. They might notice if the hardware vendor didn't bother to write it because they didn't want to go through that much trouble.
But we're defining it just so that a vendor has an easy way to write drivers! Anyhow, I see no problem with writing a general VESA driver that, lacking a real driver for the card, uses VESA BIOS calls to do rendering. It would be hidden behind the API, so we don't care.
> Moving the console switching into userspace or keeping it in the > kernel is neither here nor there. What?
I'm just saying that I don't care much if it's in the kernel or a daemon.
> With an API that is sufficiently low-level (nothing even remotely > approaching the concept of widgets), it should stay relatively > flexible for some time. The idea is just that the API remain simple, and that code written for today's API work with the API we have seven years from now. Anything else is just a reason not to bother to supply a Linux driver.
For the most part, but I am seeing ways to make the API inherently scalable and extensible... without major performance hits.
I don't know... Last time I brought this up somewhere I had someone insisting that I was wrong because they had a copy of Quake for Linux in a box that they bought in a store.
I've observed many times (not on this list!) people not bothering to actually read what someone has to say before responding. It's very slashdot. You were probably a victim of this. Abstraction and APIs and stuff goes over many people's heads. If someone doesn't know what an API is, they're not going to be able to follow your argument or have any understanding as to why your argument is important.
>> Who wants to fix something when everyone is >> simply going to tell you that you're fixing something that isn't broken? > > Oh, no. It's broken alright. You're the first person to ever agree with me.
I don't think it takes much: point out a handful of the ways Linux handles graphics poorly and you see that there's no vision in it. Multiple drivers competing with each other, no centralization, no virtualization, race conditions...
> Forget closed vs. open drivers. Linux needs to evolve for the better, > and this is one area where it's just embarrassing. I won't necessarily > support you on the idea of closed-source drivers, but I'll be right > behind you all the way when it comes to developing a unified framework > for dealing with graphics in an intelligent manner. Too much legacy > crap has to be cut off and replaced with something GOOD. I've never written any code that I've felt I couldn't just give away, but I still don't buy into the idea that everyone should do things my way.
I agree with you. And that's why I'm saying that trying to make Linux into something not open source is the wrong approach. Don't try to make them do things your way (at least not completely).
If people don't want to give me their source code, I can't say I care. I use programs which are closed source, and they work just as well as anything else. When they don't work, I'm every bit as able to fix them, because I've only ever read one piece of source code written by someone other than myself that I've actually been able to make any sense of. The only way that open source benefits me is that it gets me software that I don't have to pay for.
Many other people see open source in the same way: Zero-cost. But don't forget about the people who work hard for no money to develop all of this free software. THEY do it because it's Free Software. If it were not free software, they wouldn't be able to do it, and they wouldn't want to anyway. When you use Free Software, you are benefitting from the work of a certain kind of personality and philosophy. Don't dismiss it.
The "no closed source" thing really bothers me because I just see it as people forcing everyone else to play along with their game of trying to force vendors to provide open source drivers, when some of us really couldn't care less and we would just like to use our computers and our hardware. Is it really so wrong for your software, when running on someone else's computer, to work in the way that that someone else would like for it to work? Letting everyone else use closed source drivers if they wish to doesn't interfere with your ability to ignore closed source drivers. All it does is reduce the amount of leverage you have to force hardware vendors to comply with your demands, and I don't think it's right to force people who couldn't care less to be involved in your dispute like that.
You don't see why it's evil to require closed-source drivers. I don't see why it's evil to require open source drivers. Linux kernel developers define Linux. They have the right to say that they don't like it when you use their work in a certain manner. The copyright license may or may not let you do it anyway, but they don't have to be happy about it.
Well, I really don't care anymore. There was a time when I would have been all over fixing it, but dealing with it for the last eight years and having nothing happen but people tell me that I simply don't understand it really has eroded my ability to care. I may have convinced you that I'm on the right track, but you're just one person out of many, which still leaves an enormous uphill battle. Go find several dozen other people who are willing to help out, then I'll reconsider my interests, but for now I'm rather convinced that you're the only person who sees what I am talking about.
You've put down some of your thoughts here. Perhaps if the inspiration strikes you, you could refine it and start trying to define the APIs. Perhaps in the future when we have the time, I and some others can take what you've written and run with it.
> I don't necessarily disagree with you, other than the problem of > having a console before the kernel is fully initialized and has been > able to load any kind of userspace app from disk. To use programs on the VT-100, or are you talking about the console multiplexer? The kernel on bootup could just use the video driver directly, but it
I would resist defining text directly in the API. There would have to be some rudimentary terminal emulator built into the kernel. And that emulator can't be unloaded when the console multiplexer is loaded, because the multiplexer could crash.
> You know... some of the kernel fbconsole drivers don't accelerate all > functions. Really? I've never seen a framebuffer console accelerate anything at all.
The scrolling is fast!
> This is why I talk about being able to load most of the driver into > userspace. For performance reasons, we don't want every drawing > operation in X to require a system call. Instead, we use the > userspace API (A), which silently calls the kernel only in cases where > it's absolutely necessary. Will this API be friendly to other programming languages?
It would just be a low-level set of bindings like glibc or xlib. You just need a corresponding library for your programming language.
> So the API in user space isn't the same API as in the kernel, but it's > not the SAME API. It's just a virtually identical one. Get it? :) Only if you meant to say "is the same API" instead of "isn't."
Yeah, sorry.
> It's also brilliant in that it's > network-transparent. Let's not lose those things. Yes, the network transparency is useful on rare occasions, and even though people assume that there must be latency in a TCP connection from the local machine to the local machine, everyone who knows knows that it isn't a big deal. That doesn't stop people from wanting to give it the axe, however.
Latency is not a problem if you find a way to compensate for it, such as processing commands in bulk. X11 is largely an asynchronous protocol, which is why it doesn't perform poorly. Only in cases where you have to wait on a return value do you notice the latency.
things just as Linux does. Apparently Linux has the best audio out of any operating system, which gives me no hope whatsoever for the future of computing.
For some reason, I had had the impression that Linux virtualized audio in some way so that multiple programs could independently play audio at the same time. Perhaps I'm thinking of some KDE component instead.
Assuming you might ever agree to closed source drivers, the way this likely would have to be done is to do it mostly in user space, as I doubt the kernel people would go for the closed source drivers. Trick the kernel developers into accepting a driver that just passes an IRQ and DMA into user space for those drivers which need it, then once Linux has
That certainly is a loophole you can exploit. The system call overhead might impact performance (less so if we do things properly in bulk), but there are aspects of it that just have to be in the kernel, if only for early-boot support. And if the proprietary driver is in userspace, people can't bitch so much about the stability issues.
started up, simply hijack the entire console system; start up as if we're going to do like X11, but don't cooperate at all with console switches, but instead simply pass the Alt-Fn keys to our own console multiplexer instead, and go from there designing the system as planned. The biggest problem with doing graphics in user space is cooperating with console switches, so if we simply don't cooperate, then that isn't a problem anymore. As long as the kernel isn't so broken that we can't simply ignore it, then we don't have to modify it, we can just forget about it and run our own software that does things the way we want to.
There's usually only one graphics device, so there's no reason to have more than one driver process per graphics card. You can put part of the API framework into the kernel and have it call back out to the userspace driver to do some of the processing, which may have to call back to the kernel for DMA and interrupt stuff. Ever heard of FUSE? It's a way to implement file systems in userspace, and people seem to like it. This is no different (albeit more performance-critical).

Now, this one driver per card has complete control over the graphics device. All commands to it go through this process, INCLUDING console-switching. There are no race conditions, because everything is serialized through this process. The serialization isn't a problem because there's only one physical GPU anyhow.

Some aspects of the API and framework would be implemented in a library that links into the application. The app would make calls to the API, and they would be buffered locally. That library would IPC them to the driver in bulk, which often would just tell the GPU to DMA them out (all done with zero copy!). I'm liking this more and more.

BTW, as you move drivers into userspace, all you're doing is converting the OS to a microkernel. :)

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
