On 5/13/06, Ray Heasman <[EMAIL PROTECTED]> wrote:
> But, look at the second set of items I gave. In every case, I ended up using a framebuffer, and did everything I could to avoid work using it.
Well, it wouldn't be a big deal to design a chip that was nothing but a video controller, a host interface, and some memory logic. Of course, I'm not sure what would differentiate us from anyone else, but early implementations on OGD1 are going to be just that, just to get us started.
> So, if I look at the current OGC spec today, and was hoping to use it for my project, my questions would be: 1) Hm. It has a 3D pipeline. I wonder how I turn it off? I wonder if it still uses power when it's turned off?
Yeah, but those questions don't make sense. It has a "rendering pipeline" that is capable of doing some stuff that people call "2D acceleration" and some other stuff that people call "fixed-function 3D fragment shader". It uses minimal power when it's not rendering anything, but more than if you didn't have it there in the first place. Keep in mind that this is designed for applications that would benefit from some hardware acceleration.
> 2) Hm. It uses DMA queues. Can I use it without turning on DMA?
You don't "turn on DMA." You send it commands that result in DMA. You can access everything without using DMA if you want. It's just less efficient.
> 3) Great, it does YUV. Hope I can use it without DMA.
Yeah. Just a translation between the host interface and the memory. It doesn't matter what (PIO write or DMA read) caused the data to get there.
> 4) I wonder how I set up the outputs for my requirements?
Sample code anyone?
> 5) Is there any weird VGA BIOS stuff that I have to work through or can I just disable all that weird PC legacy crap?
The legacy VGA emulation is off by default and only turned on when a "PC BIOS" turns it on.
> 6) Is the DAC integrated, or do I have to include that too?
Integrated DAC and DVI is what I have in mind for the ASIC.
> 7) What voltages does it use?
How important is that? Most things expect 3.3V and 2.5V supplies, right?
> 8) What other support circuitry does it require?
At least one RAM chip?
> 9) How much does it cost?
How about $30 in units of 100,000?
> 10) No, really, how much does it cost?
How much does the chip cost us to fab? Why do you care? :)
> Let's dream a little, about what I would love to see available. In an ideal world, I would want a video support chip to be completely memory mapped with no bus in the way, for maximum compatibility. So:
There's always going to be a bus. It may not be a standard one like PCI, but there's always some communication path between the CPU and this chip. It's either a bus or point-to-point. But who's going to want to deal with a proprietary interface like that?
> 1) It would look like an SRAM or DRAM to the CPU. It would then have its own external RAM that it would map so the CPU can see it too. This might even give me a way to use DRAM with a CPU that only does SRAM. Cool!
Are you saying that it should behave like, say, a DDR-SDRAM chip? It should expect refresh cycles, etc?
> 2) It would have YUV support and/or a fairly simple bit blitter with YUV support.
Sure, no problem.
> 3) Setup would be through some support registers.
Isn't it always?
> 4) Interrupts would be tied to one of the interrupt pins on the CPU, with control being through some nice memory mapped registers.
Again, you're talking about some sort of custom bus interface. How can we design for everyone's different custom interface?
> 5) There would be no "DMA" in the sense that the support chip plays in the CPU's memory. The CPU will use the support chip RAM as its own RAM. Why add a bus I don't need then use DMA to get around the bus?
There's always a communication path between the CPU and the support chip. We usually call such a thing a "bus" (even when it's on a crossbar).
> Now that is a chip I would have a use for. Useful, integrates with just about anything, easy to program, saves me time during development.
You'll need to clear up some of what you've said, because it doesn't all make sense. But I see your point about having a very simplified interface. These days, though, we don't design systems by taking a 68000 and wiring up RAM and ROM to it and using 74138s to do address decoding, etc. Most things use some pretty standardized interfaces.
> > The 2D vs. 3D argument has been beaten to death. You may want to sift
> > through the archives (and LKML posts on this topic) where the number
> > of people in favor of a 2D-only design were a tiny minority.
>
> Hm. If you are going to run this project as a democracy, then OGP is really in trouble. The number of votes for any particular argument should be irrelevant. It's the strength of the arguments backing a vote that counts. Someone (that's you) makes a final determination and sticks with it until it's obviously wrong. The rest of us think you're brilliant or swear at you, or both. :-)
Well, OGA isn't obviously wrong to me (yet). I haven't made any decisions to change it. I just hate being wrong and therefore always keep an open mind. So, no, it's not exactly a democracy. Everyone on this list is very bright, but not everyone here is an experienced graphics chip designer. I work to extract useful things from what people say and turn those ideas into something practical to implement. Oh, and I'm not very good at saying "screw you, I'm not going to do that." I just do what I think is best. A number of complaints were made about OGD1 that I had to ignore because they just weren't practical to implement.
> > The decision was made to design a pipeline compliant with OpenGL 1.3
> > (and some later features) and tweak it so that it would perform well
> > on 2D tasks as well. In fact, the stated primary intended uses of
> > this design are "2D desktops plus the simple 3D eye candy that is
> > popular in recent UIs."
>
> And there we start diverging. The problem is that things don't stand still. There will always be a proprietary chipset with open source drivers that beats what an OGP design could do. This proprietary chipset will always be cheaper, because of volume. The average desktop user might pay a little extra for an open source chipset, but the keyword is "little", as in 25%, assuming the same performance as the competitor.
What do you suggest we do to solve that particular problem?
> The user might pay more for "different" if they think "different" is in some way cooler. Look at the history of the Apple Mac for an example. They are a lot less likely to pay more or accept a design if it is inferior to just about anything out there and does exactly what the other chips do.
Ok, so if I understand you right, OGA is "boring" because it implements an OpenGL pipeline. But for some reason, people would be more inclined to buy something that doesn't do "3D", even though that's what everyone "wants"?
> There is some opportunity in replacing B, but how many high volume/high cost generic 3D consumer electronic applications are there that OGP could count on? How would you compete with the big names for promises of support, volume, driver features, and speed?
If we can get some help from the FOSS community (which is critical, actually), then support and driver features are taken care of. Speed is up to me to design it right. If there's a high enough demand, we'll be able to meet it.
> A medium volume provider might be interested in OGP stuff, assuming they need 3D, but their selling price will probably be high and they would save a lot of money during development using an off-the-shelf x86 CPU/chipset combo (or perhaps even a standard motherboard) with support that is good enough.
>
> > The fact that the design is based on a 3D pipeline doesn't mean that
> > we weren't attentive to the needs of 2D desktops. Sure, having
> > floating point and textures and such in there complicates things, but
> > the tradeoff is worth it to maximize the market as much as we can.
> >
> > What do you think will get bought more? A 2D only engine? Or a 3D
> > engine that's also good at 2D?
>
> The real question is "Would the 3D engine be bought more, and if so, would the extra sales justify the increase in cost, development time, and complexity to everyone else?" If you are talking deeply embedded stuff, the designer would see your 3D pipeline as a cost, not a feature.
Ok, fair enough. But keep in mind that the essentials of the development of OGA are done. I just have to code it in Verilog.
> > The only thing that 2D-only would give us for the same amount of logic
> > is wider issue, which helps in some cases and not in others. Having
> > read and responded to some of your later comments, I am of the opinion
> > that what you're asking for is NOT a 2D design. 2D designs don't have
> > scaling and rotation.
>
> Now we slip away from what I was trying to achieve. I wasn't asking, I was just showing how my _desktop_ needs don't call for 3D. I am partially arguing that maybe we should just use less logic and do a really good and simple 2D design (that doesn't do the cool stuff I was talking about). It would be quick to do, and could be made much higher performance than a card running in some VESA mode. Maybe even a design that would be commercially viable implemented in FPGA only. Alternatively, it could be implemented in a cell-based ASIC for only a few tens of thousands of dollars, and you could have sales very soon.
Well, we'd have to do some market analysis on that. How much acceleration do you think such a thing would need? None? Bitblt and solid fill only? Anything else?
> > > If I want an OpenGL card, I will buy a nVidia or ATI card that is
> > > reasonably well supported by an open source driver.
> >
> > What happens when Radeon 9250's run into short supply?
>
> I will buy a card that is currently cheap and has open source support equivalent to that of the Radeon 9250, and it will probably be several times faster. And I will be able to do so, because there are lots of open source developers making it happen. An OGP-designed card provides no special value for me there, beyond a mild wish to help out open source projects. Most real world people don't have that wish.
Look, if I wanted to design a "2D" engine, I could design something for you that was small, wide-issue, always maxed-out memory bandwidth, accelerated all the most important stuff, and could handle very high-res displays. Oh, wait. I already did that. It's called TROZ and is currently in use in thousands of mission-critical air traffic control displays. :) Those 256-bit-wide data busses were a bitch.
> If perhaps you see this as more of an open project, where you do work in the open, and other paid programmers in open source companies help you because they see a benefit for their companies, then great, you are probably doing the right thing - you are getting other companies to pay for part of your development. Don't expect a whole lot from anyone else, though.
I would like to see this happen.
> > That all being said, your input on the nature of our design is
> > encouraged. If you see a missing feature, an existing feature we
> > couldn't possibly benefit from, or some radical new approach to this
> > whole thing, by all means, post it!
>
> I am worried about things at a very high level. The spec you have is meant to meet a particular need. I am not complaining about the spec. I am complaining about the perceived need the spec is written for. I am complaining about the implied requirements of the spec and the tradeoffs they force, and how those tradeoffs compromise the original logic that specified the need.
I'm still waiting to be convinced that: (a) OGA cannot do what we need and (b) there's a much simpler design that'll meet the needs way better. You have months to convince me, and I am listening carefully.

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
