On Sat, 2006-05-13 at 23:31 -0400, Timothy Miller wrote:
> On 5/13/06, Ray Heasman <[EMAIL PROTECTED]> wrote:
> 
> > But, look at the second set of items I gave. In every case, I ended up
> > using a framebuffer, and did everything I could to avoid work using it.
> 
> Well, it wouldn't be a big deal to design a chip that was nothing but
> a video controller, a host interface, and some memory logic.  Of
> course, I'm not sure what would differentiate us from anyone else, but
> early implementations on OGD1 are going to be just that, just to get
> us started.

Good. My question is how many gates for that simple implementation? You
want to have something real ASAP, cheap. That means 50K, 100K gates
tops, and you have something within reach of release in a standard cell
ASIC at a good price.

> > So, if I look at the current OGC spec today, and was hoping to use it
> > for my project, my questions would be:
> > 1) Hm. It has a 3D pipeline. I wonder how I turn it off? I wonder if it
> > still uses power when it's turned off?
> 
> Yeah, but those questions don't make sense.  It has a "rendering
> pipeline" that is capable of doing some stuff that people call "2D
> acceleration" and some other stuff that people call "fixed-function 3D
> fragment shader".  It uses minimal power when it's not rendering
> anything, but more than if you didn't have it there in the first
> place.

I'm trying to tell you my first thoughts when I am doing triage on your
datasheet, assuming I know nothing else. What I am saying is "Oh look,
complicated stuff to ignore unless I have no choice. Hope I can switch
it off".

> Keep in mind that this is designed for applications that would benefit
> from some hardware acceleration.

Er.... and those would be? Would they exist in an environment where I
wouldn't have already selected another chip or chipset? I did try to lay
out what my choices would be.

> > 2) Hm. It uses DMA queues. Can I use it without turning on DMA?
> 
> You don't "turn on DMA."  You send it commands that result in DMA.
> You can access everything without using DMA if you want.  It's just
> less efficient.

Thank you, yes, I am aware of that. I am also aware of the fact that you
could have implemented things so that DMA was the only way to get
certain data fetched. If I'm looking at a new datasheet, that is the
sort of braindead stuff I have to check for before I decide to use a
chip. An example would be a CPU with a high speed synchronous serial
port that you had to poll in the CPU because they flubbed the interface
logic for the port and wouldn't fix it due to mask costs.

> > 3) Great, it does YUV. Hope I can use it without DMA.
> 
> Yeah.  Just a translation between the host interface and the memory.
> It doesn't matter whether a PIO write or a DMA read caused the data
> to get there.
> 
> > 4) I wonder how I set up the outputs for my requirements?
> 
> Sample code anyone?

*shrug* As I said, showing how I would think looking at a new datasheet.
I would assume there are a bunch of memory mapped registers, but who
knows.
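For the sake of argument, here is what that "bunch of memory mapped registers" could look like from the driver side. Every struct name, field, and value below is invented for illustration; none of it is taken from the actual OGC spec:

```c
#include <stdint.h>

/* Hypothetical CRTC register block. The names, layout, and timing
 * values are invented; a real datasheet would define its own. */
struct crtc_regs {
    volatile uint32_t htotal;  /* total pixel clocks per scanline */
    volatile uint32_t hactive; /* visible pixels per scanline */
    volatile uint32_t vtotal;  /* total scanlines per frame */
    volatile uint32_t vactive; /* visible scanlines */
    volatile uint32_t control; /* bit 0: enable video output */
};

static void setup_640x480(struct crtc_regs *crtc)
{
    crtc->htotal  = 800;  /* 640 visible + horizontal blanking */
    crtc->hactive = 640;
    crtc->vtotal  = 525;  /* 480 visible + vertical blanking */
    crtc->vactive = 480;
    crtc->control = 1;    /* enable output */
}
```

In real use `crtc` would be a pointer into the chip's memory-mapped register window rather than ordinary RAM.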

> > 5) Is there any weird VGA BIOS stuff that I have to work through or can
> > I just disable all that weird PC legacy crap?
> 
> The legacy VGA emulation is off by default and only turned on when a
> "PC BIOS" turns it on.

Nice to know, but not the point of this list of questions.

> 
> > 6) Is the DAC integrated, or do I have to include that too?
> 
> Integrated DAC and DVI is what I have in mind for the ASIC.

Would be nice. Not going to be easy or cheap doing a mixed signal chip.

> > 7) What voltages does it use?
> 
> How important is that?  Most things expect 3.3 and 2.5v supplies, right?

Um. It's really really application dependent. 3.3 is probably pretty
safe for non-battery embedded stuff. Anything less than that isn't very
standardised, and you end up with a power supply that has to supply 7
different voltages. It's a design headache and a significant source of
cost. Also, the lower voltages require higher currents, and it's a real
pain having to supply high currents at low voltages with difficult noise
margins.

The last project I did the circuit design for was based on Xilinx
Spartan3, and it was a pain in the ass designing a PSU that would meet
all of my requirements.

> > 8) What other support circuitry does it require?
> 
> At least one RAM chip?

And some sort of interface to the outside world, yes. You won't connect
the DAC pins directly to the connector. There will be impedance issues
and matching, perhaps some weird voltages for sync pulses. Whatever.

> > 9) How much does it cost?
> 
> How about $30 in units of 100,000?
> 
> > 10) No, really, how much does it cost?
> 
> How much does the chip cost us to fab?  Why do you care?  :)

Again, I wasn't asking you specifically.

I care because I don't see you having the upfront money short of a
miracle, and I would like to understand where the miracle will come
from.

> >
> > Let's dream a little, about what I would love to see available. In an
> > ideal world, I would want a video support chip to be completely memory
> > mapped with no bus in the way, for maximum compatibility. So:
> 
> There's always going to be a bus.  It may not be a standard one like
> PCI, but there's always some communication path between the CPU and
> this chip.  It's either a bus or point-to-point.  But who's going to
> want to deal with a proprietary interface like that?

Er. Technically any group of signals that carries information can be a
bus. What I mean is that PCI is weird. It uses unterminated signals with
weird drivers, and it depends on reflections at the end of the signal
path to even work. It has a whole language or protocol and a lot of that
language is often unnecessary.

Hell, it uses a 33MHz clock, and having to generate another clock signal
might be enough reason to not use it.

> > 1) It would look like an SRAM or DRAM to the CPU. It would then have its
> > own external RAM that it would map so the CPU can see it too. This might
> > even give me a way to use DRAM with a CPU that only does SRAM. Cool!
> 
> Are you saying that it should behave like, say, a DDR-SDRAM chip?  It
> should expect refresh cycles, etc?

*shrug* I was just explaining a way to cut out some of the crap, from
the perspective of a potential customer. Engineers are lazy. Make their
lives easy and they will flock to your door. :-)

> > 2) It would have YUV support and/or a fairly simple bit blitter with YUV
> > support.
> 
> Sure, no problem.
> 
> > 3) Setup would be through some support registers.
> 
> Isn't it always?

Yeah, true. :-)

> > 4) Interrupts would be tied to one of the interrupt pins on the CPU,
> > with control being through some nice memory mapped registers.
> 
> Again, you're talking about some sort of custom bus interface.  How
> can we design for everyone's different custom interface?

You don't have to. There are about three different kinds of interface to
deal with if you are pretending to be SRAM, and you can usually glue
yourself onto just about anything. 

And a signal that pulls a pin low when there is an interrupt isn't that
outré.
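To make "control being through some nice memory mapped registers" concrete, here is a driver-side sketch of a shared-line, pull-low interrupt scheme. The register names and bit layout are invented, not from any spec:

```c
#include <stdint.h>

/* Hypothetical interrupt registers. Names and bit layout are invented
 * for illustration; bit 0 stands in for a vblank interrupt. */
struct irq_regs {
    volatile uint32_t status; /* write 1 to a bit to acknowledge it */
    volatile uint32_t enable; /* 1 = that source may pull the pin low */
};

#define IRQ_VBLANK 0x1u

/* Called from the CPU's interrupt handler. Returns nonzero if this
 * chip raised the interrupt, acknowledging it so the pin releases
 * (on hardware; in plain memory the write just stores the value). */
static int handle_irq(struct irq_regs *irq)
{
    uint32_t pending = irq->status & irq->enable;
    if (!pending)
        return 0;          /* not ours -- the line is shared */
    irq->status = pending; /* write-1-to-clear acknowledge */
    return 1;
}
```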

> > 5) There would be no "DMA" in the sense that the support chip plays in
> > the CPU's memory. The CPU will use the support chip's RAM as its own RAM.
> > Why add a bus I don't need then use DMA to get around the bus?
> 
> There's always a communication path between the CPU and the support
> chip.  We usually call such a thing a "bus" (even when it's on a
> crossbar).

Thank you, I am aware of what a bus is. I thought it was obvious I was
referring to PCI, and all of its attendant protocols and stuff I really
don't need.

PCI was invented to solve the problem that the ISA bus was slower than
Morse over wet string. It's still a big, complicated thing with high
latency that sits between other chips though, and you often don't want
that in an embedded design if you just have two chips.

> > Now that is a chip I would have a use for. Useful, integrates with just
> > about anything, easy to program, saves me time during development.
> 
> You'll need to clear up some of what you've said, because it doesn't
> all make sense.  But I see your point about having a very simplified
> interface.  These days, though, we don't design systems by taking a
> 68000 and wiring up RAM and ROM to it and using 74138s to do
> address decoding, etc.  Most things use some pretty standardized
> interfaces.

Yes. That is my point. You can get away with doing a few minor
variations of a synchronous interface, and maybe the old 68000-like
async bus too. I have designed around various DSPs and CPUs, and they
are all just non-standard enough that you are forced to tweak things a
little with every design. It's not a step you can safely skip. :-(

> > > The decision was made to design a pipeline compliant with OpenGL 1.3
> > > (and some later features) and tweak it so that it would perform well
> > > on 2D tasks as well.  In fact, the stated primary intended uses of
> > > this design are "2D desktops plus the simple 3D eye candy that is
> > > popular in recent UIs."
> >
> > And there we start diverging. The problem is that things don't stand
> > still. There will always be a proprietary chipset with open source
> > drivers that beats what an OGP design could do. This proprietary chipset
> > will always be cheaper, because of volume. The average desktop user
> > might pay a little extra for an open source chipset, but the keyword is
> > "little", as in 25%, assuming the same performance as the competitor.
> 
> What do you suggest we do to solve that particular problem?

Do stuff that is unique, so that you aren't just a clone of other
solutions.

> > The user might pay more for "different" if they think "different" is in
> > some way cooler. Look at the history of the Apple Mac for an example.
> > They are a lot less likely to pay more or accept a design if it is
> > inferior to just about anything out there and does exactly what the
> > other chips do.
> 
> Ok, so if I understand you right, OGA is "boring" because it
> implements an OpenGL.

No. OGA has a problem because it's a clone of existing functionality. It
has to be cheaper, better, or not a clone.

> But for some reason, people would be more
> inclined to buy something that doesn't do "3D", even though that's
> what everyone "wants"?

You are carefully mixing different groups of people here. "People" are
embedded designers, and "everyone" is desktop users, according to you.

If I am an embedded developer, I most likely do not want 3D.

If I am a desktop user, 3D would be okay if it were better or cheaper than
the things it clones. If not that, it has to be "not a clone".

> > There is some opportunity in replacing B, but how many high volume/high
> > cost generic 3D consumer electronic applications are there that OGP could
> > count on? How would you compete with the big names for promises of
> > support, volume, driver features, and speed?
> 
> If we can get some help from the FOSS community (which is critical,
> actually), then support and driver features are taken care of.  Speed
> is up to me to design it right.  If there's a high enough demand,
> we'll be able to meet it.

Hm. It still sounds like most of the work (geometry processing) is going
to be done on the CPU, and your shaders are, as you have said, limited.

You will have to work hard to make OGA not be seen as an inferior clone
in the desktop market. I have already explained the rest about the
embedded market.

> > A medium volume provider might be interested in OGP stuff, assuming they
> > need 3D, but their selling price will probably be high and they would
> > save a lot of money during development using an off-the-shelf x86
> > CPU/chipset combo (or perhaps even standard motherboard) with support
> > that is good enough.
> >
> > > The fact that the design is based on a 3D pipeline doesn't mean that
> > > we weren't attentive to the needs of 2D desktops.  Sure, having
> > > floating point and textures and such in there complicates things, but
> > > the tradeoff is worth it to maximize the market as much as we can.
> > >
> > > What do you think will get bought more?  A 2D only engine?  Or a 3D
> > > engine that's also good at 2D?
> >
> > The real question is "Would the 3D engine be bought more, and if so,
> > would the extra sales justify the increase in cost, development time,
> > and complexity to everyone else". If you are talking deeply embedded
> > stuff, the designer would see your 3D pipeline as a cost not a feature.
> 
> Ok, fair enough.  But keep in mind that the essentials of the
> development of OGA are done.  I just have to code it in Verilog.

Are you serious? Last I heard you were talking of custom processes and
mixed signal ASICs. Writing the Verilog is the least part of your
problems. What software are you going to use? Last time I was involved
with chip design, a single seat of something like Synplicity was
something like $500K. Have you chosen a process and fab? How are you
going to lay out your mixed signal stuff? Who will design it for you?

> > > The only thing that 2D-only would give us for the same amount of logic
> > > is wider issue, which helps in some cases and not in others.  Having
> > > read and responded to some of your later comments, I am of the opinion
> > > that what you're asking for is NOT a 2D design.  2D designs don't have
> > > scaling and rotation.
> >
> > Now we slip away from what I was trying to achieve. I wasn't asking, I
> > was just showing how my _desktop_ needs don't call for 3D. I am
> > partially arguing that maybe we should just use less logic and do a
> > really good and simple 2D design (that doesn't do the cool stuff I was
> > talking about). It would be quick to do, and could be made much higher
> > performance than a card running in some VESA mode. Maybe even a design
> > that would be commercially viable implemented in FPGA only.
> > Alternatively, it could be implemented in a cell-based ASIC for only a
> > few tens of thousands of dollars, and you could have sales very soon.
> 
> Well, we'd have to do some market analysis on that.  How much
> acceleration do you think such a thing would need?  None?  Bitblt and
> solid fill only?  Anything else?

BitBlt, YUV<->RGB, and solid fill at most.
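For scale, the YUV<->RGB part is just a handful of multiply-adds per pixel. A fixed-point sketch using the common BT.601 integer approximation (an actual chip would pick its own coefficients and precision):

```c
#include <stdint.h>

static uint8_t clamp_u8(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

/* Integer BT.601 YCbCr -> RGB, the sort of conversion a small blitter
 * would do in fixed-point hardware. Coefficients are scaled by 256,
 * with Y in [16,235] and Cb/Cr centred on 128. */
static void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                         uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = (int)y - 16, d = (int)cb - 128, e = (int)cr - 128;
    *r = clamp_u8((298 * c + 409 * e + 128) >> 8);
    *g = clamp_u8((298 * c - 100 * d - 208 * e + 128) >> 8);
    *b = clamp_u8((298 * c + 516 * d + 128) >> 8);
}
```

Three multipliers and a few adders per pixel: cheap, whether in a blitter or on the scan-out path.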

That will cover most embedded stuff. Far more important would be the
ancillary stuff: 

A) is it compatible with a bunch of different CPU interfaces? You'd want
an SRAM like interface, PCI, and maybe even some kind of fake DRAM, if
you really wanted people to use you.

Trust me, if I have an ARM processor in a design already, and I can add
graphics and it makes sense, just by bolting it in as a fake SRAM, you
are way more likely to have my custom than if I have to pick a new CPU
and figure out a new signal interface.

B) what can it drive? Composite out, VGA, DVI, whatever the LVDS
standard is to talk directly to an LCD. Some could be done digital-only
and you could get a chip out without having to go mixed signal.

C) Cost is important too. $30 is ... high. That's a lot of money to blow
on one chip. My CPU probably wouldn't cost that much, in at least half
of my embedded designs.

If it were based on something like the Xilinx "single use" thing (Can't
remember the name, but they certify defective FPGAs to work with your
HDL code only, and the FPGAs are much cheaper as a result. They treat it
almost like an ASIC order), customers could take your open source core,
add system glue onto it for other bits in the design and have a real
value add part for a reasonable price and amount of work.

If I could get a custom FPGA core that does my video for me, and also
has some space left over to replace some other logic in the design, an
FPGA might actually be cheap enough for me to use in a design. The FPGA
would have to run on one voltage though. The Spartan3s are annoying, cos
they require some bizarre voltages.

> > > > If I want an OpenGL card, I will buy a nVidia or ATI card that is
> > > > reasonably well supported by an open source driver.
> > >
> > > What happens when Radeon 9250's run into short supply?
> >
> > I will buy a card that is currently cheap and has open source support
> > equivalent to that of the Radeon 9250, and it will probably be several
> > times faster. And I will be able to do so, because there are lots of
> > open source developers making it happen. An OGP-designed card provides
> > no special value for me there, beyond a mild wish to help out open
> > source projects. Most real world people don't have that wish.
> 
> Look, if I wanted to design a "2D" engine, I could design something
> for you that was small, wide-issue, always maxed-out memory bandwidth,
> accelerated all the most important stuff, and could handle very
> high-res displays.
> 
> Oh, wait.  I already did that.  It's called TROZ and is currently in use
> in thousands of mission-critical air traffic control displays.  :)
> 
> Those 256-bit-wide data busses were a bitch.

I wasn't advocating a 2D card. I was saying "You can't release a clone of
everything else that is more expensive than what is already there."

> > If perhaps you see this as more of an open project, where you do work in
> > the open, and other paid programmers in open source companies help you
> > because they see a benefit for their companies, then great, you are
> > probably doing the right thing - you are getting other companies to pay
> > for part of your development. Don't expect a whole lot from anyone else,
> > though.
> 
> I would like to see this happen.

Okay. That makes things a bit clearer. Good to know you aren't depending
on too much non-company help.

> > > That all being said, your input on the nature of our design is
> > > encouraged.  If you see a missing feature, an existing feature we
> > > couldn't possibly benefit from, or some radical new approach to this
> > > whole thing, by all means, post it!
> >
> > I am worried about things at a very high level. The spec you have is
> > meant to meet a particular need. I am not complaining about the spec. I
> > am complaining about the perceived need the spec is written for. I am
> > complaining about the implied requirements of the spec and the tradeoffs
> > they force, and how those tradeoffs compromise the original logic that
> > specified the need.
> 
> I'm still waiting to be convinced that:
> 
> (a) OGA cannot do what we need and
> (b) There's a much simpler design that'll meet the needs way better.

I am still trying to point out that that is not the point of my
discussion. I am trying to say "Is what we think we need sellable?"

> You have months to convince me, and I am listening carefully.

In the current direction of OGP in general, I don't see a viable market
for OGA, so talking about what the chip can do is pointless.

Here are some suggestions for weird/easy/different things OGP could do.
None of them are the 2.5D idea I have, but I'm trying to advocate
something that might have a market and require next to no NRE to get to
market ASAP. (Cue official suggestions, with fanfare):

1) Make something that just sits on the PCI bus and indicates it has a
ROM that has to run during boot. Allow the FLASH that contains the ROM
to be reprogrammed in a relatively secure manner. This would allow you
to write code that took over a PC at boot time. You could put a
LinuxBIOS image in there, and have the LinuxBIOS boot your motherboard.
The LinuxBIOS people are constrained by the fact that most motherboard
ROMs are way too small. OGP could fix that.

2) Make a VGA compatible chip that does text modes only. Also add a USB
MAC and an Ethernet MAC, some PS/2 ports, and a CPU from opencores.org
that is supported by Linux. Run a simple SSH server in an embedded Linux
image, and make the SSH server do a simple screen scrape of the text
mode RAM and present it as a VT100 terminal. Keypresses in the terminal
are echoed to the PS/2 port. Tada: instant remote PC management card for
cheap colocated Unix servers. Add a connector to hook up to the
motherboard's reset header, and you can revive locked PCs remotely,
change BIOS settings during boot, etc.
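The screen-scrape part is nearly trivial, since VGA text-mode RAM is just interleaved character/attribute byte pairs. A sketch (real code would also track the cursor and map the attribute bytes to VT100 colours):

```c
#include <stddef.h>
#include <stdint.h>

/* Convert one row of VGA text-mode RAM (char byte, attribute byte,
 * char byte, ...) into a plain C string, ignoring the attributes.
 * 'cols' is the mode width, e.g. 80; 'out' must hold cols+1 bytes. */
static void scrape_row(const uint8_t *vram_row, char *out, size_t cols)
{
    for (size_t i = 0; i < cols; i++)
        out[i] = (char)vram_row[2 * i]; /* skip attribute bytes */
    out[cols] = '\0';
}
```

The embedded CPU would run this over each of the 25 rows and push the result out through the SSH session.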

3) Make a simple 2D chip that is really easy to interface to. Make it
produce digital signals to drive LCD interfaces directly. Let's face it:
if an embedded app needs a screen, it's likely to be an LCD panel
nowadays. Mayyybe include some inputs to read a touchpanel or stylus, or
shaft encoders, or maybe a button matrix scanner and some LED drivers.

I think (2) would have some geek cred and people who want to buy it. It
would be "cool" in its own special way. It would be pretty easy to do
too. (3) would be more like what I think about when I think of the word
"embedded", and I could imagine using something like it one day,
although I have no idea if it has a market.

I'm trying to stay in a cheap FPGA/cell ASIC, all-digital, easy-to-do
world here, so OGP can have a real product bringing in real money ASAP,
with low up-front costs.

I do not consider OGA in a mixed signal process quick or cheap.

Keep well,
Ray


_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
