On Sat, Nov 03, 2012 at 01:44:18PM -0400, Timothy Normand Miller wrote:
> Let's not make this so overwhelmingly large that no one sees the end.
> That's one of the problems we had with the OGP originally.
>
> Let's break this down into some steps:
> 1. Design simulator for GPU
> 2. Design RTL for GPU
> 3. Get it to work in RTL-level simulation
Let's stop at 3, to keep this simple. If we start talking about targeting FPGAs, we end up jumping through hoops and engineering creative work-arounds, only to find ourselves with a $1500 FPGA board. We don't need something with gee-whiz features; we just need something with a PCIe x1 port and an HDMI output (maybe with an encoder chip to avoid obnoxious licensing questions).

There are services like MOSIS (http://www.mosis.com/what-is-mosis) that significantly reduce the cost of ASIC fabrication. What's the rush? Let's go with a relatively old process and try to produce RTL for a GPU that works with an OpenCores OpenRISC CPU core and clocks at 200 MHz.

At this point, I'm actually inclined to think that going straight to ASIC is *easier*, because we can simulate the proposed ASIC from first principles, instead of depending on black-box vendor FPGA tools that break in strange ways.
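
To put the "first principles" point in concrete terms: with an open simulator like Verilator, the RTL-level test bench is just C++ that we can read and debug ourselves. Something along these lines (a rough sketch only; "gpu_top" and its clk/rst ports are placeholder names, not an actual design we have):

    // Sketch of a Verilator harness for a hypothetical gpu_top.v top level.
    // Build with something like:
    //   verilator --cc --exe gpu_top.v sim_main.cpp
    //   make -C obj_dir -f Vgpu_top.mk
    #include <cstdint>
    #include "Vgpu_top.h"   // model generated by Verilator from gpu_top.v
    #include "verilated.h"

    int main(int argc, char** argv) {
        Verilated::commandArgs(argc, argv);
        Vgpu_top top;

        top.rst = 1;                     // hold reset for a few cycles
        for (uint64_t cycle = 0; cycle < 1000 && !Verilated::gotFinish(); ++cycle) {
            if (cycle == 4) top.rst = 0; // release reset
            top.clk = 0; top.eval();     // falling edge
            top.clk = 1; top.eval();     // rising edge
        }

        top.final();                     // run final blocks / cleanup
        return 0;
    }

Icarus Verilog would do the job just as well; the point is only that none of this depends on vendor tools, and the same harness verifies the RTL regardless of whether it eventually targets an FPGA or an ASIC flow.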
