Paul G. Allen wrote:
> I worked on video compression hardware for a company that produced the first MPEG-2 digital cable and satellite equipment. A competitor paid an IT employee to steal a computer and a hard drive - my test station hard drive in fact - so that they could get hold of our software and algorithms. I've also had a rival game developer steal my 3D source code from a "private" Windows FTP server and use the algorithms and ideas in their game.
> The value there is in the knowledge before release. After the release, they can just reverse engineer your code.
> Openness does not work in all situations, especially in such a competitive area. Those who think it should need to get over it and enter the real world.
Stop. None of the programmers are asking the manufacturers for drivers or source code. The companies are spending a lot of time and money on something the programmers *don't actually want*. Graphics programmers don't want code. They want hardware *specs*.
It would be *less* programming effort to hand out the specs. It would also mean that people could write their own drivers.
Keep your *hot-off-the-presses* software trade secrets. We don't want them anyhow. They aren't as clever as you think; we don't care how you got 3% more Quake framerate.
The problem is that there isn't even a stable core to write a driver against. Just publishing a spec that allows a simple mapping from OpenGL 1.1 to the hardware would be an order-of-magnitude improvement, and that is hardly trade-secret territory anymore.
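To make that concrete, here is a minimal sketch in C of the kind of thing a register-level spec enables. Every register name, offset, and command code below is invented for illustration - it describes no real chip - and the MMIO region is faked with an array so the sketch compiles anywhere. The point is only that, given documented offsets, anyone can map an OpenGL 1.1-style triangle submission onto the hardware without ever seeing the vendor's driver source.

/* Hypothetical example of driving documented registers.
 * All offsets, names, and opcodes are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

#define REG_CMD       0x00u   /* command register (hypothetical)        */
#define REG_VTX_DATA  0x04u   /* vertex FIFO (hypothetical)             */
#define CMD_DRAW_TRI  0x01u   /* "draw triangle" opcode (hypothetical)  */

/* In a real driver this would be a volatile pointer into a memory-mapped
 * BAR; here it is a plain array so the sketch runs anywhere. */
static uint32_t mmio[64];

static void reg_write(uint32_t offset, uint32_t value)
{
    mmio[offset / 4] = value;
    printf("write 0x%02x <- 0x%08x\n", (unsigned)offset, (unsigned)value);
}

/* Map one screen-space triangle (the sort of thing an OpenGL 1.1
 * rasterization path produces) onto the documented registers. */
static void draw_triangle(const uint32_t xy[6])
{
    for (int i = 0; i < 6; i++)
        reg_write(REG_VTX_DATA, xy[i]);
    reg_write(REG_CMD, CMD_DRAW_TRI);
}

int main(void)
{
    uint32_t tri[6] = { 10, 10, 200, 30, 100, 180 };
    draw_triangle(tri);
    return 0;
}

Nothing in that exercise exposes how the silicon gets its performance; it only documents the interface.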
As for the value of the algorithms: anybody who wants them that badly *will reverse-engineer the binary*. I've seen hardware analyses where the silicon was delaminated layer by layer and the entire schematic set reverse-engineered. There are entire companies devoted to this.
Once you ship, your secrets aren't.
> With the recent advent of asynchronous processors (ARM just announced one a couple of months ago), I would expect performance to increase and heat dissipation to decrease in the near future. It may take some time to re-design GPUs and CPUs into an asynchronous architecture, but I believe that's the way the industry may have to go.
Sigh. I have been hearing about the asynchronous processor thing for the last 15 years. It is no closer than it has ever been.
Asynchronous logic, in theory, is much better when the processor is mostly idle: without a clock, nothing burns power just to give the processor a heartbeat. But when the processor is running full out, there is actually *more* signaling flying around, not less, because every transaction requires a request and an acknowledgment.
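As a rough illustration of that acknowledgment cost, here is a toy model in C of the classic four-phase request/acknowledge handshake used between asynchronous pipeline stages. It is purely illustrative - not a description of ARM's part or any real design - but it shows the bookkeeping: moving one word takes four wire transitions between two stages, where a clocked stage latches the same word on a single clock edge.

/* Toy model of a four-phase (return-to-zero) req/ack handshake.
 * Illustrative only; not any real asynchronous processor. */
#include <stdbool.h>
#include <stdio.h>

static bool req, ack;          /* the two handshake wires        */
static unsigned transitions;   /* count every wire toggle        */

static void toggle(bool *wire, bool value, const char *name)
{
    *wire = value;
    transitions++;
    printf("%s -> %d\n", name, (int)value);
}

/* One data transfer costs four transitions. */
static void transfer_one_word(void)
{
    toggle(&req, true,  "req");   /* sender: data is valid          */
    toggle(&ack, true,  "ack");   /* receiver: data latched         */
    toggle(&req, false, "req");   /* sender: withdraw request       */
    toggle(&ack, false, "ack");   /* receiver: ready for next word  */
}

int main(void)
{
    for (int i = 0; i < 4; i++)
        transfer_one_word();
    printf("%u wire transitions to move 4 words; a synchronous stage\n"
           "latches each word on a single clock edge\n", transitions);
    return 0;
}

Run it flat out and the handshake wires toggle four times per word, per stage - the "more signaling flying around" problem in miniature.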
In reality, the only thing the asynchronous movement is telling me right now is that nobody knows how to manage clocks intelligently anymore. Translation: too many designers only know Verilog/VHDL and can't do real, physical-layer, transistor-level design.
Apple changed their entire underlying computer infrastructure because they couldn't design a chipset. That's pretty bad.
-a
