Andrew Lentvorski wrote:

Paul G. Allen wrote:

I worked on video compression hardware for a company that produced the first MPEG-2 digital cable and satellite equipment. A competitor paid an IT employee to steal a computer and a hard drive (my test station hard drive, in fact) so that they could get hold of our software and algorithms. I've also had a rival game developer steal my 3D source code from a "private" Windows FTP server and use the algorithms and ideas in their game.


The value there is in the knowledge before release. After the release, they can just reverse engineer your code.

If reverse engineering were so beneficial, they would have waited and done that instead. Personally, I don't care if someone reverse-engineers any of my code. I know that if they have to stoop to that level, I'm good enough to keep ahead of them for the next generation anyway. (In fact, that's what happened with the game software. By the time they implemented what they stole from me, I had gone three steps further.)


Openness does not work in all situations, especially in such a competitive area. Those who think it should work everywhere need to get over it and enter the real world.


Stop. Nobody among the programmers is asking for drivers and source code from the manufacturers.

Bull. I've seen many threads in the OSS community from developers who want exactly that. There are Linux kernel developers who want every bit of data on every chipset and driver. It's simply not realistic, and it's not necessary either.

The companies are spending a lot of time and money on something that the programmers *don't actually want*. The graphics programmers don't want code. They want hardware *specs*.

It would be *less* programming effort to hand out the specs. It would also mean that people could write their own drivers.

Keep your *hot off the presses* software trade secrets. We don't want them anyhow. They aren't as clever as you think; we don't care how you got 3% more Quake framerate.

*You* may not want them, but the competition in the 3D video market is so tight that *they* do want them.


The problem is that there isn't even a stable core to write a driver against. Just producing a spec that enables a simple mapping from OpenGL 1.1 to the hardware would be an order-of-magnitude improvement. This is hardly trade-secret territory anymore.
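To make that concrete, here's a minimal sketch of what a register-level spec buys you. Every name and offset below (REG_VERTEX_*, REG_CMD, the 0x0100 block) is made up for illustration; the point is only that documented registers let a driver author map an OpenGL 1.1-style vertex path straight onto MMIO writes, with no vendor source code involved:

    /* Hypothetical register layout: stand-ins for what a published spec
     * would document, not any real chip. */
    #include <stdint.h>

    #define REG_VERTEX_X    0x0100
    #define REG_VERTEX_Y    0x0104
    #define REG_VERTEX_Z    0x0108
    #define REG_CMD         0x0200
    #define CMD_EMIT_VERTEX 0x1

    static volatile uint32_t *mmio;  /* the card's mapped register space, set up elsewhere */

    static void reg_write(uint32_t offset, uint32_t value)
    {
        mmio[offset / 4] = value;
    }

    /* Roughly what a glVertex3f()-style path boils down to once the
     * registers are documented. */
    void hw_emit_vertex(float x, float y, float z)
    {
        union { float f; uint32_t u; } c;
        c.f = x; reg_write(REG_VERTEX_X, c.u);
        c.f = y; reg_write(REG_VERTEX_Y, c.u);
        c.f = z; reg_write(REG_VERTEX_Z, c.u);
        reg_write(REG_CMD, CMD_EMIT_VERTEX);
    }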

As for the value of algorithms, anybody who wants your algorithms that badly *will reverse engineer the binary code*.

Have you ever reverse-engineered a binary driver? I have, and it's not as easy as one might think. It takes a lot of time and money, and it's simply not worth a company shelling out the resources for the return it would provide.

I've seen hardware analyses where they delaminated the silicon chip and reverse engineered the entire schematic set. There are entire companies devoted to this.

Once you ship, your secrets aren't.

By the time it's done, the next-generation chip is shipping. Again, a waste of resources (unless you're so far behind the curve that even knowing how the previous-generation chip works is a huge boost).


With the recent advent of asynchronous processors (ARM just announced one a couple of months ago), I would expect performance to increase and heat dissipation to decrease in the near future. It may take some time to redesign GPUs and CPUs around an asynchronous architecture, but I believe that's the way the industry may have to go.


Sigh. I have been hearing about the asynchronous processor thing for the last 15 years. It is no closer than it has ever been.

http://www.arm.com/news/12013.html


Asynchronous, in theory, is much better when the processor is mostly idle. Without clocks, nothing is burning power just to give the processor a heartbeat. However, when the processor is running flat out, there is actually *more* signaling flying around, not less. Every transaction requires an acknowledgment.
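For what it's worth, that acknowledgment overhead is easy to picture with a toy model. The sketch below is a single-threaded C model of a four-phase handshake; the transfer count and the transition accounting are illustrative only, not taken from any real design:

    /* Toy model: each transfer costs four signal transitions (req up,
     * ack up, req down, ack down), where a clocked pipeline stage would
     * latch on a single clock edge. */
    #include <stdio.h>

    int main(void)
    {
        const int transfers = 1000;
        int transitions = 0;

        for (int i = 0; i < transfers; i++) {
            transitions++;  /* sender raises req with valid data  */
            transitions++;  /* receiver latches data, raises ack  */
            transitions++;  /* sender drops req                   */
            transitions++;  /* receiver drops ack, ready for next */
        }

        printf("%d transfers cost %d handshake transitions\n",
               transfers, transitions);
        return 0;
    }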

Most processors spend most of their time idle. Power consumption is not the only reason to remove clock signals.


In reality, the only thing that the asynchronous movement is telling me right now is that nobody knows how to intelligently manage clocks anymore. Translation: too many designers only know Verilog/VHDL and can't actually do real, physical-level transistor design.

Running clock signals all over a chip is expensive in timing (propagation delay and skew), power consumption, die area, and other respects. One of the biggest problems with synchronous systems is the propagation delay imposed on the timing signals: no matter how good an engineer is at managing clocks, at some point something will be limited by how long it takes the clock signal to get from point A to point B. This is one reason manufacturers are constantly trying to shrink die sizes.
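A back-of-the-envelope sketch of why that bites (the clock frequency, die span, and wire-delay numbers below are assumed, illustrative values, not measurements from any particular process):

    /* Illustrative only: all three inputs are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        double freq_ghz        = 2.0;   /* assumed core clock                     */
        double die_span_mm     = 15.0;  /* assumed clock-root-to-corner distance  */
        double delay_ps_per_mm = 50.0;  /* assumed buffered global-wire delay     */

        double period_ps = 1000.0 / freq_ghz;              /* 500 ps at 2 GHz         */
        double travel_ps = die_span_mm * delay_ps_per_mm;  /* 750 ps corner to corner */

        printf("clock period %.0f ps, clock travel time %.0f ps (%.1fx the period)\n",
               period_ps, travel_ps, travel_ps / period_ps);
        return 0;
    }

With numbers in that neighborhood, the edge reaching the far corner arrives more than a full cycle late, which is exactly why clock trees, skew budgets, and die shrinks eat so much design effort.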

I'm done with this thread (That is, I hope I'm not sucked back into it. Quick, someone compare 3D video to Hitler and Nazis! ;) )

PGA
--
Paul G. Allen
Owner, Sr. Engineer, BSIT/SE
Random Logic Consulting Services
www.randomlogic.com


