On Tue, 2005-10-11 at 09:56 -0700, Nick Kelsey wrote:
> The targets we use are all embedded processors, often with less than 64k
> of RAM - with this approach the bytecode isn't stored, it gets
> translated on the fly as it is downloaded. My main test target is a
> Ubicom ip3k and the translator uses less than 4k of code space... less
> space than a VM would :-)
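Just to make the on-the-fly part concrete for other readers, here is a
minimal sketch of what such a streaming translator loop might look like.
The opcode numbers, native encodings, recv_byte() and emit_native() are
all invented for illustration - nothing here is Ubicom-specific:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical stream source: returns the next bytecode byte as it
       arrives over the download link, or -1 at end of stream. */
    extern int recv_byte(void);

    /* Hypothetical emitter: appends native instruction bytes for the
       target directly into executable memory. */
    extern void emit_native(const uint8_t *code, size_t len);

    /* Translate bytecode to native code as it is downloaded; the
       bytecode is never buffered, so RAM use stays constant no matter
       how large the program is. */
    void translate_stream(void)
    {
        int op;
        while ((op = recv_byte()) >= 0) {
            switch (op) {
            case 0x01: {                  /* invented "load immediate" */
                uint8_t native[2];
                native[0] = 0x2A;         /* invented native encoding */
                native[1] = (uint8_t)recv_byte();
                emit_native(native, sizeof native);
                break;
            }
            case 0x02: {                  /* invented "add" */
                static const uint8_t add_insn[] = { 0x3B };
                emit_native(add_insn, sizeof add_insn);
                break;
            }
            default:
                return;                   /* unknown opcode: abort */
            }
        }
    }

The point being that only the translator's few kilobytes of code plus the
emitted native code ever occupy memory; the bytecode itself is consumed
as it arrives.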
I'm curious why this was done. Are you using many embedded machines with
different architectures and CPU types, all running the same code? Have
you run any performance tests to see whether there is a significant
difference between code generated by GCC targeted specifically at that
system and code generated by the dual-layer approach of tcc plus a
translator?

It's very similar to what the Taos system used and what Amiga was going
to license. I'm not sure how far they got, but the idea was that any
application, game, library, or device driver could run on nearly any
platform. The virtual CPU was quite high-level, having been designed by
demo coders who had begun using large macro libraries, so the virtual
machine code was often smaller than the equivalent host code and could be
loaded and translated quickly.

They made all of the GCC tools generate code for the virtual CPU, and the
code was translated (with a small translator written in assembler), and
sometimes optimized for the host CPU, as it was loaded off disk, keeping
the CPU busy translating while it waited for disk IO. In addition,
translated libraries were cached in host format to further reduce the
load & translate time (a rough sketch of that load path follows below).

Since it was GCC and not a higher-level language such as Java, they could
actually have device drivers in the virtual CPU format as well. Java
simply had its own translator that translated to the virtual CPU format,
and the result could then be translated by the host translator.
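To make the caching idea concrete, here is a rough sketch in C of what
that load path might have looked like; translate(), the cache file
layout, and the timestamp check are my own assumptions, not details of
how Taos actually did it:

    #include <stdio.h>
    #include <sys/stat.h>

    /* Hypothetical translator: reads virtual-CPU code from 'in' and
       writes host-native code to 'out'; returns 0 on success. */
    extern int translate(FILE *in, FILE *out);

    /* Load a library, preferring a cached host-format translation;
       fall back to translating the virtual-CPU image and caching the
       result for next time. */
    FILE *load_library(const char *vm_path, const char *cache_path)
    {
        struct stat vm_st, cache_st;

        if (stat(vm_path, &vm_st) != 0)
            return NULL;              /* no virtual-CPU image at all */

        /* Reuse the cached translation if it is still up to date. */
        if (stat(cache_path, &cache_st) == 0 &&
            cache_st.st_mtime >= vm_st.st_mtime)
            return fopen(cache_path, "rb");

        FILE *in = fopen(vm_path, "rb");
        FILE *out = fopen(cache_path, "wb");
        if (!in || !out || translate(in, out) != 0) {
            if (in) fclose(in);
            if (out) fclose(out);
            remove(cache_path);       /* don't leave a bad cache entry */
            return NULL;
        }
        fclose(in);
        fclose(out);
        return fopen(cache_path, "rb");  /* caller loads host code */
    }

Keying the cache on the virtual-CPU image's timestamp means a rebuilt
library gets retranslated automatically, while everything else loads
straight from host-format code with no translation cost at all.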
