On Feb 16, 2007, at 13:44, Ben Scott wrote:
> One is my understanding: Even if you're working with a 64-bit architecture, isn't most software still dealing with 32-bit values? Does throughput double without re-writing all the code to take advantage of that?
I recall reading somewhere (I'll never find the reference, I'm sure) that some compilers can optimize two 32-bit operations (say, in a loop) into a single 64-bit operation. Those optimizations depend on conditions the compiler can establish a priori, though, much like loop optimizations in general. And you'd have to assume that loading four operands into, say, an SSE3 unit wouldn't be a better way to spend the time.
I'm not sure what kind of operations would benefit from doing two at a time rather than four at a time. My first guess would be IEEE floating point, or something with hardware assist that can't be fed into an integer vector unit. But I'm consciously avoiding thinking about the binary representation here, because that kind of thing usually just results in neuronal injury on a Friday evening.
-Bill

-----
Bill McGonigle, Owner           Work: 603.448.4440
BFC Computing, LLC              Home: 603.448.1668
[EMAIL PROTECTED]               Cell: 603.252.2606
http://www.bfccomputing.com/    Page: 603.442.1833
Blog: http://blog.bfccomputing.com/
VCard: http://bfccomputing.com/vcard/bill.vcf

_______________________________________________
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/