It occurs to me that much of what is being written here is optimizing for CPU and not for RAM. Is this true? RAM has often been the limiting factor in scientific computing: having 16x the RAM can mean a major change in what you're able to accomplish. RAM, consequently, is much dicier to optimize for, since getting a lot of it can be very expensive.
Let me pose a thought experiment that I went through with a friend of mine one day when we discovered a machine with 1 TB of RAM. What would you do differently with 1 TB of RAM? Assume (because it is) that it's all directly addressable and all very fast. To put it more sharply: would you be willing to trade your ideal current setup for a single-core, single-CPU modern Intel machine running at the low end of the ops/sec range, if you had 1 TB of RAM to play with? How would/could you rewrite your code to take advantage of this?

s.

_______________________________________________
Computer-go mailing list
[email protected]
http://dvandva.org/cgi-bin/mailman/listinfo/computer-go
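As a rough way to make the thought experiment concrete, here is a back-of-envelope sketch of how many fixed-size table entries (say, transposition-table slots for a Go engine) a fully RAM-resident table could hold at 16 GiB versus 1 TiB. The 64-byte entry size is a hypothetical figure chosen purely for illustration, not a claim about any particular engine.

```python
# Back-of-envelope: how many fixed-size entries fit entirely in RAM.
# ENTRY_BYTES = 64 is a hypothetical size (e.g. a Zobrist hash key
# plus win/visit counts and a best-move field) used for illustration.
ENTRY_BYTES = 64

GIB = 2**30  # gibibyte
TIB = 2**40  # tebibyte

def entries_that_fit(ram_bytes, entry_bytes=ENTRY_BYTES):
    """Number of entries a flat, fully RAM-resident table can hold."""
    return ram_bytes // entry_bytes

small = entries_that_fit(16 * GIB)  # a typical workstation
big = entries_that_fit(1 * TIB)     # the 1 TB machine in question

print(f"16 GiB: {small:,} entries")
print(f" 1 TiB: {big:,} entries")
print(f"ratio:  {big // small}x")
```

Under these assumptions the 1 TiB machine caches 64x as many positions (~1.7e10 versus ~2.7e8), which suggests one answer to the question: trade recomputation for lookup, and keep structures in memory that would otherwise live on disk or be regenerated on demand.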
