Hi Eric,

> Could you expand on this a bit? Solaris implements different policy
> for NUMA and CMT (although affinity and load balancing
> tends to be a common theme). What sort of simulation / experiments
> did you have in mind?
We use Simics to simulate a wide variety of memory-system configurations. Right now, we simulate database and web workloads on Solaris 9. However, I am particularly interested in a memory-system optimization that makes accessing local memory quite fast but avoids some of the hardware traditionally required to build a NUMA machine. In other words, I want to simulate a NUMA machine that I think could be built using some of my ideas.

Right now, Simics tells Solaris that all of the memory is on a single board, even though my add-on module to Simics actually models the NUMA timing. The bottom line is that we currently model NUMA timing, but Solaris does not do any memory-placement optimization because it thinks all of the memory is on one board. So to get memory-placement optimization working, I believe I need to bring up a newer version of Solaris, and possibly get Simics to properly describe the NUMA hardware to Solaris.

I posted the same question in the code group. Apparently, on SPARC the platform-specific files statically define the lgroups, and many/most of those platform-specific files are *not* included with OpenSolaris. So it seems my best option may be to add a mechanism (a system call or something similar) that lets me define lgroups manually before running my database workload. Alternatively, I could go with the Opteron approach and have Solaris probe the memory system at boot time to figure out the lgroups dynamically. The latter might be easiest, because it is easy for me to affect the timing in Simics, but it is not so easy to make Simics report the right platform information to Solaris (it is a static hardware platform).

Thanks for the interest and response. Would welcome any ideas.

--Mike
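P.S. Once I bring up a newer Solaris that ships the lgroup observability library, my plan for sanity-checking whether the kernel actually sees the topology is a small userland program like the rough, untested sketch below. It just walks the lgroup hierarchy with liblgrp (lgrp_init/lgrp_root/lgrp_children/lgrp_mem_size) and prints the memory installed in each lgroup; the specific output format and the program name are just mine. On my current setup it should report a single lgroup holding all of memory; once Simics (or my manual-definition hack) describes the boards properly, it should report one leaf lgroup per board.

/*
 * lgrps.c -- rough sketch, not yet run on my simulated machine:
 * walk the lgroup hierarchy Solaris exposes through liblgrp and
 * print the memory installed directly in each lgroup.
 *
 * Build with:  cc -o lgrps lgrps.c -llgrp
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/lgrp_user.h>

static void
walk(lgrp_cookie_t cookie, lgrp_id_t lgrp, int depth)
{
        lgrp_id_t *children;
        int nchildren, i;
        lgrp_mem_size_t sz;

        /* Memory physically installed in this lgroup (not its children). */
        sz = lgrp_mem_size(cookie, lgrp, LGRP_MEM_SZ_INSTALLED,
            LGRP_CONTENT_DIRECT);
        (void) printf("%*slgroup %d: %lld MB installed\n", depth * 2, "",
            (int)lgrp, (long long)(sz >> 20));

        /* First call gets the child count, second fills the array. */
        nchildren = lgrp_children(cookie, lgrp, NULL, 0);
        if (nchildren <= 0)
                return;
        children = malloc(nchildren * sizeof (lgrp_id_t));
        if (children == NULL)
                return;
        (void) lgrp_children(cookie, lgrp, children, nchildren);
        for (i = 0; i < nchildren; i++)
                walk(cookie, children[i], depth + 1);
        free(children);
}

int
main(void)
{
        /* LGRP_VIEW_OS shows the whole hierarchy the kernel knows about. */
        lgrp_cookie_t cookie = lgrp_init(LGRP_VIEW_OS);

        if (cookie == LGRP_COOKIE_NONE) {
                perror("lgrp_init");
                return (1);
        }
        (void) printf("%d lgroups total\n", lgrp_nlgrps(cookie));
        walk(cookie, lgrp_root(cookie), 0);
        (void) lgrp_fini(cookie);
        return (0);
}

If this still shows one lgroup with all the memory after I change the platform description, I'll know the problem is in how the topology is handed to the kernel rather than in the placement code itself.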