> First, so far my understanding is that there is support to run two (or
> more?) simulated machines, which can communicate with each other
> through the EtherLink network fabric. Is this right?

You can put as many simulated machines in the simulation as you want, but
we currently only have a model for a point-to-point ethernet link (there
is no ethernet switch model). We have done two- and three-system
simulations (in the three-system sim, one machine was a proxy or a
firewall and traffic was passed across the link). To be honest, an
ethernet switch model is something that is pretty easy to implement; we
just never got it into the tree.
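Roughly, the wiring for two machines looks like the sketch below. This is
a from-memory paraphrase of the dual-root helper in configs/common
(makeDualRoot, I think), so treat the names as hints and double-check them
against the scripts in the tree; the tsunami.ethernet path assumes the
Alpha/Tsunami platform.

# From-memory sketch of joining two simulated machines with an EtherLink.
# test_sys and drive_sys are two fully built System objects, each with
# its own NIC; the tsunami.ethernet names assume the Alpha/Tsunami
# platform, so adjust for your platform.
from m5.objects import Root, EtherLink
from m5.proxy import Parent

def connect_dual(test_sys, drive_sys):
    root = Root(full_system=True)
    root.testsys = test_sys
    root.drivesys = drive_sys
    # A single point-to-point link; since there is no switch model, a
    # three-machine setup needs two of these links and two NICs in the
    # middle machine.
    root.etherlink = EtherLink()
    root.etherlink.int0 = Parent.testsys.tsunami.ethernet.interface
    root.etherlink.int1 = Parent.drivesys.tsunami.ethernet.interface
    return root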
> Also, is there
> any reference you would recommend that explains how to set up a
> simulation of two such machines?

I don't believe that there is any existing documentation, but if you use
fs.py from configs/example with the --dual option, it will create one for
you. There is also at least one dual-system regression test in the tests
directory; tests/configs/twosys-tsunami-simple-atomic.py sets it up.

> Second, I am wondering whether there is any existing infrastructure
> that enables talking to a simulated machine from the outside world.
> Specifically, whether I can run a client on a real machine, and
> have the client communicate with a server running on the target
> simulated machine. While I could potentially run the server on one
> simulated machine and the client on another simulated machine, the
> hypothesis is that by running the client outside we might be able to
> save simulation time (but by how much is admittedly debatable).

This has worked in the past, but it likely does not work right now, though
it shouldn't be too difficult to fix. Basically, instead of connecting a
system to an etherlink device, you connect it to the ethertap device,
which connects it to the real world. My guess is that the ethertap device
is not working right now; I created it probably 10 years ago, and it
wasn't exactly stable at the time. If you were to make it work with the
tap(4) device in the linux kernel (which is pretty darned easy), I'm sure
that it would be much more reliable; tap(4) wasn't generally available way
back then. (There's a rough sketch of the host-side tap(4) plumbing at the
end of this mail.)

You'll also need to make sure that the simulated clock doesn't run too
fast (an idle system simulates pretty fast :). If it gets too fast, then
TCP sessions will time out and stuff like that. Gabe Black implemented a
special event that prevents the simulator clock from running ahead of real
time. I'm pretty sure that he committed it, and you can look through
either the mail archive or the commit history to figure that part out.
(Or Gabe could help you out.)

All that said, I'm not sure that you want to connect your system to the
real world, at least not without creating some infrastructure around it.
The basic problem is that the real world is non-deterministic, so you
could never get a deterministic simulation while connecting to the real
world. None of us could accept that, so ethertap was always a cute trick
rather than something truly useful for simulation. What we generally do is
have the "drive" system use a very simplistic, completely non-detailed CPU
model. Its performance ends up being at least 10x better than the "test"
system (assuming that your test system has a cache hierarchy or a detailed
CPU model or something like that).

If you want determinism and you also want the drive systems to be fast, I
see two possibilities (either of which would be awesome to have in gem5):

1) Finish the work of having gem5 be the device model for linux KVM. Gabe
   Black started some work on that, but got stuck on an issue with the
   clock, I believe. It would be really excellent to get this working.

2) There are user-space TCP stacks out there, and with some clever work
   using ptrace and a user-space TCP stack, you could probably get your
   timing right.

Hope that helps. And if you solve any problems in this area *please*
contribute your code back to the community.

Nate
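P.S. Since I claimed the tap(4) route is easy, here is a rough, untested
sketch of all the host-side plumbing it needs: just the standard Linux
TUNSETIFF ioctl (nothing like this is in the tree today). The "tap0" name
is arbitrary, you need root or CAP_NET_ADMIN, and you still have to
configure the interface on the host (ip link set tap0 up, then give it an
address or bridge it).

# Untested sketch: open a Linux tap(4) device and move raw ethernet
# frames through it.  Constants are from <linux/if_tun.h>.
import fcntl, os, struct

TUNSETIFF = 0x400454ca
IFF_TAP   = 0x0002    # deliver ethernet frames (IFF_TUN would give IP packets)
IFF_NO_PI = 0x1000    # don't prepend the packet-info header

fd = os.open("/dev/net/tun", os.O_RDWR)
fcntl.ioctl(fd, TUNSETIFF, struct.pack("16sH", b"tap0", IFF_TAP | IFF_NO_PI))

# Every read()/write() on fd is one complete ethernet frame.
frame = os.read(fd, 2048)
os.write(fd, frame)        # echo it straight back, just to show the API

A reworked ethertap device would do the equivalent in C++ and feed the
frames to the simulated NIC, which is exactly where keeping the simulated
clock from running ahead of real time starts to matter.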