On 2014-02-10 16:49, Steve Reinhardt wrote:

On Mon, Feb 10, 2014 at 6:56 AM, Andreas Sandberg <[email protected]> wrote:

    This is an automatically generated e-mail. To reply, visit:
    http://reviews.gem5.org/r/2159/


        On February 10th, 2014, 12:28 a.m. CET, *Steve Reinhardt* wrote:

            You comment that "we are currently limited to configurations where there 
is only one KVM CPU per event queue"... can you expand on this?  I.e., why can't I 
have multiple KVM CPUs per event queue and timeslice among them?  Or is that not what you 
meant?

        On February 10th, 2014, 10:45 a.m. CET, *Andreas Sandberg* wrote:

            The limitation at the moment is that we can't multiplex between 
KVM CPUs in the same event queue. This is caused by the way we simulate until 
the next event: we look into the future and request an exit when the next event 
is due. If there is another KVM CPU executing, we'll see that CPU's event on 
the same or a nearby tick. It might be possible to get multiplexing to work by 
ignoring KVM events when we look into the local event queue to calculate when 
to exit from KVM.

        On February 10th, 2014, 3:18 p.m. CET, *Steve Reinhardt* wrote:

            I see.  So this would be a limitation with the current 
single-threaded model as well, right?  I guess I was reading too much into it 
and thinking this was a new limitation due to the multiple queues.

    That is correct. Without this patch, the KVM implementation only supports 
one CPU at a time.
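For concreteness, the exit-tick calculation discussed in the quoted exchange could be sketched roughly like this (plain Python with hypothetical names; this is not the actual gem5 implementation). The skip_kvm flag corresponds to the suggested workaround of ignoring other KVM CPUs' events when looking into the local event queue:

```python
# Hypothetical sketch: each pending event is a (tick, kind) pair, where
# kind is "kvm" for another KVM CPU's scheduling event and "other" for
# everything else (devices, timers, ...).
def next_exit_tick(event_queue, skip_kvm=True):
    """Return the tick at which KVM should request an exit.

    With skip_kvm=True, other KVM CPUs' events are ignored when looking
    into the future, so a co-scheduled KVM CPU on a nearby tick no
    longer forces an immediate exit."""
    for tick, kind in sorted(event_queue):
        if skip_kvm and kind == "kvm":
            continue
        return tick
    return None  # no pending non-KVM events
```

Without skip_kvm, a second KVM CPU's event at a nearby tick would be returned as the exit deadline, which is exactly the multiplexing problem described above.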


Getting off the topic of this patch, but a potential solution to having # KVM CPUs > # threads is to stay with the one-kvm-cpu-per-event-queue constraint, but instead multiplex multiple event queues onto a single thread. That would have the advantage that whatever inter-event-queue synchronization we use will automatically apply to intra-thread as well as inter-thread sync. In fact, if you have enough event queues, you might not even want to tie an event queue to a particular thread, so that you can load balance by having threads dynamically choose an event queue to process.

That keeps the KVM code simpler, but would require some work on the event queue model, as I think the 1:1 mapping of threads to event queues is pretty well baked into the current design.
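A very rough sketch of that load-balancing idea (hypothetical names; nothing like the actual gem5 event queue API): a pool of worker threads dynamically claims event queues and drains each one for the current quantum, so the queue-to-thread mapping is no longer fixed.

```python
import queue
import threading

# Hypothetical sketch: multiplex N event queues onto M worker threads.
# Each "event queue" here is just a list of callables for one quantum.
def run_quantum(event_queues, num_threads):
    work = queue.Queue()
    for eq in event_queues:
        work.put(eq)

    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                eq = work.get_nowait()  # dynamically claim a queue
            except queue.Empty:
                return
            for event in eq:            # drain this queue's quantum
                r = event()
                with lock:
                    results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The real design would of course need a synchronization barrier between quanta; this only illustrates the threads-pick-queues scheduling.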


That would solve a lot of nasty synchronization issues, particularly in the memory system. There might still be strange issues where events are scheduled in the past if this isn't done carefully. For example, consider a system with two CPUs sharing memory, with devices living in CPU1's event queue:

1. CPU1 is allowed to execute its time quantum.
2. CPU2 starts its quantum and executes an IO instruction early.
3. Devices schedule an event in CPU1's queue as a result of the IO.

In this case, step 3 could lead to an event being scheduled in the past (unless the time quantum is small enough). We probably want to be able to support quanta that are longer than the critical latency (a bit like Graphite) if we can guarantee that application-visible synchronization is correct (which is trivial in KVM, since all memory accesses go straight to the backing store).
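A toy illustration of the hazard in steps 1-3 (the QUANTUM and IO_LATENCY numbers are made up): CPU1 has already run to the end of its quantum when CPU2's early IO produces a device response destined for CPU1's queue.

```python
# Hypothetical sketch of the scheduled-in-the-past hazard. Each per-CPU
# event queue tracks its current tick; scheduling an event behind a
# queue's current position is an error.
QUANTUM = 1000     # assumed quantum length (ticks)
IO_LATENCY = 100   # assumed device response latency (ticks)

def schedule(target_queue_tick, when):
    """Schedule an event on a queue currently at target_queue_tick."""
    if when < target_queue_tick:
        raise ValueError("event scheduled in the past")
    return when

def io_response_tick(io_tick, io_latency):
    # Tick at which the device response lands on CPU1's queue.
    return io_tick + io_latency
```

With QUANTUM = 1000, CPU1 sits at tick 1000 when CPU2's IO at tick 50 produces a response for tick 150, so schedule() fails. With a quantum no longer than the critical latency (QUANTUM <= IO_LATENCY), the response can never land behind the other queue.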

A possible workaround would be to execute all devices in a separate queue and require that all CPU queues execute before the device queue. That way, things never get scheduled in the past (the current device time quantum is 'built' by the CPUs and then executed as a whole). This assumes that devices never schedule events in a CPU's event queue (which might happen indirectly as a response to an interrupt request).
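A sketch of that ordering (again hypothetical names, not the gem5 API): every CPU queue is drained to the end of the quantum first, collecting any device events its IO produces, and only then does the device queue run, so those events are always in the device queue's future.

```python
# Hypothetical sketch of the CPU-queues-before-device-queue workaround.
# Each CPU queue is a list of (tick, action); an action may return a
# (tick, payload) device event or None.
def run_quantum(cpu_queues, device_handler, quantum_end):
    device_events = []  # device events 'built' by the CPUs this quantum
    for cq in cpu_queues:
        for tick, action in cq:
            if tick >= quantum_end:
                break
            produced = action(tick)
            if produced is not None:
                device_events.append(produced)
    # The device queue executes after all CPU queues, in tick order, so
    # nothing can be scheduled behind it.
    return [device_handler(tick, payload)
            for tick, payload in sorted(device_events)]
```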

Yet another option (which is probably equivalent to the previous option) is to just use one event queue and ignore the other CPUs when calculating the time to execute in KVM. If we enforce a maximum time a CPU can execute at a time (similar to the quantum in a multi-queue setup), we should be able to get a reasonable interleaving.
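In that single-queue variant, the exit tick would simply be capped by the quantum; a minimal sketch (hypothetical names):

```python
# Hypothetical sketch: a KVM CPU exits at the earlier of the next
# non-KVM event and the end of its allowed quantum; other KVM CPUs'
# events are ignored when computing the deadline.
def kvm_exit_tick(now, next_other_event, max_quantum):
    if next_other_event is None:
        return now + max_quantum
    return min(next_other_event, now + max_quantum)
```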

//Andreas

_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev
