Ulrich Weigand wrote:
>
> Kevin Lawton wrote:
>
> > For the moment, forget about the physical page holding
> > currently executing code, and any linear addresses which
> > map to it. They are handled differently.
>
> I'm not sure this is necessarily a good idea, as it means
> that every time the execution flow crosses a page boundary,
> it *must* trap (otherwise the monitor wouldn't notice when
> the current page changes ...). Suppose you have an otherwise
> harmless, but very speed-critical inner loop that just happens
> to cross a page boundary ...
>
> I'd prefer not having to treat the 'current' page specially.
> Why can't we simply have *multiple* special mappings in the
> ITLB cache? Sure, that cache will overflow, but whenever
> that happens, we get a fault that we can then handle.
Sure, I mentioned the idea of working with groups or
clusters of pages some time ago. We can do that. The first
priority is to get things working; we can add more performance
later. It'll be interesting to see how much more speed we can
squeeze out with page grouping. The tradeoff I remember
mentioning was that if you have to invalidate a page containing
vcode, you have to invalidate the whole cluster, because of
branches among those pages. This might be a good reason to go
with option (1) I mentioned, so we can trap on a write and
potentially invalidate only a minimal part of the virtualized
code (or none of it, if it was a data access).
-Kevin