Hi,

On Wed, 2011-04-13 at 20:08 +0200, Norman Feske wrote:
> It would be much better to use physical addresses as cache indices.

This cannot be done without reducing CPU performance: we'd typically
need two extra pipeline stages (which would incur penalties at each
branch) to translate the instruction and data addresses before they
reach the caches.
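
To make the dependency explicit, here's a toy C model of the two
indexing schemes (names, line size and cache geometry are invented for
illustration; this is not the actual RTL):

#include <stdint.h>

#define LINES     256
#define INDEX(a)  (((a) >> 4) & (LINES - 1)) /* 16-byte lines, made up */

struct line { uint32_t tag; uint32_t data[4]; };
static struct line icache[LINES];

/* Stand-in for the hardware TLB; in the pipeline this is where the
 * extra stages go. */
static uint32_t tlb_lookup(uint32_t vaddr)
{
    return vaddr; /* identity mapping, just for the sketch */
}

/* Virtually indexed: the cache RAM read can start as soon as the
 * virtual address is known. */
static struct line *lookup_virt_indexed(uint32_t vaddr)
{
    return &icache[INDEX(vaddr)];
}

/* Physically indexed: the cache RAM read depends on the TLB output,
 * so translation must complete first. */
static struct line *lookup_phys_indexed(uint32_t vaddr)
{
    uint32_t paddr = tlb_lookup(vaddr); /* the two extra stages */
    return &icache[INDEX(paddr)];
}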

> If for some reason the use of physical addresses as indices for the
> cache is not possible, tagging cache lines with a task ID would be a
> viable alternative: a cache hit is considered only if the address
> matches and the cache line's tag equals the value of a task-ID
> register (which gets changed when the kernel switches address
> spaces). This way, when switching address spaces back and forth,
> untouched cache lines remain populated.

That's an idea, but:
1) It would limit the number of concurrently loaded processes, since
wide tags would be slow to compare and would use significant on-chip
memory (see the hit-condition sketch after this list).
2) Does it bring a real gain compared to flushing the caches at each
context switch? In other words, what percentage of the cache data
loaded for one task will not get evicted before the kernel schedules
that task again? A time slice of ~20 ms is, at say 100 MHz, two
million cycles against caches holding only a few thousand lines, so
the running task has plenty of time to replace nearly the entire
cache contents...
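
For reference, here's roughly what the proposed hit condition looks
like in a toy C model (field and register names are invented, not
anything from the actual design):

#include <stdbool.h>
#include <stdint.h>

struct line {
    bool     valid;
    uint8_t  asid; /* task ID the line was filled under */
    uint32_t tag;  /* address tag */
};

/* Written by the kernel whenever it switches address spaces. */
static uint8_t current_asid;

/* Hit condition: the address tag AND the task ID must both match.
 * The wider the ASID field, the wider (and slower) this comparator
 * gets and the more block RAM the tag store eats, hence point 1. */
static bool cache_hit(const struct line *l, uint32_t addr_tag)
{
    return l->valid && l->tag == addr_tag && l->asid == current_asid;
}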

> The handling of user mode and privileged mode is quite simple and
> useful: as soon as the CPU switches to privileged mode, virtual
> memory gets disabled altogether.

Looks good.
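
A toy C sketch of that rule, with hypothetical names:

#include <stdbool.h>
#include <stdint.h>

static bool privileged; /* current CPU mode, hypothetical flag */

static uint32_t tlb_lookup(uint32_t vaddr)
{
    return vaddr; /* placeholder for the real TLB */
}

/* Privileged-mode accesses bypass translation entirely; only user
 * mode goes through the TLB. */
static uint32_t translate(uint32_t vaddr)
{
    return privileged ? vaddr : tlb_lookup(vaddr);
}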

S.
