Hi,

On Thu, 2011-04-14 at 19:37 +0200, Norman Feske wrote:
> which register size is considered big? I think 8 bit would be enough -
> but I have no idea if this is already imposing a problem. Anyway, having
> a small task ID register would be better than having none.

An 8-bit ID should be doable. It would typically use ~1 kilobyte of
extra on-chip memory, and has a chance of fitting within the timing
budget.
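(Back-of-the-envelope, with the line size chosen purely for the sake of
the example: two 4KB caches with 8-byte lines give 2 * 512 = 1024
lines, so one byte of task ID per line comes to about 1KB of extra tag
RAM.)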

> In the event
> the number of tasks exceeds the capacity of the task ID register, the OS
> can still start flushing caches.

Ok.

> > Considering a time slice is ~20ms, this looks like plenty of time
> > for the task execution to replace the quasi totality of the cache
> > contents...
> 
> In practice, context switches happen far more often than that.

Hmm... why not, then. Maybe we could even include a performance counter
in the cache so we could measure, on a live system, the percentage of
cache lines re-used across context switches under this scheme. In any
case it would make for interesting research :)
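
To sketch what I have in mind on the software side (the counter names
and CSR addresses below are entirely made up, this is just an
illustration):

  /* Hypothetical cache performance counters: one counting all data
   * cache hits, one counting hits on lines that were already present
   * before the last context switch. The addresses are placeholders. */
  #define DC_HITS_TOTAL     (*(volatile unsigned int *)0xe000f000)
  #define DC_HITS_RETAINED  (*(volatile unsigned int *)0xe000f004)

  /* Percentage of cache hits served by lines that survived a context
   * switch. */
  static unsigned int cache_reuse_percent(void)
  {
          unsigned int total    = DC_HITS_TOTAL;
          unsigned int retained = DC_HITS_RETAINED;

          return total ? (100 * retained) / total : 0;
  }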

Lars, Michael, Takeshi: any comment about using 8-bit task ID cache tags
in Linux?

> Just
> consider piping a 1 MB file through a UNIX pipe, which is typically
> buffered with 4K.

OTOH, while your scheme would probably give good results for the
instruction cache, the data written to the FIFO by the first process
would miss the data cache when read by the second process because of the
mismatched task IDs.
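
To make the access pattern concrete, the example boils down to
something like this (plain POSIX, error and short-read handling
omitted):

  #include <string.h>
  #include <unistd.h>

  #define CHUNK (4 * 1024)        /* typical pipe buffer size    */
  #define TOTAL (1024 * 1024)     /* 1 MB moved through the pipe */

  int main(void)
  {
          int fd[2];
          static char buf[CHUNK];

          pipe(fd);

          if (fork() == 0) {
                  /* Producer: each 4KB chunk it writes is read back
                   * almost immediately by the consumer, which runs
                   * under a different task ID and therefore misses
                   * the data cache on that data. */
                  memset(buf, 'x', CHUNK);
                  for (int i = 0; i < TOTAL / CHUNK; i++)
                          write(fd[1], buf, CHUNK);
                  return 0;
          }

          /* Consumer */
          for (int i = 0; i < TOTAL / CHUNK; i++)
                  read(fd[0], buf, CHUNK);
          return 0;
  }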

BTW, we are using 4KB I and D caches in the Milkymist SoC, so if we fix
the page size to 4KB as well, we'd avoid aliasing problems entirely (the
L2 cache would be physically indexed and tagged).
Cache size can be increased beyond the page size without causing
aliasing issues by using more associativity. This keeps the hardware and
software fast and simple while still allowing a little flexibility
(though more associativity means more timing problems). And the OS can
flush the caches on implementations that do not satisfy this
property :-)

To sum up: unless I have misunderstood something, if we use a virtually
indexed, physically tagged cache with:
  cache associativity * page size >= cache size
we can happily context switch without taking care of the cache at all,
and without unnecessary cache flushes, cache misses or CPU pipeline
stalls.
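
A quick sanity check of that condition: the cache index has to fit
entirely within the page offset, i.e. each way must be no larger than
a page (the larger configurations below are just examples, not
anything planned):

  #include <stdio.h>

  /* VIPT no-aliasing condition: cache_size / ways <= page_size,
   * equivalently ways * page_size >= cache_size. */
  static int no_aliasing(unsigned cache_size, unsigned ways,
                         unsigned page_size)
  {
          return cache_size / ways <= page_size;
  }

  int main(void)
  {
          printf("%d\n", no_aliasing(4096, 1, 4096)); /* 4KB direct-mapped: ok       */
          printf("%d\n", no_aliasing(8192, 1, 4096)); /* 8KB direct-mapped: aliasing */
          printf("%d\n", no_aliasing(8192, 2, 4096)); /* 8KB 2-way: ok               */
          return 0;
  }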

> I would very much appreciate virtual memory, which principally enables (or
> at least greatly simplifies) modern OS features such as on-demand paging
> and dynamically loaded shared libraries.

Sure :)

S.

