On Sun, Feb 20, 2011 at 8:34 PM, Ali Saidi <sa...@umich.edu> wrote:

> The fetch stage seems to have done that forever. I think that because we're
> using RAM as a filesystem for ARM at the moment, it's more prone to pop up
> in this case, but it seems like it's been an issue for a long time.
>
> With my first suggestion the L2 could send the data up to the L1s without
> issue; it's just that the L1s wouldn't assume a cache-line-sized read is
> from another cache above them:
>

Yea, I saw that.  The only difference with my solution is that instead of
giving any L1 an owned block, it would only give the dcache an owned block
and avoid giving one to the icache.  Like I said, slightly more realistic but
otherwise probably insignificant.



> diff -r 5e58eaf00b58 src/mem/cache/cache_impl.hh
> --- a/src/mem/cache/cache_impl.hh       Sat Feb 19 17:32:43 2011 -0600
> +++ b/src/mem/cache/cache_impl.hh       Sun Feb 20 15:39:45 2011 -0600
> @@ -193,7 +193,7 @@
>              blk->trackLoadLocked(pkt);
>          }
>          pkt->setDataFromBlock(blk->data, blkSize);
> -        if (pkt->getSize() == blkSize) {
> +        if (pkt->getSize() == blkSize && !isTopLevel) {
>              // special handling for coherent block requests from
>              // upper-level caches
>              if (pkt->needsExclusive()) {
>
>
> I suppose I could change !isTopLevel to !pkt->req->isInstFetch() and that
> would implement your solution below, correct? The more I think about it, I
> think we really need to use isTopLevel. The problem doesn't end with
> instruction fetch; that is just a special case. I/O devices do full-block
> reads and writes. For whatever reason, if an I/O device did a write of a
> block and then read it back while it lived in the I/O cache, the data could
> be lost there too.
>

Yea, I was thinking you'd have to add a new flag for my scheme rather than
just relying on isInstFetch().
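
To make that concrete, something along these lines is what I had in mind on
top of your patch; isFromCache() here is just a stand-in for a new Request
flag that real caches would set and fetch/DMA requesters wouldn't, not
anything that exists in the tree today:

-        if (pkt->getSize() == blkSize && !isTopLevel) {
+        if (pkt->getSize() == blkSize && pkt->req->isFromCache()) {
             // special handling for coherent block requests from
             // upper-level caches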

Maybe the one case where my alternative would really matter is if you had a
configuration where there were some caches and some non-cache devices that
talked to the same downstream cache, e.g., if you had a DMA device attached
to the L2 in parallel with some L1s (who would do a crazy thing like that?).


> I'm going to add an assert to the fetch stage and to DmaDevice::dmaAction(),
> like assert(pkt->sharedAsserted() || !pkt->memInhibitAsserted()), to catch
> these situations (that will do it, correct? It seems like
> memInhibitAsserted() is being overloaded to mean you have the block in the
> owned state if it's not shared).
>

Yea, that sounds right.  If you think about it, in the normal case of a
cache-to-cache transfer, (!sharedAsserted() && memInhibitAsserted())
does imply that you're being handed ownership of a dirty copy; this code is
just faking that in the situation where it's not a peer cache-to-cache
transfer but you want the same effect at the requester.
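
In other words, the invariant your assert captures at a non-cache requester
(the fetch stage, DmaDevice::dmaAction(), or wherever else the returning
packet gets looked at) is roughly:

    // If some cache inhibited memory (i.e. it supplied the data) without also
    // asserting shared, we were just handed ownership of a dirty block, and a
    // non-cache requester has no way to write that back.
    assert(pkt->sharedAsserted() || !pkt->memInhibitAsserted());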

Steve
