Dear Stavro,

thanks a lot for your answer, it was very helpful. So, it is the getDirectoryLocation function of the SplitDestinationMapper that statically decides in which L2 slice the data *should* be placed and, therefore, where the request should be forwarded. I will continue looking into it.

Thank you again,
Alexandros Daglis

On Tue, 20 Dec 2011 18:12:45 +0000, Volos Stavros wrote:
Dear Alexandre,

The CMP.L2SharedNUCA.[Inorder | OoO] simulators model the last-level cache
as shared. Cache blocks are address-interleaved across the L2 slices
(each tile has an L2 slice that is part of the last-level cache).
The same applies to the directory.

After several function calls in which the L1 request is identified as
a miss, the L1 miss travels to the SplitDestinationMapper (the
BackSideOut_Request port of the L1d cache is connected to the
CacheRequestIn port of the SplitDestinationMapper component), where
the network message is set up.

The function getDirectoryLocation is called to derive the directory
location from the memory address of the message. This function shifts
the address right by theDirShift (i.e., the log of DirInterleaving;
the interleaving granularity is 64 bytes, hence the shift is 6) and
reads as many bits as the log of the number of directories (e.g., for
a 4-core CMP, 2 bits). In this case you read the 7th and 8th bits of
the address to find which directory slice (0, 1, 2, or 3) should
receive the request. The message is then forwarded to the NIC
component through the ToNIC0 port, which is connected to the
FromNode0 port of the NIC component.

Hope this helps.

Regards,
-Stavros.


On Dec 20, 2011, at 12:35 PM, aledaglis wrote:

Greetings.

I have been trying to become familiar with Simflex, aiming to run some simulations on different NUCA configurations. While I have traced the sequence of actions when a hit on an L1 occurs, I am having some trouble finding out exactly how a miss is implemented.

To be more specific:
let's assume we are using the CMP.L2SharedNUCA.Inorder simulator with 4 tiles. I have noticed, for instance, that once a request on tile 0's L1 misses, a new MemoryTransport is created and forwarded through the NIC to the appropriate L2 node (not necessarily tile 0's L2). This is the point where I cannot figure out what is happening: how and where is the L2 destination decided? I can track the process up to the point where the request misses in the L1, and I can see the new MemoryTransport that travels from the L1 to the appropriate L2 tile through the NIC, but I am not able to find what happens in between, i.e., how and where this transport is created.

The request's path, as I have seen:
- CacheController's "handleRequestTransport" calls BaseCacheController's "handleRequestTransport"
- This calls BaseCacheController's "examineRequest"
- This calls MESI's "doRequest", which does a lookup in the array and determines whether it is a hit or a miss.

All of these functions pass a MemoryTransport to each other as a parameter. I suppose that, when a miss is identified, a new MemoryTransport is created somewhere, its destination is determined, and it is then released onto the NIC to reach its (L2) destination. Have I overlooked something? Could anyone please point me to where this happens? I would be really grateful, since I have been stuck at this point for quite some time. I want to experiment with changing the mapping onto the L2, so this point is of crucial importance.

Thank you in advance
Alexandros Daglis
