Hello again,
I've been looking further into this matter and have seen the following:
The packet with its NetDest all zeros is being produced in this part of the
MESI_CMP_directory L2 SLICC:
action(fwm_sendFwdInvToSharersMinusRequestor, "fwm",
       desc="invalidate sharers for request, requestor is sharer") {
  peek(L1RequestIntraChipL2Network_in, RequestMsg) {
    enqueue(L1RequestIntraChipL2Network_out, RequestMsg, latency=to_l1_latency) {
      assert(is_valid(cache_entry));
      out_msg.Addr := address;
      out_msg.Type := CoherenceRequestType:INV;
      out_msg.Requestor := in_msg.Requestor;
      out_msg.Destination := cache_entry.Sharers;
      out_msg.Destination.remove(in_msg.Requestor);
      DPRINTF(RubySlicc, "fwm_sendFwdInvToSharersMinusRequestor INV TO %s\n",
              out_msg.Destination);
      out_msg.MessageSize := MessageSizeType:Request_Control;
    }
  }
}
The problem is that at this point the only sharer is the requestor itself, so
when it is removed the resulting NetDest is all zeros. I believe a packet with
an empty destination set should never be injected into the network.
Does anyone know anything about this?
Thanks a lot for your time,
Kind regards.
________________________________
De: [email protected] [[email protected]] en nombre de
Castillo Villar, Emilio [[email protected]]
Enviado el: viernes, 01 de noviembre de 2013 15:06
Para: [email protected]
Asunto: [gem5-users] RV: Getting an invalidation request with no destination.
Good afternoon,
I am doing some experiments with rev. 9948 (though I have seen the same bug
with revisions from May).
Currently I take a checkpoint inside a benchmark and restore it with Ruby and
the OoO CPU.
If I restore with the simple network model or with flexible garnet, it works
fine. However, when restoring with the fixed garnet network, I hit a
deadlock.
I am using the following parameters:
build/X86_MESI_CMP_directory/gem5.opt -d testing_ckpt/
configs/example/ruby_fs.py --kernel=x86_64-vmlinux-2.6.28.4-smp
--disk-image=x86root.img -n 16 -r1 --num-l2cache=16 --num-dir=16 --topo=Mesh
--mesh-rows=4 --restore-with=detailed --cpu-type=detailed --garnet=fixed
(I fixed a similar problem when restoring with the timing cpu, that fix is now
in the gem5 dev repo.)
I have been tracing the source of this problem, and at some point in the
simulation a packet like the following appears in the NetworkInterface_d.cc
object of the fixed garnet router:
NI:22 Injecting msg [RequestMsg: Addr = [0x98ad80, line 0x98ad80] Type = INV
AccessMode = User Requestor = L1Cache-0 Destination = [NetDest (4) 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 - 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 - 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 - 0 - ] MessageSize = Request_Control DataBlk = [ 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
] Len = 0 Dirty = 0 Prefetch = No Time = 3166840465750000 ]
This message has every possible destination set to 0, which causes the
flitisize method to do nothing at all: it loops over the destinations, and
since none are set, the loop body never executes, yet the method still
returns true, so the NI removes the packet from its message queue. The
problem is that this memory request was already registered in the sequencer's
tables, so after quite some time it surfaces as a deadlock panic.
Now I am trying to determine how this packet got into the network. Does
anyone know what might be happening here?
Thanks a lot for your time,
Kind regards,
_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users