Hi,

I am currently looking at the sendSplitData function in TimingSimpleCPU (cpu/simple/timing.cc:~307), and I'm encountering a problem with the packet sender states when running with Ruby.

After the call to buildSplitPacket, pkt1 and pkt2 have senderState objects of type SplitFragmentSenderState. With Ruby enabled, however, the call to handleReadPacket sends the packet to a RubyPort, and in RubyPort::M5Port::recvTiming (mem/ruby/system/RubyPort.cc:~173) a new senderState of plain type SenderState is pushed onto the packet. (The old senderState is saved inside the new one, and Ruby restores it after the packet transfer completes.) When the stack unwinds back to sendSplitData, the dynamic_cast after handleReadPacket fails because of this type mismatch.

It looks like the senderState variable is used elsewhere as a stack for storing per-layer data while the packet traverses from source to destination and back as a response, which makes sense. What I'm wondering is why the clearFromParent call needs to happen in sendSplitData at all; it seems like it should happen in completeDataAccess, when the packets are cleaned up.

Thanks,
Joel
PS. In sendSplitData, after handleReadPacket(pkt2), there also appears to be a bug with the dynamic_cast and clearFromParent: the cast is performed on pkt1->senderState rather than pkt2->senderState. This doesn't affect correctness, but it does leave stale references that affect deletion of the packets. Is that right?

--
Joel Hestness
PhD Student, Computer Architecture
Dept. of Computer Science, University of Texas - Austin
http://www.cs.utexas.edu/~hestness
_______________________________________________
m5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/m5-dev
