I don't think I changed anything here... hg annotate seems to back me
up on that, too.  I think the fundamental (but subtle) issue here is
that once you successfully send a packet, the ownership for that
packet object is conceptually handed off to the recipient, so
technically the sender shouldn't be dereferencing that packet pointer
anymore.  Thus it's OK for Ruby to push its own senderState into the
packet if it wants.  If I had to guess, I'd say it's just that Gabe
hasn't been testing with Ruby...
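To make the ownership/chaining point concrete, here's a minimal sketch of the push-and-restore pattern. These are illustrative stand-in types, not the real m5 classes (only Packet and SenderState are names from the actual code; RubySenderState, rubyPush, and rubyPop are hypothetical):

```cpp
#include <cassert>
#include <cstddef>

// Minimal stand-ins for the m5 types; illustrative only.
struct SenderState {
    SenderState* saved = nullptr;   // previous senderState, for chaining
    virtual ~SenderState() {}
};

struct Packet {
    SenderState* senderState = nullptr;
};

// What Ruby conceptually does on receive: push its own state, saving
// the sender's state underneath.  port_id is a hypothetical payload.
struct RubySenderState : SenderState {
    int port_id;
    explicit RubySenderState(int id) : port_id(id) {}
};

void rubyPush(Packet* pkt, int port_id) {
    RubySenderState* rs = new RubySenderState(port_id);
    rs->saved = pkt->senderState;    // save the sender's state
    pkt->senderState = rs;           // packet now carries Ruby's state
}

// Before handing the response back up, Ruby pops its state and
// restores the original one.  (Assumes the top of the chain is
// Ruby's own state, which holds if pushes and pops are balanced.)
int rubyPop(Packet* pkt) {
    RubySenderState* rs = static_cast<RubySenderState*>(pkt->senderState);
    pkt->senderState = rs->saved;    // restore the sender's state
    int id = rs->port_id;
    delete rs;
    return id;
}
```

The key invariant is that anyone who pushes a senderState restores the saved one before the packet crosses back over the same boundary, so the layer above sees exactly the pointer it installed.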

(That said, looking over the Ruby code with fresh eyes after not
having thought about it for a while, I think the Ruby code might be
overcomplicated... instead of only tracking the m5 packet pointer in
the ruby request object, then using senderState to look up the port
based on the packet, why don't we just keep both the packet and
the port in the ruby request?)
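As a sketch of that simplification (hypothetical names and keying; the real ruby request type is different), the bookkeeping could be as simple as:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Opaque stand-ins; the real types live in m5/Ruby.
struct Packet {};
struct M5Port {};

// Hypothetical per-request record: keep the packet AND the port
// together, so no senderState lookup is needed on the response path.
struct OutstandingRequest {
    Packet* pkt;
    M5Port* port;
};

// Outstanding requests keyed by a Ruby request id (illustrative).
std::map<uint64_t, OutstandingRequest> outstanding;

void recordRequest(uint64_t id, Packet* pkt, M5Port* port) {
    outstanding[id] = OutstandingRequest{pkt, port};
}

// On completion, both the packet and its port come straight back,
// with no cast or chain walk.
OutstandingRequest completeRequest(uint64_t id) {
    OutstandingRequest r = outstanding.at(id);
    outstanding.erase(id);
    return r;
}
```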

At a high level I think part of the issue with the sendSplitData()
code is that buildSplitPacket doesn't return a pointer to the "big"
packet, so the only way to access it is via the senderState objects of
the sub-packets.  I expect that with some thought we could restructure
the code to be a little cleaner, but Joel's idea of holding on to the
original senderState pointers on the stack seems like a reasonable
interim solution.
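Sketched minimally (sendPacket and sendFragment are hypothetical stand-ins, not the real timing-CPU code; SplitFragmentSenderState and clearFromParent are names from the discussion), the interim fix is just to take the typed pointer before the send and use the local afterwards:

```cpp
#include <cassert>

struct SenderState { virtual ~SenderState() {} };

struct SplitFragmentSenderState : SenderState {
    bool clearedFromParent = false;
    void clearFromParent() { clearedFromParent = true; }  // illustrative body
};

struct Packet { SenderState* senderState = nullptr; };

// Stand-in for handleReadPacket(): a successful send may leave a
// different senderState in the packet (as Ruby does), so the caller
// must not assume pkt->senderState is still its own object.
bool sendPacket(Packet* pkt, SenderState* lowerLevelState) {
    pkt->senderState = lowerLevelState;
    return true;
}

// The interim fix, sketched: grab the typed pointer on the stack
// *before* sending, and use that local afterwards instead of casting
// pkt->senderState again.  (Assumes the caller installed a
// SplitFragmentSenderState, hence the static_cast.)
bool sendFragment(Packet* pkt, SenderState* lowerLevelState) {
    SplitFragmentSenderState* sendState =
        static_cast<SplitFragmentSenderState*>(pkt->senderState);
    bool success = sendPacket(pkt, lowerLevelState);
    if (success)
        sendState->clearFromParent();   // safe: uses the saved pointer
    return success;
}
```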

Steve

On Tue, Aug 17, 2010 at 4:25 PM, Gabriel Michael Black
<[email protected]> wrote:
> This code was worked on a few times by different people, originally by me.
> When I wrote the first version, sender state wasn't chained together like I
> think it is now. I believe my version didn't do anything with sender state
> until it got the packet back, but I don't quite remember. You changed the
> sender state stuff, right Steve? What's the rule as far as when senderState
> can change? Is the code checking it wrong, or the code changing it? This
> would likely affect ARM as well since it can have looser alignment
> requirements and might do a split load.
>
> Gabe
>
> Quoting Joel Hestness <[email protected]>:
>
>> Hi,
>>  I am currently looking at the sendSplitData function in TimingSimpleCPU
>> (cpu/simple/timing.cc:~307), and I'm encountering a problem with the
>> packet sender states when running with Ruby.  After the call to
>> buildSplitPacket, pkt1 and pkt2 have senderState type
>> SplitFragmentSenderState.  However, with Ruby enabled, the call to
>> handleReadPacket sends the packet to a RubyPort, and in
>> RubyPort::M5Port::recvTiming (mem/ruby/system/RubyPort.cc:~173), a new
>> senderState of type SenderState is pushed into the packet (the old
>> senderState is saved in the new one, and Ruby restores it after the
>> packet transfer).  When the stack unwinds back to sendSplitData, the
>> dynamic_cast after handleReadPacket fails because of the type
>> difference.
>>  It looks like the senderState variable is used elsewhere as a stack to
>> store data while the packet traverses from source to destination and on
>> the way back as a response, which makes sense.  I'm wondering why the
>> clearFromParent call needs to happen in sendSplitData, since it seems
>> like it should happen in completeDataAccess when cleaning up the packets.
>>  Thanks,
>>  Joel
>>
>> PS.  In sendSplitData after handleReadPacket(pkt2), it looks like there
>> is a bug with the dynamic_cast and clearFromParent since the cast is
>> called on pkt1->senderState.  This doesn't affect correctness, but it
>> does leave references that affect deletion of the packets.  Is that
>> correct?
>>
>> --
>>  Joel Hestness
>>  PhD Student, Computer Architecture
>>  Dept. of Computer Science, University of Texas - Austin
>>  http://www.cs.utexas.edu/~hestness
>>
>
>
> _______________________________________________
> m5-dev mailing list
> [email protected]
> http://m5sim.org/mailman/listinfo/m5-dev
>