Re: [m5-dev] TimingSimpleCPU, x86: sendSplitData packet sender states

2010-08-19 Thread Gabriel Michael Black

Quoting Steve Reinhardt :


I don't think I changed anything here... hg annotate seems to back me
up on that, too.  I think the fundamental (but subtle) issue here is
that once you successfully send a packet, the ownership for that
packet object is conceptually handed off to the recipient, so
technically the sender shouldn't be dereferencing that packet pointer
anymore.  Thus it's OK for Ruby to push its own senderState into the
packet if it wants.  If I had to guess I think it might just be that
Gabe hasn't been testing with Ruby...



Yes, that's probably true. Mercurial backs you up, but I honestly  
don't remember writing that code. Old age I guess :-).





(That said, looking over the Ruby code with fresh eyes after not
having thought about it for a while, I think the Ruby code might be
overcomplicated... instead of only tracking the m5 packet pointer in
the ruby request object, then using senderState to look up the port
based on the packet, why don't we just keep both the packet and
the port in the ruby request?)

At a high level I think part of the issue with the sendSplitData()
code is that buildSplitPacket doesn't return a pointer to the "big"
packet, so the only way to access it is via the senderState objects of
the sub-packets.  I expect that with some thought we could restructure
the code to be a little cleaner, but Joel's idea of holding on to the
original senderState pointers on the stack seems like a reasonable
interim solution.


That sounds reasonable.

Gabe
___
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev


Re: [m5-dev] TimingSimpleCPU, x86: sendSplitData packet sender states

2010-08-19 Thread Beckmann, Brad


> -Original Message-
> From: m5-dev-boun...@m5sim.org [mailto:m5-dev-boun...@m5sim.org] On
> Behalf Of Steve Reinhardt
> Sent: Thursday, August 19, 2010 3:03 PM
> To: M5 Developer List
> Subject: Re: [m5-dev] TimingSimpleCPU, x86: sendSplitData packet sender
> states
> 
> (That said, looking over the Ruby code with fresh eyes after not
> having thought about it for a while, I think the Ruby code might be
> overcomplicated... instead of only tracking the m5 packet pointer in
> the ruby request object, then using senderState to look up the port
> based on the packet, why don't we just keep both the packet and
> the port in the ruby request?)

Yes, we should do that.  Actually we could incorporate any changes to ruby 
request into the larger fix of cleaning up the libruby code.  Previously, I 
believe there was a plan for some folks at Wisconsin to clean up that code, so 
I've left it relatively untouched.  Can someone at Wisconsin confirm that that 
is still the plan?  Regardless, we can still make the simple change.  I just 
want to make sure there isn't a larger conflicting change in the pipeline.

Brad



> On Tue, Aug 17, 2010 at 4:25 PM, Gabriel Michael Black
>  wrote:
> > This code was worked on a few times by different people, originally by me.
> > When I wrote the first version, sender state wasn't chained together like I
> > think it is now. I believe my version didn't do anything with sender state
> > until it got the packet back, but I don't quite remember. You changed the
> > sender state stuff, right Steve? What's the rule as far as when senderstate
> > can change? Is the code checking it wrong, or the code changing it? This
> > would likely affect ARM as well since it can have looser alignment
> > requirements and might do a split load.
> >
> > Gabe
> >
> > Quoting Joel Hestness :
> >
> >> Hi,
> >>  I am currently looking at the sendSplitData function in TimingSimpleCPU
> >> (cpu/simple/timing.cc:~307), and I'm encountering a problem with the packet
> >> sender states when running with Ruby.  After the call to buildSplitPacket,
> >> pkt1 and pkt2 have senderState type SplitFragmentSenderState.  However, with
> >> Ruby enabled, the call to handleReadPacket sends the packet to a RubyPort,
> >> and in RubyPort::M5Port::recvTiming (mem/ruby/system/RubyPort.cc:~173), a
> >> new senderState is pushed into the packet that has type SenderState (note
> >> that the old senderState is saved in the new senderState; after the packet
> >> transfer, Ruby restores the old senderState).  When the stack unwinds back
> >> to sendSplitData, the dynamic_cast after handleReadPacket fails because of
> >> the type difference.
> >>  It looks like the senderState variable is used elsewhere as a stack to
> >> store data while the packet traverses from source to destination and on the
> >> way back as a response, which makes sense.  I'm wondering why the
> >> clearFromParent call needs to happen in sendSplitData, since it seems like
> >> it should happen in completeDataAccess when cleaning up the packets.
> >>  Thanks,
> >>  Joel
> >>
> >> PS.  In sendSplitData after handleReadPacket(pkt2), it looks like there is a
> >> bug with the dynamic_cast and clearFromParent since the cast is called on
> >> pkt1->senderState.  This doesn't affect correctness, but it does leave
> >> references that affect deletion of the packets.  Is that correct?
> >>
> >> --
> >>  Joel Hestness
> >>  PhD Student, Computer Architecture
> >>  Dept. of Computer Science, University of Texas - Austin
> >>  http://www.cs.utexas.edu/~hestness
> >>
> >
> >
> > ___
> > m5-dev mailing list
> > m5-dev@m5sim.org
> > http://m5sim.org/mailman/listinfo/m5-dev
> >
> ___
> m5-dev mailing list
> m5-dev@m5sim.org
> http://m5sim.org/mailman/listinfo/m5-dev




Re: [m5-dev] TimingSimpleCPU, x86: sendSplitData packet sender states

2010-08-19 Thread Steve Reinhardt
I don't think I changed anything here... hg annotate seems to back me
up on that, too.  I think the fundamental (but subtle) issue here is
that once you successfully send a packet, the ownership for that
packet object is conceptually handed off to the recipient, so
technically the sender shouldn't be dereferencing that packet pointer
anymore.  Thus it's OK for Ruby to push its own senderState into the
packet if it wants.  If I had to guess I think it might just be that
Gabe hasn't been testing with Ruby...

(That said, looking over the Ruby code with fresh eyes after not
having thought about it for a while, I think the Ruby code might be
overcomplicated... instead of only tracking the m5 packet pointer in
the ruby request object, then using senderState to look up the port
based on the packet, why don't we just keep both the packet and
the port in the ruby request?)

At a high level I think part of the issue with the sendSplitData()
code is that buildSplitPacket doesn't return a pointer to the "big"
packet, so the only way to access it is via the senderState objects of
the sub-packets.  I expect that with some thought we could restructure
the code to be a little cleaner, but Joel's idea of holding on to the
original senderState pointers on the stack seems like a reasonable
interim solution.

Steve

On Tue, Aug 17, 2010 at 4:25 PM, Gabriel Michael Black
 wrote:
> This code was worked on a few times by different people, originally by me.
> When I wrote the first version, sender state wasn't chained together like I
> think it is now. I believe my version didn't do anything with sender state
> until it got the packet back, but I don't quite remember. You changed the
> sender state stuff, right Steve? What's the rule as far as when senderstate
> can change? Is the code checking it wrong, or the code changing it? This
> would likely affect ARM as well since it can have looser alignment
> requirements and might do a split load.
>
> Gabe
>
> Quoting Joel Hestness :
>
>> Hi,
>>  I am currently looking at the sendSplitData function in TimingSimpleCPU
>> (cpu/simple/timing.cc:~307), and I'm encountering a problem with the
>> packet
>> sender states when running with Ruby.  After the call to buildSplitPacket,
>> pkt1 and pkt2 have senderState type SplitFragmentSenderState.  However,
>> with
>> Ruby enabled, the call to handleReadPacket sends the packet to a RubyPort,
>> and in RubyPort::M5Port::recvTiming (mem/ruby/system/RubyPort.cc:~173), a
>> new senderState is pushed into the packet that has type SenderState (note
>> that the old senderState is saved in the new senderState; after the packet
>> transfer, Ruby restores the old senderState).  When the stack unwinds back
>> to sendSplitData, the dynamic_cast after handleReadPacket fails because of
>> the type difference.
>>  It looks like the senderState variable is used elsewhere as a stack to
>> store data while the packet traverses from source to destination and on
>> the
>> way back as a response, which makes sense.  I'm wondering why the
>> clearFromParent call needs to happen in sendSplitData, since it seems like
>> it should happen in completeDataAccess when cleaning up the packets.
>>  Thanks,
>>  Joel
>>
>> PS.  In sendSplitData after handleReadPacket(pkt2), it looks like there is
>> a
>> bug with the dynamic_cast and clearFromParent since the cast is called on
>> pkt1->senderState.  This doesn't affect correctness, but it does leave
>> references that affect deletion of the packets.  Is that correct?
>>
>> --
>>  Joel Hestness
>>  PhD Student, Computer Architecture
>>  Dept. of Computer Science, University of Texas - Austin
>>  http://www.cs.utexas.edu/~hestness
>>
>
>
> ___
> m5-dev mailing list
> m5-dev@m5sim.org
> http://m5sim.org/mailman/listinfo/m5-dev
>


Re: [m5-dev] TimingSimpleCPU, x86: sendSplitData packet sender states

2010-08-17 Thread Gabriel Michael Black
This code was worked on a few times by different people, originally by  
me. When I wrote the first version, sender state wasn't chained  
together like I think it is now. I believe my version didn't do  
anything with sender state until it got the packet back, but I don't  
quite remember. You changed the sender state stuff, right Steve?  
What's the rule as far as when senderstate can change? Is the code  
checking it wrong, or the code changing it? This would likely affect  
ARM as well since it can have looser alignment requirements and might  
do a split load.


Gabe

Quoting Joel Hestness :


Hi,
  I am currently looking at the sendSplitData function in TimingSimpleCPU
(cpu/simple/timing.cc:~307), and I'm encountering a problem with the packet
sender states when running with Ruby.  After the call to buildSplitPacket,
pkt1 and pkt2 have senderState type SplitFragmentSenderState.  However, with
Ruby enabled, the call to handleReadPacket sends the packet to a RubyPort,
and in RubyPort::M5Port::recvTiming (mem/ruby/system/RubyPort.cc:~173), a
new senderState is pushed into the packet that has type SenderState (note
that the old senderState is saved in the new senderState; after the packet
transfer, Ruby restores the old senderState).  When the stack unwinds back
to sendSplitData, the dynamic_cast after handleReadPacket fails because of
the type difference.
  It looks like the senderState variable is used elsewhere as a stack to
store data while the packet traverses from source to destination and on the
way back as a response, which makes sense.  I'm wondering why the
clearFromParent call needs to happen in sendSplitData, since it seems like
it should happen in completeDataAccess when cleaning up the packets.
  Thanks,
  Joel

PS.  In sendSplitData after handleReadPacket(pkt2), it looks like there is a
bug with the dynamic_cast and clearFromParent since the cast is called on
pkt1->senderState.  This doesn't affect correctness, but it does leave
references that affect deletion of the packets.  Is that correct?

--
  Joel Hestness
  PhD Student, Computer Architecture
  Dept. of Computer Science, University of Texas - Austin
  http://www.cs.utexas.edu/~hestness






Re: [m5-dev] TimingSimpleCPU, x86: sendSplitData packet sender states

2010-08-17 Thread Joel Hestness
I just realized that the clearFromParent call is used for tracking which of
the packets have successfully sent, so that if the send port is busy, it can
retry them when a recvRetry is received later.  It appears that maybe a
better solution to this is to hold a pointer on the stack in sendSplitData
to the senderState that may eventually call clearFromParent rather than
trying to get the senderState back out after the call to handleReadPacket.
  Does that sound reasonable?
  Thanks,
  Joel

On Tue, Aug 17, 2010 at 3:11 PM, Joel Hestness wrote:

> Hi,
>   I am currently looking at the sendSplitData function in TimingSimpleCPU
> (cpu/simple/timing.cc:~307), and I'm encountering a problem with the packet
> sender states when running with Ruby.  After the call to buildSplitPacket,
> pkt1 and pkt2 have senderState type SplitFragmentSenderState.  However, with
> Ruby enabled, the call to handleReadPacket sends the packet to a RubyPort,
> and in RubyPort::M5Port::recvTiming (mem/ruby/system/RubyPort.cc:~173), a
> new senderState is pushed into the packet that has type SenderState (note
> that the old senderState is saved in the new senderState; after the packet
> transfer, Ruby restores the old senderState).  When the stack unwinds back
> to sendSplitData, the dynamic_cast after handleReadPacket fails because of
> the type difference.
>   It looks like the senderState variable is used elsewhere as a stack to
> store data while the packet traverses from source to destination and on the
> way back as a response, which makes sense.  I'm wondering why the
> clearFromParent call needs to happen in sendSplitData, since it seems like
> it should happen in completeDataAccess when cleaning up the packets.
>   Thanks,
>   Joel
>
> PS.  In sendSplitData after handleReadPacket(pkt2), it looks like there is
> a bug with the dynamic_cast and clearFromParent since the cast is called on
> pkt1->senderState.  This doesn't affect correctness, but it does leave
> references that affect deletion of the packets.  Is that correct?
>
> --
>   Joel Hestness
>   PhD Student, Computer Architecture
>   Dept. of Computer Science, University of Texas - Austin
>   http://www.cs.utexas.edu/~hestness
>



-- 
  Joel Hestness
  PhD Student, Computer Architecture
  Dept. of Computer Science, University of Texas - Austin
  http://www.cs.utexas.edu/~hestness


[m5-dev] TimingSimpleCPU, x86: sendSplitData packet sender states

2010-08-17 Thread Joel Hestness
Hi,
  I am currently looking at the sendSplitData function in TimingSimpleCPU
(cpu/simple/timing.cc:~307), and I'm encountering a problem with the packet
sender states when running with Ruby.  After the call to buildSplitPacket,
pkt1 and pkt2 have senderState type SplitFragmentSenderState.  However, with
Ruby enabled, the call to handleReadPacket sends the packet to a RubyPort,
and in RubyPort::M5Port::recvTiming (mem/ruby/system/RubyPort.cc:~173), a
new senderState is pushed into the packet that has type SenderState (note
that the old senderState is saved in the new senderState; after the packet
transfer, Ruby restores the old senderState).  When the stack unwinds back
to sendSplitData, the dynamic_cast after handleReadPacket fails because of
the type difference.
  It looks like the senderState variable is used elsewhere as a stack to
store data while the packet traverses from source to destination and on the
way back as a response, which makes sense.  I'm wondering why the
clearFromParent call needs to happen in sendSplitData, since it seems like
it should happen in completeDataAccess when cleaning up the packets.
  Thanks,
  Joel

PS.  In sendSplitData after handleReadPacket(pkt2), it looks like there is a
bug with the dynamic_cast and clearFromParent since the cast is called on
pkt1->senderState.  This doesn't affect correctness, but it does leave
references that affect deletion of the packets.  Is that correct?

-- 
  Joel Hestness
  PhD Student, Computer Architecture
  Dept. of Computer Science, University of Texas - Austin
  http://www.cs.utexas.edu/~hestness