I'm pretty sure I found this issue because that address (8cabc0) ended up
having stale data. I might still have the trace; if I do, I'll post it.
Ali




On Sep 8, 2011, at 11:50 PM, Steve Reinhardt wrote:

> It seems odd, but since neither one is dirty, it's not necessarily a
> problem.  Basically no request can come to the L2 without snooping the L1
> first (if it comes in from the CPU side, the L1/L2 bus will snoop the L1
> before the request hits the L2; if it comes in from the mem side, the upward
> "express" snoop in the L2's handleSnoop() call will check the L1 first, as
> long as forwardSnoops is set on the L2).  So the exclusive flag on the L2
> just means that none of the L2's peers (i.e., other L2s) have a copy, not
> that no other cache in the system has a copy.
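Steve's ordering argument can be sketched as a toy model. Only the forwardSnoops and handleSnoop names come from his mail; everything else here is illustrative and is not gem5's actual cache implementation:

```cpp
#include <cassert>
#include <vector>

// Toy model of the ordering described above: a snoop reaching the L2
// from the memory side is forwarded ("express" snooped) to the L1s
// first, as long as forwardSnoops is set. So an L2 exclusive flag only
// says no *peer* L2 holds the block, not that no L1 above does.
struct ToyL1 {
    bool hasBlock = false;
    bool wasSnooped = false;
    bool snoop() { wasSnooped = true; return hasBlock; }
};

struct ToyL2 {
    bool forwardSnoops = true;
    std::vector<ToyL1 *> upstream;

    // Returns true if some upper-level cache responded to the snoop.
    bool handleSnoop() {
        if (!forwardSnoops)
            return false;
        bool upperHit = false;
        for (ToyL1 *l1 : upstream)  // snoop the L1s before acting locally
            upperHit |= l1->snoop();
        return upperHit;
    }
};
```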
> 
> There may be a hole in my analysis that means that it is a problem, but I'm
> curious what that would be... which is why I asked where things tangibly
> went wrong (e.g., someone reading a stale value), since I'm curious how this
> situation could lead to that.
> 
> Steve
> 
> On Thu, Sep 8, 2011 at 8:38 PM, Ali Saidi <[email protected]> wrote:
> 
>> What I thought was wrong was that blocks marked valid, writeable, and
>> readable appear to exist in both the L1 and the L2. When the L1 got the
>> block from the L2, the L2 should have sent it shared or invalidated its
>> copy. Ultimately, the L1 has a valid, writable copy of the block that it
>> writes back to the L2, and it gets a hit on the writeback. If it has write
>> permission, should that ever happen?
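The invariant Ali is appealing to can be written down as a small check. The state encoding mirrors the trace's "state 7" (valid + readable + writable), but the names and types are hypothetical, not gem5's:

```cpp
#include <cassert>

// Hypothetical encoding of the per-block permission bits seen in the
// trace ("state 7" = valid + readable + writable); not gem5's types.
struct BlkState {
    bool valid = false;
    bool readable = false;
    bool writable = false;
};

// The single-writer invariant: at most one cache level may hold a
// writable copy of a block at any time. If it holds, a writeback from
// the L1 should never hit a still-writable copy sitting in the L2.
bool singleWriterOk(const BlkState &l1, const BlkState &l2) {
    return !(l1.valid && l1.writable && l2.valid && l2.writable);
}
```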
>> 
>> Ali
>> 
>> 
>> On Sep 8, 2011, at 6:05 PM, Steve Reinhardt wrote:
>> 
>>> Hi Ali,
>>> 
>>> I realize this was a while ago, but can you elaborate on the actual error
>>> you ran into in this circumstance?  I agree that the trace below may seem
>>> superficially odd to the casual observer, but it's not clear to me how this
>>> behavior leads to a wrong answer; it looks to me like at the end of the
>>> trace the modified block from the L1 has been successfully written back to
>>> the L2, where (assuming the L2 block is marked as dirty) it should be
>>> residing happily until it's either written back or accessed again.
>>> 
>>> Thanks,
>>> 
>>> Steve
>>> 
>>> On Tue, Aug 30, 2011 at 11:26 AM, Ali Saidi <[email protected]> wrote:
>>> 
>>>> 
>>>> Hi Everyone,
>>>> 
>>>> 
>>>> 
>>>> I've run into yet another issue with an L2 prefetcher doing something
>>>> weird. I'm hoping that someone (mostly Steve) could tell me what is
>>>> supposed to happen in this case to fix it. Obligatory trace below:
>>>> 
>>>> <Prefetcher inserts address 0x8cabc0 into prefetch queue>
>>>> 
>>>> 496726046000: system.cpu.dcache: ReadReq 8cabe0 miss   <--- Miss from
>>>> CPU for address
>>>> 
>>>> 496726048000: system.tol2bus: recvTiming: src 4 dst -1 ReadReq 0x8cabc0
>>>> <--- Address going to L2
>>>> 
>>>> 496726048000: system.l2: ReadReq 8cabc0 miss  <--- L2 sees the miss from
>>>> the dcache above
>>>> 
>>>> 496726048000: system.l2-pf: hit: PC 32f0 blk_addr 8cabc0 stride 1856
>>>> (change), conf 0  <--- the prefetcher found one, yay!
>>>> 
>>>> 496726060000: system.membus: recvTiming: src 7 dst -1 ReadReq 0x8cabc0
>>>> <--- This is the prefetch leaving the L2
>>>> 
>>>> 496726060000: system.physmem: Read of size 64 on address 0x8cabc0
>>>> 
>>>> 496726114000: system.membus: recvTiming: src 6 dst 7 ReadResp 0x8cabc0
>>>> <--- Data returning from memory to L2
>>>> 
>>>> 496726114000: system.l2: Handling response to 8cabc0
>>>> 
>>>> 496726114000: system.l2: Block for addr 8cabc0 being updated in Cache
>>>> 
>>>> 496726114000: system.l2: Block addr 8cabc0 moving from state 0 to 7
>>>> <---- Block marked as valid+readable+writeable in L2
>>>> 
>>>> 496726133892: system.tol2bus: recvTiming: src 0 dst 4 ReadResp 0x8cabc0
>>>> <---- Block sent to cache above
>>>> 
>>>> 496726133892: system.cpu.dcache: Handling response to 8cabc0
>>>> 
>>>> 496726133892: system.cpu.dcache: Block for addr 8cabc0 being updated in Cache
>>>> 
>>>> 496726133892: system.cpu.dcache: replacement: replacing 556bc0 with 8cabc0: clean
>>>> 
>>>> 496726133892: system.cpu.dcache: Block addr 8cabc0 moving from state 0 to 7
>>>> <--- Also seems to have been allocated as valid+readable+writeable in L1!!!
>>>> 
>>>> 496741134000: system.cpu.dcache: replacement: replacing 8cabc0 with 536bc0: writeback
>>>> 
>>>> 496741136000: system.coretol2buses: recvTiming: src 4 dst -1 Writeback 0x8cabc0 BUSY
>>>> 
>>>> 496741139001: system.coretol2buses: recvTiming: src 4 dst -1 Writeback 0x8cabc0 BUSY
>>>> 
>>>> 496741140000: system.coretol2buses: recvTiming: src 4 dst -1 Writeback 0x8cabc0
>>>> 
>>>> 496741140000: system.l2: Writeback 8cabc0 hit  <-- Should it ever be
>>>> valid to get a hit on a writeback?
>>>> 
>>>> 
>>>> 
>>>> So... the question now is: what went horribly wrong, and how should it be
>>>> fixed? I can think of a couple of possibilities, but it seems like what is
>>>> happening is that the MSHR for the prefetch isn't marked as InService until
>>>> it issues, and the read request from the dcache comes in right before that
>>>> happens. Since the MSHR isn't in service, we short-circuit some of the MSHR
>>>> handling that would mark it so that either a copy doesn't get put in the L2
>>>> or both copies end up shared. Thoughts?
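The suspected race can be illustrated with a toy model. The names below (ToyMshr, hasDemandTarget, l2MayStayWritable) are made up for illustration, not gem5's MSHR API; the check encodes one plausible fix, namely that a fill absorbed by a demand target should hand exclusivity upward rather than leave the L2 writable too:

```cpp
#include <cassert>

// Illustrative sketch of the suspected race: a prefetch MSHR that is
// not yet inService absorbs the demand ReadReq, and on fill both levels
// end up writable. All names here are hypothetical.
struct ToyMshr {
    bool isPrefetch = false;
    bool inService = false;
    bool hasDemandTarget = false;
};

// Decide whether the L2 may retain write permission when the fill
// arrives. If a demand target was attached, the block goes up to the
// L1 writable, so the L2 must not stay writable as well.
bool l2MayStayWritable(const ToyMshr &m) {
    return !m.hasDemandTarget;
}
```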
>>>> 
>>>> 
>>>> 
>>>> Ali
>>>> 
>>>> 
>>>> 
>>>> _______________________________________________
>>>> gem5-dev mailing list
>>>> [email protected]
>>>> http://m5sim.org/mailman/listinfo/gem5-dev
>>>> 
>>> 
>> 
>> 
> 

