-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3315/#review8010
-----------------------------------------------------------


Great work! Thanks for coding this up.

I have some questions regarding the split and naming, but overall it looks like 
a good start.

I think the actual resp queue should have a "reserve" mechanism instead of 
checking the total buffer occupancy. That way we can ensure that it never 
overflows.
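
To make this concrete, here is a rough sketch of the reserve/release flow I 
have in mind (all names are illustrative, not from the patch):

    // Sketch of a slot-reservation scheme for the response queue.
    // Assumes <deque> and gem5's PacketPtr; names are illustrative.
    class RespQueue
    {
      private:
        const unsigned capacity;     // fixed number of response slots
        unsigned reserved;           // slots promised to in-flight reads
        std::deque<PacketPtr> queue; // completed responses

      public:
        RespQueue(unsigned cap) : capacity(cap), reserved(0) {}

        // Request path: claim a slot before accepting a read. If this
        // fails, the port should force a retry rather than accept.
        bool tryReserve()
        {
            if (reserved + queue.size() >= capacity)
                return false;
            ++reserved;
            return true;
        }

        // Read completion: the reserved slot becomes an occupied one,
        // so the queue can never overflow by construction.
        void push(PacketPtr pkt)
        {
            assert(reserved > 0);
            --reserved;
            queue.push_back(pkt);
        }

        // Response sent upstream: the slot is freed for a new reserve.
        PacketPtr pop()
        {
            PacketPtr pkt = queue.front();
            queue.pop_front();
            return pkt;
        }
    };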


src/mem/DRAMCtrl.py (line 85)
<http://reviews.gem5.org/r/3315/#comment6950>

    It would be good to be clearer about what this entails. The response 
buffer should normally be large enough to hold the total number of outstanding 
requests. In the gem5 implementation, however, requests also remain in the 
queue while they are being served by the backend and the DRAM itself, so the 
buffer needs to account for that as well. Where did the number 32 come from?



src/mem/dram_ctrl.hh (line 520)
<http://reviews.gem5.org/r/3315/#comment6951>

    What buffers? When?



src/mem/dram_ctrl.hh (line 601)
<http://reviews.gem5.org/r/3315/#comment6952>

    I am not sure about the backend wording here.
    
    Normally in a DRAM controller the "backend" would be the DFI interface.
    
    The response queue is part of the frontend.
    
    Are we not always guaranteed to free up one space when this is called? If 
not, why not?



src/mem/dram_ctrl.hh (line 742)
<http://reviews.gem5.org/r/3315/#comment6953>

    I am not a massive fan of the naming (see above).
    
    What is the split now? Isn't the respQueue there only to model the 
pipeline latency of the backend and the DRAM? If so, I would suggest renaming 
it pipelineQueue or similar, and renaming the backendQueue to respQueue.
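
To make that split concrete, roughly what I would expect (declarations 
illustrative only):

    // Illustrative declarations for the proposed split: the pipeline
    // queue only models the fixed latency through the backend and the
    // DRAM, while the response queue is the finite buffer of responses
    // waiting on the slave port.
    std::deque<DRAMPacket*> pipelineQueue;
    std::deque<PacketPtr>   respQueue;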



src/mem/dram_ctrl.cc (line 292)
<http://reviews.gem5.org/r/3315/#comment6954>

    canEnqueue or perhaps canAccept? or bufferSpaceAvailable?



src/mem/dram_ctrl.cc 
<http://reviews.gem5.org/r/3315/#comment6955>

    We actually need to do a functional access on all the queues here. This 
goes beyond this patch, but it's a good thing you caught it.
    
    The write queue needs to be updated with any write data. The fact that we 
ignore read responses should not cause a problem.
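
Roughly what I am thinking of, assuming the queue entries keep their original 
PacketPtr in a pkt member (sketch only, not a drop-in implementation):

    // Sketch: let queued writes satisfy a functional access before
    // falling back to the memory array. Assumes writeQueue holds
    // DRAMPacket* entries with a pkt member.
    void
    DRAMCtrl::recvFunctional(PacketPtr pkt)
    {
        // A functional read must observe data still sitting in the
        // write queue, so check those packets first.
        for (auto const &dram_pkt : writeQueue) {
            if (pkt->checkFunctional(dram_pkt->pkt))
                return; // fully satisfied by queued write data
        }

        // Otherwise serve the access from the memory array itself.
        functionalAccess(pkt);
    }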


- Andreas Hansson


On Feb. 8, 2016, 7:58 p.m., Joel Hestness wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3315/
> -----------------------------------------------------------
> 
> (Updated Feb. 8, 2016, 7:58 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> -------
> 
> Changeset 11331:8492ca5ea301
> ---------------------------
> mem: [DRAFT] Make DRAMCtrl response queue finite
> 
> The DRAMCtrl had an effectively infinite response queue by using the
> QueuedSlavePort to manage responses. Unfortunately, for high-bandwidth
> applications, this queue causes unrealistic bloat, quickly filling with more
> than 100 packets.
> 
> To fix this issue, change the response port from a QueuedSlavePort back to a
> standard SlavePort, and implement the appropriate control flow. This includes
> adding a backend queue, in which slots are reserved when packets arrive on
> the DRAMCtrl's request path.
> 
> 
> Diffs
> -----
> 
>   src/mem/dram_ctrl.cc 10647f5d0f7f 
>   src/mem/DRAMCtrl.py 10647f5d0f7f 
>   src/mem/dram_ctrl.hh 10647f5d0f7f 
> 
> Diff: http://reviews.gem5.org/r/3315/diff/
> 
> 
> Testing
> -------
> 
> Tested running with gem5's memtest.py:
> 
> ../build/X86/gem5.opt --debug-flag=DRAM --outdir=$outdir 
> ../configs/example/memtest.py -u 100
> 
> and a couple hundred varying GPU workloads in gem5-gpu.
> 
> 
> Thanks,
> 
> Joel Hestness
> 
>

_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev