OK, thanks for clarifying that.
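To check my understanding, the issue logic described in the quoted messages below can be sketched roughly like this. This is a plain-Python toy, not gem5 code; every name in it (Cache, next_request, miss_queue, ...) is made up for illustration:

```python
# A rough sketch (NOT gem5 code) of the issue logic described below:
# demand misses waiting to issue always go first; once none is waiting,
# a prefetch can be issued even while earlier demand misses are still
# outstanding, provided the MSHR queue is not full and the prefetch
# address is not already covered by a pending miss or write (Ali's
# point about wasted bandwidth).

class Cache:
    def __init__(self, mshr_capacity):
        self.miss_queue = []       # demand misses waiting to be issued
        self.outstanding = set()   # addresses of issued, uncompleted misses
        self.write_buffer = set()  # addresses of pending writes
        self.pf_queue = []         # addresses computed by the prefetcher
        self.mshr_capacity = mshr_capacity

    def next_request(self):
        """Pick the next request to put on the bus, demand misses first."""
        if self.miss_queue:                       # a demand miss is ready
            return ('demand', self.miss_queue.pop(0))
        while self.pf_queue:                      # otherwise try a prefetch
            if len(self.outstanding) >= self.mshr_capacity:
                return None                       # mshrQueue is full
            addr = self.pf_queue.pop(0)
            if addr in self.outstanding or addr in self.write_buffer:
                continue                          # redundant prefetch: skip it
            self.outstanding.add(addr)            # allocate an MSHR for it
            return ('prefetch', addr)
        return None
```

This only tries to capture the priority order from the thread; in gem5 itself the actual decision lives at the end of Cache<TagStore>::getNextMSHR() and in nextMSHRReadyTime() in src/mem/cache/cache_impl.hh.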

On 2/21/12, Steve Reinhardt <[email protected]> wrote:
> On Tue, Feb 21, 2012 at 11:17 AM, Mahmood Naderan
> <[email protected]>wrote:
>
>> >Once the demand miss has been issued, a
>> >prefetch can be issued whether or not the demand miss has completed.
>>
>> In the current implementation,
>>
>> http://repo.gem5.org/gem5/file/2629f0b99e8d/src/mem/cache/cache_impl.hh#l1426
>>
>> a prefetch is issued *only* when miss_mshr and write_mshr are empty.
>> So what I understand is that a prefetch is issued *only* when there
>> is *no* miss or write pending.
>> In other words, we have to be sure that all misses and writes are
>> done; only then, when the cache is idle, is a prefetch issued.
>>
>
> You're misunderstanding that code.  miss_mshr and write_mshr are obtained
> from mshrQueue and writeBuffer respectively via getNextMSHR(), which only
> returns ready MSHRs.  So they will be null if there are no MSHRs ready to
> be issued, not if the structures are empty, just as I described.
>
>
>>
>> In timingAccess()
>>
>> http://repo.gem5.org/gem5/file/2629f0b99e8d/src/mem/cache/cache_impl.hh#l1426
>> upon a miss, the prefetcher is only notified. notify() triggers the
>> prefetcher to calculate some lookahead addresses and put them in the
>> pf queue.
>>
>> In short, on a miss the prefetcher is notified and some addresses are
>> calculated; however, they are not issued until all misses are resolved.
>>
>
>
>>
>>
>> > it will issue a prefetch as long as one is available and the
>> > mshrQueue is not full
>> As I said, the check there ensures that there is no pending miss.
>> Can you explain more?
>>
>>
>> > As I said, it's possible that there is a bug and this is not working
>> > as we expect it to
>> I tried to modify that, but I don't have a realistic test case in
>> mind. Under what circumstances is a prefetch issued?
>>
>> On 2/21/12, Steve Reinhardt <[email protected]> wrote:
>> > When I said prefetches "are only issued when there are no demand misses
>> > outstanding", I meant they are only issued when there is no demand miss
>> > also waiting to be issued.  Once the demand miss has been issued, a
>> > prefetch can be issued whether or not the demand miss has completed.
>> >
>> > More specifically, whenever the cache issues a request, it checks to
>> > see if it should re-request the bus for another request.  This all
>> > happens at the end of Cache<TagStore>::MemSidePort::sendPacket().
>> > Note that in spite of the comment about factoring in prefetch
>> > requests (I'm not sure where that came from), they do get considered
>> > via nextMSHRReadyTime().  So if there's no demand miss waiting but
>> > there is a prefetch available, the cache will re-request the bus.
>> >
>> > Then when the cache is granted the bus again, if there are no demand
>> > misses, it will issue a prefetch as long as one is available and the
>> > mshrQueue is not full (this is the code at the end of getNextMSHR()).
>> >
>> > As I said, it's possible that there is a bug and this is not working
>> > as we expect it to.  Feel free to trace through it in the debugger or
>> > add more DPRINTFs or do whatever you would like to verify its
>> > operation, and let us know if you find something suspicious.
>> >
>> > Steve
>> >
>> > On Tue, Feb 21, 2012 at 6:01 AM, Ali Saidi <[email protected]> wrote:
>> >
>> >> It issues a prefetch if the address it's trying to prefetch doesn't
>> >> exist in the mshr or write queue. You generally don't want to
>> >> prefetch something that you're currently waiting for; that is wasted
>> >> bandwidth.
>> >>
>> >> Ali
>> >>
>> >> Sent from my ARM powered device
>> >>
>> >> On Feb 21, 2012, at 1:49 AM, Mahmood Naderan
>> >> <[email protected]> wrote:
>> >>
>> >> I don't know if this implementation in gem5 is realistic. If
>> >> prefetching is done only when there is no entry in the MSHR or write
>> >> queue, then an application that suffers a high number of misses
>> >> never gets an opportunity to issue a prefetch.
>> >>
>> >> However, we want to prefetch precisely to reduce the number of
>> >> misses.
>> >>
>> >> In other words, with the current implementation, prefetching is only
>> >> issued for applications that already have few misses.
>> >> --
>> >> // Naderan *Mahmood;
>> >>
>> >>
>> >> On Mon, Feb 20, 2012 at 10:14 PM, Steve Reinhardt
>> >> <[email protected]> wrote:
>> >>
>> >>> Hmm... the prefetcher should never cause the program to get the
>> >>> wrong answer.  Are you using the latest code from the gem5
>> >>> repository?
>> >>>
>> >>> As far as the number of prefetches, they are only issued when there
>> >>> are no demand misses outstanding.  If your program is saturating
>> >>> the memory bus with misses then there may just not be an
>> >>> opportunity to generate many prefetches.  Of course, there could
>> >>> also be a bug.  You could check the code at the end of
>> >>> Cache<TagStore>::getNextMSHR() to see if the prefetcher is being
>> >>> called to provide prefetch addresses or not, and
>> >>> check Cache<TagStore>::nextMSHRReadyTime() to see if the prefetcher
>> >>> is appropriately signaling that it has prefetches to issue.
>> >>>
>> >>> Steve
>> >>>
>> >>> On Mon, Feb 20, 2012 at 9:37 AM, Mitchelle Rasquinha <
>> >>> [email protected]> wrote:
>> >>>
>> >>>>
>> >>>>
>> >>>> I am trying to use the stride prefetcher and I see a similar
>> >>>> problem with the default configuration. On increasing the prefetch
>> >>>> degree above 2, the result of the benchmark is incorrect. What are
>> >>>> the correct values for these knobs?
>> >>>>
>> >>>>  prefetch_on_access = Param.Bool(True,
>> >>>>         "notify the hardware prefetcher on every access (not just
>> >>>> misses)")
>> >>>>  prefetcher_size = Param.Int(100,
>> >>>>         "Number of entries in the hardware prefetch queue")
>> >>>>  prefetch_past_page = Param.Bool(False,
>> >>>>         "Allow prefetches to cross virtual page boundaries")
>> >>>>  prefetch_serial_squash = Param.Bool(False,
>> >>>>         "Squash prefetches with a later time on a subsequent miss")
>> >>>>  prefetch_degree = Param.Int(1,
>> >>>>         "Degree of the prefetch depth")
>> >>>>  prefetch_latency = Param.Latency(5 * Self.latency,
>> >>>>         "Latency of the prefetcher")
>> >>>>  prefetch_policy = Param.Prefetch('stride',
>> >>>>         "Type of prefetcher to use")
>> >>>>  prefetch_use_cpu_id = Param.Bool(True,
>> >>>>         "Use the CPU ID to separate calculations of prefetches")
>> >>>>  prefetch_data_accesses_only = Param.Bool(True,
>> >>>>         "Only prefetch on data not on instruction accesses")
>> >>>>
>> >>>>
>> >>>>
>> >>>> _______________________________________________
>> >>>> gem5-users mailing list
>> >>>> [email protected]
>> >>>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>> >>>>
>> >>>
>> >>>
>> >>
>> >>
>> >>
>> >>
>> >
>>
>>
>> --
>> // Naderan *Mahmood;
>>
>


--
// Naderan *Mahmood;
