Aren't there other NO_ACCESS references (in other ISAs) that call
initiateAcc() but not completeAcc()?  If so, then that by itself doesn't
seem like justification to avoid solution (2). If not, then I suppose I
agree with you.

Steve

On Tue, Nov 2, 2010 at 12:54 PM, Ali Saidi <[email protected]> wrote:

> Unfortunately, the stats change in all cases. For (1) the instructions no
> longer have IsMemRef set, which means num_refs changes for all CPUs, and
> that causes some minor changes in the O3 stats as well. With (2) they're
> half-baked: the models call initiateAcc(), but it doesn't actually initiate
> the access, so completeAcc() is never called and thus they aren't counted
> as part of the instruction count. (2) isn't ideal, since half-calling
> initiateAcc() might lead to some problems down the road.
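>
> For concreteness, a rough sketch of the half-baked shape in (2) -- the class
> name and signatures below are illustrative placeholders, not the actual m5
> ISA-description output:
>
> // The decoder still marks the instruction IsMemRef/IsDataPrefetch, so the
> // CPU models treat it as a memop and call initiateAcc()...
> Fault
> SwPrefetch::initiateAcc(ExecContext *xc, Trace::InstRecord *trace) const
> {
>     // ...but no request is ever sent to the memory system, so nothing
>     // comes back to trigger completeAcc(), and the completion-side
>     // instruction count never sees this instruction.
>     return NoFault;
> }
>
> // Never reached for these instructions.
> Fault
> SwPrefetch::completeAcc(PacketPtr pkt, ExecContext *xc,
>                         Trace::InstRecord *trace) const
> {
>     return NoFault;
> }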
>
> I'll post a diff today.
>
> Ali
>
>
>
>
>
> On Tue, 2 Nov 2010 12:18:08 -0700, Steve Reinhardt <[email protected]>
> wrote:
>
> Do you mean (1) or (2)?  I thought that with (1) the stats would not
> change.
>
> My bias would be (2), but (1) seems livable enough.  In either case it
> would be nice to put in a warn_once() if we don't already have one so it's
> obvious that SW prefetches are being ignored.
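>
> A minimal sketch of the kind of thing I mean, assuming the warn_once()
> macro from base/misc.hh (the class name and call site are just
> placeholders):
>
> #include "base/misc.hh"
>
> Fault
> SwPrefetch::execute(ExecContext *xc, Trace::InstRecord *trace) const
> {
>     // Emitted once per simulation so it's obvious that SW prefetches are
>     // being dropped rather than simulated.
>     warn_once("software prefetch instructions are treated as nops");
>     return NoFault;
> }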
>
> Steve
>
> On Sun, Oct 31, 2010 at 9:45 AM, Ali Saidi <[email protected]> wrote:
>
>> Any input? Otherwise I'm going with (1) and have new stats to go with it.
>>
>> Ali
>>
>> On Oct 27, 2010, at 12:02 AM, Ali Saidi wrote:
>>
>> > Hmmm... three emails when one should have done. There are three options:
>> > 1. Make them actual no-ops (e.g. stop marking them as mem refs, data
>> > prefetch, etc.). The instruction count will stay the same here. The
>> > functionality will stay the same. The instructions will be further away
>> > from working -- not that I think anyone will make them work in the future.
>> > 2. Leave them in their half-baked memop state, where they're memops that
>> > never call read() and don't write back anything, so the instruction count
>> > is different since the inst count gets incremented after the op completes.
>> > This is what I currently have.
>> > 3. Make them actually work. I've tried to muck with this without success
>> > for a while now.
>> >
>> > Ali
>> >
>> >
>> >
>> > On Oct 26, 2010, at 11:58 PM, Ali Saidi wrote:
>> >
>> >> The other portion of this is that when I try to make them act like
>> >> loads, but not actually write a register, I break the O3 CPU in ways
>> >> that 4 hours of debugging has not been able to explain.
>> >>
>> >> Ali
>> >>
>> >> On Oct 26, 2010, at 10:42 PM, Ali Saidi wrote:
>> >>
>> >>> The count gets smaller because they don't actually access memory; they
>> >>> never complete and therefore never increment the instruction count.
>> >>>
>> >>> Ali
>> >>>
>> >>> On Oct 26, 2010, at 9:53 PM, Steve Reinhardt wrote:
>> >>>
>> >>>> I vote for updating the stats... it's really wrong that we ignored them
>> >>>> previously.
>> >>>>
>> >>>> On Tue, Oct 26, 2010 at 5:47 PM, Ali Saidi <[email protected]> wrote:
>> >>>> Ok. So next question. With the CPU model treating prefetches as normal
>> >>>> memory instructions, the # of instructions changes for the timing
>> >>>> simple CPU because the inst count stat is incremented in
>> >>>> completeAccess(). So, one option is to update the stats to reflect the
>> >>>> new count. The other option would be to stop marking the prefetch
>> >>>> instructions as memory ops, in which case they would just execute as
>> >>>> nops. Any thoughts?
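>> >>>>
>> >>>> To make that concrete, the shape of the counting is roughly this
>> >>>> (illustrative pseudo-code, not the actual TimingSimpleCPU source;
>> >>>> member names are placeholders):
>> >>>>
>> >>>> void
>> >>>> TimingSimpleCPU::completeAccess(PacketPtr pkt)
>> >>>> {
>> >>>>     // ... write back the result of the access ...
>> >>>>
>> >>>>     // The instruction-count stat only moves here, on the completion
>> >>>>     // side, so a memop whose access is never actually issued (like
>> >>>>     // the current prefetches) is never counted.
>> >>>>     numInst++;
>> >>>>
>> >>>>     advanceInst(fault);
>> >>>> }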
>> >>>>
>> >>>> Ali
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> On Oct 24, 2010, at 12:14 AM, Steve Reinhardt wrote:
>> >>>>
>> >>>>> No, we've lived with Alpha prefetches the way they are for long enough
>> >>>>> now; I don't see that fixing them buys us that much.
>> >>>>>
>> >>>>> Steve
>> >>>>>
>> >>>>> On Sat, Oct 23, 2010 at 6:13 PM, Ali Saidi <[email protected]> wrote:
>> >>>>>> Sounds goo to me. I'll take a look at what I need to do to
>> implement it.  Any arguments with the Alpha prefetch instructions staying
>> nops?
>> >>>>>>
>> >>>>>> Ali
>> >>>>>>
>> >>>>>> On Oct 22, 2010, at 6:52 AM, Steve Reinhardt wrote:
>> >>>>>>
>> >>>>>>>> On Tue, Oct 19, 2010 at 11:14 PM, Ali Saidi <[email protected]> wrote:
>> >>>>>>>>
>> >>>>>>>> I think the prefetch should be sent to the TLB unconditionally, and
>> >>>>>>>> then if the prefetch faults the CPU should toss the instruction,
>> >>>>>>>> rather than the TLB returning no fault and the CPU, I guess,
>> >>>>>>>> checking whether the PA is set?
>> >>>>>>>>
>> >>>>>>>> I agree that we should override the fault in the CPU. Are we
>> >>>>>>>> violently agreeing?
>> >>>>>>>
>> >>>>>>> OK, it's becoming a little clearer to me now.  I think we're agreeing
>> >>>>>>> that the TLB should be oblivious to whether an access is a prefetch
>> >>>>>>> or not, so that's a start.
>> >>>>>>>
>> >>>>>>> The general picture I'd like to see is that once a prefetch returns
>> >>>>>>> from the TLB, the CPU does something like:
>> >>>>>>>
>> >>>>>>> if (inst->fault == NoFault) {
>> >>>>>>>     access the cache
>> >>>>>>> } else if (inst->isPrefetch()) {
>> >>>>>>>     maybe set a flag if necessary
>> >>>>>>>     inst->fault = NoFault;
>> >>>>>>> }
>> >>>>>>>
>> >>>>>>> ...so basically everywhere else down the pipeline where we check for
>> >>>>>>> faults, we don't have to explicitly exempt prefetches from normal
>> >>>>>>> fault handling.
>> >>>>>>>
>> >>>>>>> If there are points past this one where we really care to know if a
>> >>>>>>> prefetch accessed the cache or not, then maybe we need a flag to
>> >>>>>>> remember that (sort of a dynamic version of the NO_ACCESS static
>> >>>>>>> flag), but I don't know if that's really necessary or not.  Clearly
>> >>>>>>> if the cache access doesn't happen right there, then we can add the
>> >>>>>>> flag and use it later to decide whether to access the cache.
>> >>>>>>>
>> >>>>>>> Anyway, this is the flavor I was going for... any issues with it?
>> >>>>>>>
>> >>>>>>> Steve
>> >>>>>>>
_______________________________________________
m5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/m5-dev
