Yeah, the more I think about it, the more I think there must be an x86 bug. I do need to test my patches a bit more (i.e., run full regressions), and assuming they don't do anything horrible, those can go in. Then I'll have to fix whatever the newly uncovered issue is.

Gabe

Quoting Steve Reinhardt <ste...@gmail.com>:

Congratulations, that's awesome!

The ifetch pipelining patch does fix a pretty significant simulated
performance issue, so we don't want to lose it. If you push your changes,
does it break anything else, or just X86_FS O3? One option would be to push
your changes but wait to enable the regression until you or someone else
resolves this conflict.

Steve

On Tue, Jul 12, 2011 at 4:27 AM, Gabe Black <gbl...@eecs.umich.edu> wrote:

Hey folks, I sort of have an X86_FS O3 regression working! Sort of yay!
The problem is that this recent change:

O3: Fix up pipelining icache accesses in fetch stage to function properly

seems to break it and make it hang indefinitely. It was recent enough
that I had everything working, pulled this in, and then mysteriously it
started hanging. I should mention that I don't think this code was sent
out for review in this form, although I doubt I would have jumped at the
opportunity, and I do remember an earlier version being sent out.

It's late, and it's not immediately clear to me if there's something
wrong with this change, or if it subtly modifies what one of my own
patches does. It's also possible that this modifies some aspect of timing
that exposes a bug in X86.

One potential problem, though, is that pipelined ifetches might
implicitly act like a cache that isn't necessarily maintained like one.
Maybe fetch is getting stale data, which is why it gets stuck? Or maybe
the problem I complained about earlier, with interrupts not getting
through, is happening after all?

I'd really like to get my regression checked in, so I'm tempted to assume
this change is to blame and back it out; I really don't want to have to
pick at it and figure out what's going on. At the same time, I'd be making
a major assumption about my own patches, and I'd be pushing the annoyance
of finding the problem onto the other guy. It would be great if there were
a magical way to determine whose fault it was ahead of time and to just
make whoever that was take care of it. What's a good way to handle this?

Gabe

On 07/12/11 04:10, Gabe Black wrote:
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.m5sim.org/r/785/
> -----------------------------------------------------------
>
> Review request for Default, Ali Saidi, Gabe Black, Steve Reinhardt, and Nathan Binkert.
>
>
> Summary
> -------
>
> X86: Add an X86_FS o3 regression.
>
>
> Diffs
> -----
>
>   tests/long/10.linux-boot/ref/x86/linux/pc-o3-timing/config.ini PRE-CREATION
>   tests/long/10.linux-boot/ref/x86/linux/pc-o3-timing/simerr PRE-CREATION
>   tests/long/10.linux-boot/ref/x86/linux/pc-o3-timing/simout PRE-CREATION
>   tests/long/10.linux-boot/ref/x86/linux/pc-o3-timing/stats.txt PRE-CREATION
>   tests/long/10.linux-boot/ref/x86/linux/pc-o3-timing/system.pc.com_1.terminal PRE-CREATION
>
> Diff: http://reviews.m5sim.org/r/785/diff
>
>
> Testing
> -------
>
>
> Thanks,
>
> Gabe
>

_______________________________________________
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev
