On Thu, Sep 26, 2013 at 2:54 PM, Stefan Wallentowitz <[email protected]> wrote:
> Dear all,
>
> we (Simon Schulze and I) are working heavily on a cache-coherent version of mor1kx at the moment. We are more or less done with the basic snooping version for bus-based systems. The next step is the integration with our directory-based L2-cache coherency.
>
> As we are about to push this stuff to my github repository in the next days, I am still concerned about openrisc/mor1kx and our mor1kx diverging too much.
>
> Precisely, the most crucial problem is the LSU:
> * The LSU contains a store buffer. A data write pushes data to the store buffer and writes it to the cache.
> * The store buffer can be problematic with regard to coherency and the (as yet undefined for OpenRISC) consistency model. You can imagine a delayed write overwriting a concurrent write on the same cache block, etc.
> * This may be no problem if the consistency model allows it, but the cache should not think it's in the modified state until it is really sure (this can be a pain in the *** as you will find out sometime later).
> * There are of course various ways to avoid this in the current setup, but they become arbitrarily complex once you think them through.
> * In the naive (and most transparent) implementation, the cache performs the writes itself. It first accesses the bus and then updates the tag memory when the write was successful. This is what we did.
>
> We have been thinking extensively about this problem and then unfortunately removed the store buffer and most of the (honestly: confusing) wiring in the LSU.

Yes, the wiring in cappuccino's LSU has become confusing, I know... It got that way through the additions of the store buffer and the TLB reload, and the plan I had to cure it unfortunately doesn't go well with what you lay out below... I wanted to remove *all* bus access logic from the dcache and control it completely from the state machine in the LSU (the writes were already moved out of the cache with the store buffer addition, but refills are still left).
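The delayed-write hazard from the quoted list above can be sketched in a toy Python model (all class and method names here are my own illustration, not mor1kx code): a write parked in one core's store buffer drains after a concurrent write to the same location, so the older value lands last and the newer one is lost.

```python
# Toy model of the store-buffer coherency hazard: core A's write sits
# in a store buffer and drains later, clobbering core B's intervening
# write to the same block. Illustrative only, not mor1kx code.

class Core:
    def __init__(self, memory):
        self.memory = memory          # shared backing store (addr -> value)
        self.store_buffer = []        # pending (addr, value) writes

    def buffered_write(self, addr, value):
        # The write is acknowledged immediately but only queued.
        self.store_buffer.append((addr, value))

    def drain(self):
        # Pending writes reach memory some cycles later.
        for addr, value in self.store_buffer:
            self.memory[addr] = value
        self.store_buffer.clear()


memory = {0x100: 0}
core_a = Core(memory)

core_a.buffered_write(0x100, 1)   # A's write is queued, not yet visible
memory[0x100] = 2                 # B writes the same block directly
core_a.drain()                    # A's *older* write now lands last

print(memory[0x100])              # 1 -- B's newer value was lost
```

This is why the cache should not consider the line modified at issue time: the ordering seen by the rest of the system is decided only when the buffered write actually drains.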
> While we pack it up for a first version now, I see two realistic options for not diverging (i.e., allowing for FEATURE_MULTICORE-based common modules):
>
> 1. Have two cappuccino lsu+cache implementations which are instantiated based on the activated multicore feature. Alternatively this may be done inside the modules with a massive number of "generate if(FEATURE_MULTICORE) ... else ... endgenerate" / "assign ... = !FEATURE_MULTICORE ? .. : .." constructs etc.; I would definitely prefer the first.
>
> 2. We move the LSU behind the cache, so that we have a linear chain CPU->MMU->Cache->SB->Bus-IF. This may cost an extra cycle here and there, as the way to the store buffer may cross a register or so. The clear advantage is that LSU and cache can stay common for the baseline and multicore variants.
>
> From a pragmatic point of view the first one seems the easiest. From a non-divergence standpoint the second might be better.
>
> What are your opinions? We would do the work, but for option 2 we would of course like to see a path to upstream. So if you don't see a chance for this change, we will stick to option 1, which is also perfectly fine with us.

Even with the above statements I'm strongly in favor of alternative number 2. I have had thoughts of playing with multicore implementations with cappuccino as well (long-term future plans, but still), and it would make no sense at all to let this effort diverge. However, the "cost an extra cycle here and there" needs to be more specific. To be clear, the main motivation for the store buffer was to ensure single-cycle writes, as opposed to the two-cycle writes that are the fastest achievable Wishbone accesses without resorting to bursts. And I'm not sure I completely understand how "behind" the cache differs from how it is currently implemented; the cache is not behind the store buffer as it is now, the stores happen simultaneously to the store buffer and the cache.
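Option 2's linear chain (CPU->MMU->Cache->SB->Bus-IF) can be sketched as a toy Python model (stage, class, and method names are my own illustration, not mor1kx code): with the cache sitting in front of the store buffer, the cache can gate what enters the buffer and only mark a line modified once the write is actually on its way out, rather than optimistically at issue time.

```python
# Sketch of the proposed linear chain: the cache sits in front of the
# store buffer and decides whether a store may enter it at all.
# Illustrative only, not mor1kx code.

class StoreBuffer:
    def __init__(self, bus):
        self.bus = bus                # backing "bus" (addr -> value)
        self.pending = []

    def push(self, addr, value):
        self.pending.append((addr, value))

    def drain(self):
        for addr, value in self.pending:
            self.bus[addr] = value
        self.pending.clear()


class Cache:
    def __init__(self, store_buffer):
        self.store_buffer = store_buffer
        self.lines = {}               # addr -> (value, state)

    def may_write(self, addr):
        # Placeholder for a coherency check (ownership, invalidations).
        return True

    def write(self, addr, value):
        # The cache vets the store before it reaches the store buffer,
        # and transitions the line to modified only when the store is
        # actually accepted.
        if self.may_write(addr):
            self.store_buffer.push(addr, value)
            self.lines[addr] = (value, "modified")
            return True
        return False                  # e.g. must first gain ownership


bus = {}
sb = StoreBuffer(bus)
cache = Cache(sb)
cache.write(0x100, 42)
sb.drain()
print(bus[0x100])                     # 42
```

Compared to the current simultaneous write into cache and store buffer, this ordering gives the cache a veto point, at the possible cost of the extra cycle the thread discusses.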
So if the cache wants to prevent the store from happening to the store buffer, that would be possible. But I'm looking forward to taking a look at your work, and we can continue the discussion from there.

Stefan
_______________________________________________
OpenRISC mailing list
[email protected]
http://lists.openrisc.net/listinfo/openrisc
