> On Feb. 16, 2012, 1:08 a.m., Andreas Hansson wrote:
> > Could you provide a bit more background on what system this intends to 
> > capture and where/how/when this is needed?
> > 
> > My initial feeling is that we need to think carefully about this, design 
> > the functionality slightly differently, and make sure we can support: 
> > 1) multiple distributed memories in the system (without requirements on 
> > size, location, etc.), 2) a non-contiguous address map (either a global 
> > one, or per master), and do so without any magic constants.
> > 
> > I've been working on a patch that wraps all the memories in the system in 
> > a "memoryspace" that can fill the role of the current system.physmem 
> > structure, i.e. a global chunk where you can find the total size and the 
> > valid address ranges. This non-structural collection of the memory system 
> > would simply get populated when we instantiate the real memories in the 
> > system (i.e. PhysMem etc.). The individual memories are, for now, all 
> > contiguous, but you could have as many of them as you want and thus chop 
> > up the address map.
> 
> Nilay Vaish wrote:
>     Andreas, for the x86 architecture, the addresses from 0xC0000000 to 
>     0xFFFFFFFF are reserved for devices. Hence the physical memory can be 
>     at most 3GB in size, because of the assumption that it needs to be 
>     contiguous (as you mentioned). While trying to remove this limit, I 
>     considered two possible choices. One is that a single physical memory 
>     can cater to multiple different address ranges. The second choice is 
>     to have multiple physical memories in the system. In order to have 
>     multiple different physical memories, I would have to figure out how 
>     to connect them correctly. So instead, I decided on having a single 
>     physical memory that can support multiple address ranges.
>     
>     Does the choice really matter, i.e. aren't the two approaches 
>     equivalent?
> 
> Ali Saidi wrote:
>     I meant to edit my review, not publish it, so I'll comment here 
>     instead. I think there is a difference, because if you want to have 
>     memories with different characteristics, you can't with the current 
>     mechanism.
>
> 
> Nilay Vaish wrote:
>     I agree that having multiple physical memories is a better solution. 
>     Andreas, how soon will you be able to finish your proposed patch? Do 
>     you need any help with this?
> 
> Andreas Hansson wrote:
>     The "memory space", a.k.a. "no more physmem" patch is probably a few 
> weeks away. There are a bunch of patches in our queue that we need to get out 
> first, but once that is sorted it shouldn't be a problem.
>     
>     I'll let you know if any help is needed.
>

The distributed-memory patch has arrived! Hopefully it solves this problem, 
Nilay, and we can close this one.
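
For anyone comparing the two approaches in a config script, the difference 
looks roughly like this (a sketch only; the parameter names and latency 
values below are illustrative, not the exact interface of either patch):

    # Sketch of a gem5 config fragment; assumes an existing System()
    # instance called system.
    from m5.objects import *

    # Approach 1 (this review): one physical memory backing several
    # address ranges, skipping the region reserved for devices.
    system.physmem = PhysicalMemory(range = [AddrRange('3GB'),
                                             AddrRange('4GB', '5GB')])

    # Approach 2 (distributed memories): several independent memories,
    # each contiguous, which together chop up the address map and can
    # have different characteristics, e.g. different latencies.
    system.mem0 = PhysicalMemory(range = AddrRange('3GB'),
                                 latency = '30ns')
    system.mem1 = PhysicalMemory(range = AddrRange('4GB', '5GB'),
                                 latency = '50ns')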


- Andreas


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/1050/#review2152
-----------------------------------------------------------


On Feb. 15, 2012, 3:07 p.m., Nilay Vaish wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/1050/
> -----------------------------------------------------------
> 
> (Updated Feb. 15, 2012, 3:07 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Description
> -------
> 
> Changeset 8852:3c033ec380b5
> ---------------------------
> Extend physical memory beyond 4GB
> The patch adds a list of address ranges to the physical memory instead of
> a single address range. It has been tested with the X86 architecture so
> far.
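> 
> For example, a 4GB x86 system can then be described as 3GB below the 
> device hole plus 1GB remapped above 4GB, roughly as follows (illustrative 
> only; see the diff for the exact interface):
> 
>   system.physmem = PhysicalMemory(range = [AddrRange('3GB'),
>                                            AddrRange('4GB', '5GB')])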
> 
> 
> Diffs
> -----
> 
>   configs/common/Benchmarks.py ef8630054b5e 
>   configs/common/FSConfig.py ef8630054b5e 
>   configs/ruby/MESI_CMP_directory.py ef8630054b5e 
>   configs/ruby/Ruby.py ef8630054b5e 
>   src/mem/PhysicalMemory.py ef8630054b5e 
>   src/mem/dram.cc ef8630054b5e 
>   src/mem/physical.hh ef8630054b5e 
>   src/mem/physical.cc ef8630054b5e 
> 
> Diff: http://reviews.gem5.org/r/1050/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Nilay Vaish
> 
>

_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev
