> On Feb. 3, 2015, 9:37 p.m., Brandon Potter wrote:
> > The patch seems like a landmine for an unsuspecting user as it would be 
> > difficult to diagnose if swap space is exhausted.  It will probably be 
> > evident that memory exhaustion caused the runtime error (segmentation 
> > fault), but tracking down the cause to this patch will probably be 
> > non-trivial (maybe?).  For a user, the memory allocation failure from 
> > insufficient swap space would probably be preferable to a hard-to-diagnose 
> > segmentation fault.
> > 
> > Even though it is ugly, maybe it's best to add #if 0 / #endif around the 
> > code to allow someone to find this later on and enable the feature if they 
> > need it (with a warning about the segmentation fault).  The memory 
> > allocation failure will point the user to the code in this patch.
> 
> Andreas Hansson wrote:
>     Fair point. An option would be to not use the flag for "sensible" sizes, 
> perhaps <16 GB or so, and adopt it for larger sizes with a warning?
> 
> Brandon Potter wrote:
>     Perhaps that's fine.  Although, I think that a value for "sensible" will 
> vary from person to person.
>     
>     Personally, I still feel that it is better to leave it completely 
> disabled and allow someone to enable it if needed (solely to prevent 
> confusion later).  It is trivial to enable the functionality by modifying the 
> condition.  Another avenue might be to use a preprocessor define and an 
> #ifdef to enable/disable this during compilation, though that might be 
> overkill.
>     
>     Overall, maybe this is just paranoia on my part; it does seem unlikely to 
> cause a problem in practice.
> 
> Andreas Hansson wrote:
>     The input is much appreciated.
>     
>     My hesitation about the approach of asking the user to enable it if 
> needed is that it means yet another parameter to keep track of (or yet 
> another patch to manage). If we go for the "limit approach", perhaps a large 
> value like 1 TB?
>     
>     I also think this is unlikely to cause problems the way it is, and as 
> such I would prefer not to add more configurability unless you really think 
> it is needed.
> 
> Brandon Potter wrote:
>     Seems reasonable.  There is a tradeoff either way.
> 
> Andreas Hansson wrote:
>     So what do people think?
>     
>     1) Good as it is (avoid more switches and magic limits)?
>     2) Add a limit, e.g. 1 TB and use above that?
>     3) Add a switch and force the user to explicitly set it?
>     
>     In the spirit of keeping things simple I'd suggest (1).
> 
> Steve Reinhardt wrote:
>     Hi Andreas,
>     
>     I'm curious about the scenarios you have where you want to simulate a 
> system with TBs of physical memory but don't expect to touch more than a few 
> GB of it.  That seems like a special case to me.  I sympathize with 
> Brandon's point, and I don't want to make things more confusing for what I 
> would guess is the majority of users (who would get weird delayed faults 
> when they could have been told up front that they need more swap) just to 
> make the default case work for what I would guess is a minority of users who 
> probably know they're unusual by trying to simulate really large memories.
>     
>     My inclination would be to make the use of MAP_NORESERVE for the backing 
> store a SimObject option that's false by default (maintaining the current 
> behavior).  At AMD, we end up writing custom top-level python scripts for our 
> major configurations anyway, so if we wanted to hardwire that to true for 
> some of those configs, we could just do that in our custom script.
>     
>     If, in some configurations, people really want to define a threshold to 
> do this automatically, then that can be done in python.
>     
>     Or to put it another way, I definitely want to enable you to use 
> MAP_NORESERVE without having to maintain it as a patch on the gem5 tree, but 
> since I haven't heard any clamoring from the broader user population for this 
> change (did I miss it?), I'm not too enthusiastic about enabling it by 
> default.

I'll add a switch.

This patch allows a sparse address map without actually implementing a sparse 
memory in gem5 or adding another level of indirection. It simply leaves it to 
the OS to allocate only the pages we touch, and does not complain if they are 
very far apart in the physical address space.
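
For anyone reading this later, here is a minimal standalone sketch of what the 
mapping boils down to. It is not the actual code in src/mem/physical.cc; the 
1 TB size and the variable names are purely illustrative. It only demonstrates 
the MAP_NORESERVE behaviour discussed above: the host kernel skips the 
swap-space reservation and only backs the pages that are actually touched.

    // Illustrative sketch only -- not the gem5 implementation.
    #include <sys/mman.h>

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        const uint64_t size = uint64_t(1) << 40; // e.g. a 1 TB simulated range

        // Without MAP_NORESERVE a mapping this large may be refused up front;
        // with it, mmap succeeds and a later shortage of host memory and swap
        // shows up as a fault on access instead (the trade-off discussed in
        // this thread).
        void *pmem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                          MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE,
                          -1, 0);
        if (pmem == MAP_FAILED) {
            perror("mmap");
            return EXIT_FAILURE;
        }

        // Touch a few widely separated pages; only these become resident.
        uint8_t *p = static_cast<uint8_t *>(pmem);
        p[0] = 1;
        p[size / 2] = 2;
        p[size - 1] = 3;

        munmap(pmem, size);
        return EXIT_SUCCESS;
    }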


- Andreas


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2624/#review5821
-----------------------------------------------------------


On Feb. 3, 2015, 7:57 p.m., Andreas Hansson wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/2624/
> -----------------------------------------------------------
> 
> (Updated Feb. 3, 2015, 7:57 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> -------
> 
> Changeset 10689:568e9a533cf0
> ---------------------------
> mem: mmap the backing store with MAP_NORESERVE
> 
> This patch ensures we can run simulations with very large simulated
> memories (at least 64 TB based on some quick runs on a Linux
> workstation). In essence this allows us to efficiently deal with
> sparse address maps without having to implement a redirection layer in
> the backing store.
> 
> > This opens up the possibility of run-time errors if we eventually exhaust
> > the host's memory and swap space, but this should hopefully never happen.
> 
> 
> Diffs
> -----
> 
>   src/mem/physical.cc 7639c17357dc 
> 
> Diff: http://reviews.gem5.org/r/2624/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Andreas Hansson
> 
>

_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev
