> On Feb. 3, 2015, 9:37 p.m., Brandon Potter wrote:
> > The patch seems like a landmine for an unsuspecting user, as the failure 
> > would be difficult to diagnose if swap space is exhausted.  It will 
> > probably be evident that memory exhaustion caused the runtime error 
> > (segmentation fault), but tracing the cause back to this patch will 
> > probably be non-trivial (maybe?).  For a user, a memory allocation 
> > failure from insufficient swap space would probably be preferable to a 
> > hard-to-diagnose segmentation fault.
> > 
> > Even though it is ugly, maybe it's best to add #if 0 / #endif around the 
> > code to allow someone to find this later on and enable the feature if they 
> > need it (with a warning about the segmentation fault).  The memory 
> > allocation failure will point the user to the code in this patch.
> 
> Andreas Hansson wrote:
>     Fair point. An option would be to not use the flag for "sensible" sizes, 
> perhaps <16 GB or so, and adopt it for larger sizes with a warning?
> 
> Brandon Potter wrote:
>     Perhaps that's fine, although I think that the value for "sensible" 
> will vary from person to person.
>     
>     Personally, I still feel that it is better to leave it completely 
> disabled and allow someone to enable it if needed (solely to prevent 
> confusion later).  It is trivial to enable the functionality by modifying 
> the condition.  Another avenue might be to use a preprocessor define and an 
> #ifdef to enable/disable this at compile time (sketched below), though 
> that might be overkill.
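>     
>     A minimal sketch of such a compile-time guard (the define name and the 
> map_flags variable are hypothetical, not actual code from this patch):
>     
>         // Opt in at build time, e.g. by compiling with
>         // -DGEM5_MAP_NORESERVE; off by default, so the usual
>         // allocation failure still points the user at this code.
>         int map_flags = MAP_ANONYMOUS | MAP_PRIVATE;
>         #ifdef GEM5_MAP_NORESERVE
>         map_flags |= MAP_NORESERVE;
>         #endif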
>     
>     Overall, maybe this is just paranoia on my part; it does seem unlikely to 
> cause a problem in practice.
> 
> Andreas Hansson wrote:
>     The input is much appreciated.
>     
>     My hesitation with the approach of asking the user to enable it if 
> needed is that it means yet another parameter to keep track of (or yet 
> another patch to manage). If we go for the "limit approach", perhaps with a 
> large value like 1 TB?
>     
>     I also think this is unlikely to cause problems the way it is, and as 
> such I would prefer not to add more configurability unless you really 
> think it is needed.
> 
> Brandon Potter wrote:
>     Seems reasonable.  There is a tradeoff either way.

So what do people think?

1) Good as it is (avoid more switches and magic limits)?
2) Add a limit, e.g. 1 TB and use above that?
3) Add a switch and force the user to explicitly set it?

In the spirit of keeping things simple I'd suggest (1).
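
For reference, a minimal sketch of what (2) could look like (the
threshold constant and variable names are illustrative, not actual code
from the patch):

    // Only over-commit for very large simulated memories; below the
    // threshold, keep the default behaviour so an allocation failure
    // surfaces at mmap time rather than as a segfault much later.
    const uint64_t no_reserve_threshold = 1ULL << 40; // 1 TB
    int map_flags = MAP_ANONYMOUS | MAP_PRIVATE;
    if (range_size > no_reserve_threshold) {
        warn("Not reserving swap space for the backing store; the "
             "simulation may crash if host memory is exhausted");
        map_flags |= MAP_NORESERVE;
    }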


- Andreas


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2624/#review5821
-----------------------------------------------------------


On Feb. 3, 2015, 7:57 p.m., Andreas Hansson wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/2624/
> -----------------------------------------------------------
> 
> (Updated Feb. 3, 2015, 7:57 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> -------
> 
> Changeset 10689:568e9a533cf0
> ---------------------------
> mem: mmap the backing store with MAP_NORESERVE
> 
> This patch ensures we can run simulations with very large simulated
> memories (at least 64 TB based on some quick runs on a Linux
> workstation). In essence, this allows us to efficiently deal with
> sparse address maps without having to implement a redirection layer in
> the backing store.
> 
> This opens up the possibility of run-time errors if we eventually
> exhaust the host's memory and swap space, but this should hopefully
> never happen.
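> 
> For illustration, the mechanism boils down to something like the
> following (variable names and error handling are a sketch, not the
> exact code in physical.cc):
> 
>     #include <cstdint>
>     #include <sys/mman.h>
> 
>     // Reserve virtual address space for the simulated memory without
>     // committing physical memory or swap up front: with MAP_NORESERVE
>     // the host only backs the pages the simulation actually touches.
>     uint8_t *pmem = (uint8_t *)mmap(NULL, range_size,
>                                     PROT_READ | PROT_WRITE,
>                                     MAP_ANONYMOUS | MAP_PRIVATE |
>                                     MAP_NORESERVE, -1, 0);
>     if (pmem == (uint8_t *)MAP_FAILED) {
>         perror("mmap");
>         fatal("Could not map backing store for physical memory");
>     }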
> 
> 
> Diffs
> -----
> 
>   src/mem/physical.cc 7639c17357dc 
> 
> Diff: http://reviews.gem5.org/r/2624/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Andreas Hansson
> 
>

_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev
