Hi Andreas,

We (Anju and I) are working on this together. So far, no luck; we have
migrated to a 32 GB machine. We will let you know if something works.


On Wed, Mar 12, 2014 at 3:57 PM, Andreas Hansson <[email protected]> wrote:

> Hi Anju,
>
> What was the outcome on this one? Any luck?
>
> Thanks,
>
> Andreas
>
> From: Andreas Hansson <[email protected]>
>
> Reply-To: gem5 users mailing list <[email protected]>
> Date: Monday, 10 March 2014 14:08
>
> To: gem5 users mailing list <[email protected]>
> Subject: Re: [gem5-users] mmap error when I increase mem-size
>
> Hi Anju,
>
> Have a look at:
> http://stackoverflow.com/questions/4803152/mmap-fails-when-length-is-larger-than-4gb
>
> I suspect it is your swap allocation that is running out. Could you
> try adding MAP_NORESERVE to the mmap flags in physical.cc?
>
> Note that this flag is not POSIX, so we would have to make it
> OS-dependent if we wanted to make it the default behaviour.
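>
> As a sketch of what that change might look like (illustrative only; the
> exact code in createBackingStore() in your checkout may differ):
>
>     // mem/physical.cc, createBackingStore() -- illustrative sketch,
>     // assuming <sys/mman.h> is included. MAP_NORESERVE is a
>     // Linux-specific flag (not POSIX) that tells the kernel not to
>     // reserve swap space for the mapping, so a 4 GB request is not
>     // rejected up front by the overcommit/swap accounting.
>     uint8_t* pmem = (uint8_t*) mmap(NULL, range.size(),
>                                     PROT_READ | PROT_WRITE,
>                                     MAP_ANON | MAP_PRIVATE | MAP_NORESERVE,
>                                     -1, 0);
>
>     if (pmem == (uint8_t*) MAP_FAILED) {
>         perror("mmap");
>         fatal("Could not mmap %d bytes for range %s!\n",
>               range.size(), range.to_string());
>     }
>
> With MAP_NORESERVE the pages are only committed as they are actually
> touched, which is usually what a simulator backing store wants; the
> trade-off is that the process can be killed later if the host genuinely
> runs out of memory and swap.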
>
> Andreas
>
> From: Anju M A <[email protected]>
> Reply-To: gem5 users mailing list <[email protected]>
> Date: Monday, 10 March 2014 13:55
> To: gem5 users mailing list <[email protected]>
> Subject: [gem5-users] mmap error when I increase mem-size
>
> Hello,
>
> When I try to run the mcf application with a mem-size of 4096MB, I get
> the following error:
>
>
> mmap: Invalid argument
> fatal: Could not mmap 4294967296 bytes for range [0 : 0xffffffff]!
>  @ cycle 0
> [createBackingStore:build/ALPHA/mem/physical.cc, line 158]
> Memory Usage: 37068 KBytes
>
> The command used to run the simulation is:
> ./build/ALPHA/gem5.fast ./configs/example/se.py -n 1 --caches --l2cache
> --cpu-type=detailed --bench mcf --maxinsts 500000000 --fast-forward
> 5000000000 --warmup-insts 250000000 --mem-type=DDR3_1600_x64
> --mem-channels=1 --mem-size=4096MB --sys-clock=2.4GHz
>
>
> free -m gives this result on my system:
>
>                    total       used       free     shared    buffers     cached
> Mem:               16157       2960      13196          0        109        988
> -/+ buffers/cache:             1863      14294
> Swap:               9535          0       9535
>
> If I have around 13 GB of free memory on my system, why is mmap failing?
> How can I resolve this issue?
> Any help would be much appreciated.
>
> --
> Thanks & Regards,
> Anju
>
> -- IMPORTANT NOTICE: The contents of this email and any attachments are
> confidential and may also be privileged. If you are not the intended
> recipient, please notify the sender immediately and do not disclose the
> contents to any other person, use it for any purpose, or store or copy the
> information in any medium. Thank you.
>
> ARM Limited, Registered office 110 Fulbourn Road, Cambridge CB1 9NJ,
> Registered in England & Wales, Company No: 2557590
> ARM Holdings plc, Registered office 110 Fulbourn Road, Cambridge CB1 9NJ,
> Registered in England & Wales, Company No: 2548782
>
>
>



-- 


Thanks & regards,
BISWABANDAN
http://www.cse.iitm.ac.in/~biswa/

"We might fall down, but we will never lay down. We might not be the best,
but we will beat the best! We might not be at the top, but we will rise."
_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
