That is weird. I had the same issue with older versions and increasing the
memory was the solution. What does "in_addr_map = True" mean then?
I used PhysicalMemory for physmem.
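
Incidentally, the faulting address itself tells us which memory size would cover it. A quick sanity check (plain Python arithmetic on the address from the panic message, not gem5 code):

```python
# Address reported by the panic: "Tried to read unmapped address 0x140495a85"
fault_addr = 0x140495A85

GiB = 1 << 30  # 1 GiB in bytes

# Where the address sits relative to common physmem sizes.
print(f"faulting address = {fault_addr / GiB:.2f} GiB")    # ~5.00 GiB
print("covered by 4 GiB physmem:", fault_addr < 4 * GiB)   # False
print("covered by 8 GiB physmem:", fault_addr < 8 * GiB)   # True
```

So the access lands just above the 4 GiB boundary, which is consistent with an 8 GiB range covering that particular address (though not necessarily any later out-of-range accesses).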




Regards,
Mahmood



On Wed, Feb 19, 2014 at 10:28 PM, Siddharth Nilakantan <[email protected]> wrote:

> Hi All,
>
> I tried Mahmood's suggestion and ended up with the same issue. It reported
> higher memory usage now, but dies at the same address and same cycle
> number!
>
> warn: ignoring syscall futex(1, 7020308, ...)
> warn: ignoring syscall futex(1, 7020308, ...)
> panic: Tried to read unmapped address 0x140495a85.
>  @ cycle 143932527000
> [invoke:build/X86_MESI_CMP_directory/arch/x86/faults.cc, line 160]
> Memory Usage: 8793360 KBytes
> Program aborted at cycle 143932527000
>
> Is there some other way I should be increasing the memory size? I just
> have the following in my se.py:
>
> system = System(cpu = [CPUClass(cpu_id=i) for i in xrange(np)],
> physmem = SimpleMemory(in_addr_map = True,range=AddrRange('8192MB')))
>
> Note that this gem5 version is slightly older, checked out at the
> beginning of last year. Any suggestions would help.
>
> Regards,
> Sid
>
>
> On 19 February 2014 01:42, Siddharth Nilakantan <[email protected]> wrote:
>
>> Hi Mahmood,
>>
>> Thanks for that. Will try 8GB next and report what I find. I linked the
>> executable to the m5 threads library. I also linked in some other custom
>> object code that shouldn't be called when running under Gem5 at all. This
>> is evidenced by the fact that I can also run the same executable natively
>> and confirm that it finishes. (I also did a Valgrind Memcheck just to make
>> sure.)
>>
>> Sid
>>
>>
>> On 18 February 2014 13:23, Mahmood Naderan <[email protected]> wrote:
>>
>>> Hi
>>> Have you modified the code in a way to create new addresses?
>>> You should note that the unmapped address is in the range
>>> 4 GB < address < 8 GB. If you increase the memory to 8 GB, that specific
>>> address will be resolved, but you may see another error for addresses
>>> larger than 8 GB!
>>>
>>> Hope that helps
>>>
>>>
>>> On 2/18/14, Siddharth Nilakantan <[email protected]> wrote:
>>> > Hi All,
>>> >
>>> > I'm using Gem5's SE mode with Splash-2 compiled for m5threads. When
>>> running
>>> > the Cholesky, water-spatial and ocean benchmarks I noticed the "panic:
>>> > Tried to read unmapped address" error. Based on previous questions of
>>> the
>>> > same type, I made sure to try testing the executables under Valgrind.
>>> There
>>> > are no memory leaks, so I'm not sure what is happening.
>>> >
>>> > build/X86_MESI_CMP_directory/gem5.opt configs/example/se.py
>>> > --garnet-network=fixed --topology=Mesh --ruby
>>> > --cmd=~/benchmarks/splash2_gem5se/splash2/codes/kernels/cholesky/CHOLESKY
>>> > --options="-p8 ~/benchmarks/splash2_gem5se/splash2/codes/kernels/cholesky/inputs/tk23.O"
>>> > --num-cpus=8 --num-dirs=8 --num-l2caches=8 --l1d_size=64kB --l1d_assoc=4
>>> > --l1i_size=64kB --l1i_assoc=4 --l2_size=4096kB --l2_assoc=8
>>> > .
>>> > .
>>> > .
>>> >
>>> > warn: ignoring syscall futex(0, 7020308, ...)
>>> > warn: ignoring syscall futex(0, 7020308, ...)
>>> > warn: ignoring syscall futex(0, 7020308, ...)
>>> > warn: ignoring syscall futex(0, 7020308, ...)
>>> > warn: ignoring syscall futex(0, 7020308, ...)
>>> > warn: ignoring syscall futex(0, 7020308, ...)
>>> > warn: ignoring syscall futex(0, 7020308, ...)
>>> > warn: ignoring syscall futex(0, 7020308, ...)
>>> > warn: ignoring syscall futex(0, 7020308, ...)
>>> > warn: ignoring syscall futex(0, 7020308, ...)
>>> > warn: ignoring syscall futex(0, 7020308, ...)
>>> > warn: ignoring syscall futex(0, 7020308, ...)
>>> > warn: ignoring syscall futex(1, 7020308, ...)
>>> > warn: ignoring syscall futex(1, 7020308, ...)
>>> > panic: Tried to read unmapped address 0x140495a85.
>>> >  @ cycle 143932527000
>>> > [invoke:build/X86_MESI_CMP_directory/arch/x86/faults.cc, line 160]
>>> > Memory Usage: 4599016 KBytes
>>> > Program aborted at cycle 143932527000
>>> >
>>> > The system is configured with 4GB of physical memory, and the reported
>>> > memory usage is still higher. It always dies at the same address and
>>> > the same cycle number.
>>> >
>>> > Can anyone help me figure out why this is happening? What flags should
>>> I
>>> > turn on to get more useful information?
>>> >
>>> > Regards,
>>> > Sid
>>> >
>>>
>>>
>>> --
>>> Regards,
>>> Mahmood
>>> _______________________________________________
>>> gem5-users mailing list
>>> [email protected]
>>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>>>
>>
>>
>
>
