Hi,
After changing the fatal to panic, the gdb backtrace is as follows:

#3  0x000000000042e235 in DefaultPeerPort::blowUp (this=<value optimized
out>) at build/ALPHA_SE/mem/port.cc:47
#4  0x000000000042e3f9 in DefaultPeerPort::deviceBlockSize (this=0x142) at
build/ALPHA_SE/mem/port.cc:80
#5  0x000000000071503c in peerBlockSize (this=0x13e3af0) at
build/ALPHA_SE/mem/port.hh:217
#6  CacheUnit::init (this=0x13e3af0) at
build/ALPHA_SE/cpu/inorder/resources/cache_unit.cc:144
#7  0x00000000007885a3 in ResourcePool::init (this=0x13e38c0) at
build/ALPHA_SE/cpu/inorder/resource_pool.cc:119
#8  0x0000000000a40486 in _wrap_SimObject_init (args=<value optimized out>)
at build/ALPHA_SE/python/m5/internal/param_SimObject_wrap.cc:3571
....
#22 0x00000000007f0fc0 in m5Main (argc=<value optimized out>, argv=<value
optimized out>) at build/ALPHA_SE/sim/init.cc:248
#23 0x0000000000408a61 in main (argc=12, argv=0x7fffffffeb08) at
build/ALPHA_SE/sim/main.cc:57

So, as you said, the ICache port in fetch_unit tries to get the
deviceBlockSize, and since that port is not connected, the call lands on the
default port and it just blows up. So I tried to add a takeOverFrom function
to cpu.cc in the InOrder CPU model, something like:

void
InOrderCPU::takeOverFrom(BaseCPU *oldCPU)
{
    BaseCPU::takeOverFrom(oldCPU, resPool->getIcachePort(),
                          resPool->getDcachePort());
}

where getIcachePort and getDcachePort would return the port references from
cache_unit.cc and fetch_unit.cc (a rough sketch of what I mean is below).
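
To be concrete, the accessors I have in mind are along these lines. The
names getIcachePort/getDcachePort and the way the resources are looked up
here are my own invention, not existing m5 code, so treat this as a sketch:

    Port *
    ResourcePool::getIcachePort()
    {
        // The fetch unit owns the instruction-side cache port.
        FetchUnit *fetch_unit =
            dynamic_cast<FetchUnit *>(getResource(ICache));  // assumed lookup
        assert(fetch_unit != NULL);
        return &fetch_unit->cachePort;  // assumed member name
    }

    Port *
    ResourcePool::getDcachePort()
    {
        // The cache unit owns the data-side cache port.
        CacheUnit *cache_unit =
            dynamic_cast<CacheUnit *>(getResource(DCache));  // assumed lookup
        assert(cache_unit != NULL);
        return &cache_unit->cachePort;  // assumed member name
    }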

But I don't understand the code flow well. Where is this takeOverFrom
function supposed to be called from? It does not seem to me that it is
getting called. Where should I make modifications, basically?
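
My best guess at the flow so far: BaseCPU seems to declare the hook roughly
like this (paraphrased, so the exact signatures may differ):

    // The CPU-switch machinery invokes the one-argument virtual on the
    // *new* CPU; derived models like O3 override it.
    virtual void takeOverFrom(BaseCPU *oldCPU);

    // Helper in cpu/base.cc: re-peers the old CPU's cache-port peers onto
    // the new CPU's ports, which is what should get rid of the unconnected
    // default port.
    void takeOverFrom(BaseCPU *oldCPU, Port *ic, Port *dc);

If I read configs/common/Simulation.py right, m5.switchCpus() is what
ultimately triggers the one-argument version through the wrappers, so I
shouldn't have to call it by hand, but it still doesn't look like my
override is being reached.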

Thanks..

On Sat, May 7, 2011 at 10:03 PM, Korey Sewell <[email protected]> wrote:

> the stack unwind could be helpful, though, to figure out which function
> the change needs to be made in. That error message may be propagated from
> something actually trying to connect the port (rather than use the port).
>
> On Sat, May 7, 2011 at 7:16 PM, Gabe Black <[email protected]> wrote:
>
>>  gdb won't be that helpful here. When "fatal" is called, the program
>> doesn't exit immediately with the stack intact; it unwinds the stack and
>> returns more gracefully. If you change it to "panic" you'd be able to get
>> a stack. But really, I don't think the stack will be very useful, since
>> the error message is telling you most of what you need to know: a port
>> wasn't connected that should have been. The only thing you might still
>> need to know is -which- port is unconnected, and it should be pretty easy
>> to add that to the call to fatal. In fact, it's probably a good idea to
>> add that information to the message in the main repository, since it
>> would almost always be useful. Would you mind making that change and
>> submitting it to Review Board?
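>>
>> Something along these lines, say (untested, and "peer" is a guess at how
>> DefaultPeerPort refers back to the real port it stands in for):
>>
>>     // mem/port.cc (sketch): name the dangling port in the message
>>     // instead of the generic "default_port".
>>     void
>>     DefaultPeerPort::blowUp()
>>     {
>>         fatal("%s: Unconnected port!", peer ? peer->name() : name());
>>     }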
>>
>> Gabe
>>
>>
>> On 05/07/11 12:34, reena panda wrote:
>>
>> Hi Korey,
>>
>> Thanks for replying. I tried doing a backtrace in gdb, but it says there
>> is no stack.
>> 0: system.remote_gdb.listener: listening for remote gdb #0 on port 7002
>> fatal: default_port: Unconnected port!
>>  @ cycle 0
>> [blowUp:build/ALPHA_SE/mem/port.cc, line 47]
>> Memory Usage: 2160736 KBytes
>> For more information see: http://www.m5sim.org/fatal/a98d4a8d
>>
>> Program exited with code 01.
>> (gdb) backtrace
>> No stack.
>>
>> Why is that?
>>
>> Thanks
>>
>> On Sat, May 7, 2011 at 1:47 PM, Korey Sewell <[email protected]> wrote:
>>
>>> I haven't done anything with InOrder and switching CPUs, so unless
>>> someone has a patch you'll have to look into adding that code.
>>>
>>> I anticipate it's the same (or similar) code as the O3CPU's
>>> "takeOverFrom" function. You need to figure out which function call is
>>> causing that fatal (gdb backtrace) and then figure out how to implement
>>> the takeOverFrom function in InOrder. If you try and have some problems,
>>> forward those emails to m5-dev.
>>>
>>> Typically, what needs to happen is that the cache ports in the AtomicCPU
>>> need to be reconnected to the ports in the InOrder model so you can run
>>> successfully after the switch. It sounds like this may be a symptom of
>>> that not working. A rough sketch of the O3 version is below.
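>>>
>>> Roughly, from memory (so check the real O3 source, the member names here
>>> may well be off), the O3 version looks something like:
>>>
>>>     // cpu/o3/cpu.cc (approximate): hand BaseCPU the new model's cache
>>>     // ports so it can re-peer the old CPU's connections onto them.
>>>     template <class Impl>
>>>     void
>>>     FullO3CPU<Impl>::takeOverFrom(BaseCPU *oldCPU)
>>>     {
>>>         BaseCPU::takeOverFrom(oldCPU, fetch.getIcachePort(),
>>>                               iew.ldstQueue.getDcachePort());
>>>         // ...plus restoring thread contexts, activity state, etc.
>>>     }
>>>
>>> InOrder would need the equivalent, with the ports dug out of the
>>> resource pool instead.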
>>>
>>>  On Sat, May 7, 2011 at 2:05 PM, reena panda <[email protected]> wrote:
>>>
>>>>  Hi,
>>>> I am trying to run m5 in SE mode with the InOrder CPU model, running
>>>> SPEC2006 benchmarks. If I run a benchmark with only the max-insts
>>>> option set, as follows, it runs to the max insts and quits:
>>>> ./build/ALPHA_SE/m5.opt  ./configs/spec2006/se.py --inorder --caches
>>>> --l2cache -n 1 --maxinsts 100 --bench mcf
>>>>
>>>> But I want to be able to fast-forward some number of instructions in
>>>> atomic mode, then warm up in timing mode, and then switch to the InOrder
>>>> model for simulation. If I use the command line:
>>>> ./build/ALPHA_SE/m5.opt --stats-file=mcf_inorder
>>>> ./configs/spec2006/se.py --inorder --caches --l2cache -n 1
>>>> --fast-forward=5000 --maxinsts 100 -s --bench mcf
>>>>
>>>> I get the following error. What might be the issue, or what am I doing
>>>> wrong here?
>>>>
>>>> Global frequency set at 1000000000000 ticks per second
>>>> 0: system.remote_gdb.listener: listening for remote gdb #0 on port 7002
>>>> fatal: default_port: Unconnected port!
>>>>  @ cycle 0
>>>> [blowUp:build/ALPHA_SE/mem/port.cc, line 47]
>>>> Memory Usage: 2159556 KBytes
>>>> For more information see: http://www.m5sim.org/fatal/a98d4a8d
>>>>
>>>> The following changes were made to the Simulation.py file:
>>>>
>>>>     if options.standard_switch:
>>>>         switch_cpus = [TimingSimpleCPU(defer_registration=True,
>>>>                                        cpu_id=(np+i))
>>>>                        for i in xrange(np)]
>>>>         switch_cpus_1 = [InOrderCPU(defer_registration=True,
>>>>                                     cpu_id=(2*np+i))
>>>>                          for i in xrange(np)]
>>>>
>>>>         for i in xrange(np):
>>>>             switch_cpus[i].system = testsys
>>>>             switch_cpus_1[i].system = testsys
>>>>
>>>>             .....
>>>>
>>>>             # simulation period
>>>>             if options.maxinsts:
>>>>                 switch_cpus_1[i].max_insts_all_threads = options.maxinsts
>>>>
>>>> Thanks,
>>>> Reena
>>>>
>>>
>>>
>>>
>>> --
>>> - Korey
>>>
>>
>>
>
>
>
> --
> - Korey
>
_______________________________________________
m5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/m5-users
