Ah, yes, you're right. I misread your first email and confirmed my bad
assumptions with your second. O3 may or may not work with SPARC_FS. It
shouldn't be horribly broken, but I don't know if it will work 100% either.
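
On the DerivO3CPU NameError you hit: that usually means the m5 binary
was compiled without the O3 model, not that fs.py itself is broken. If
I remember right, the set of compiled-in models is controlled by the
CPU_MODELS scons variable (sticky default in build_opts/SPARC_FS), so
something like this should pull O3 into the build (a sketch; double
check scons --help):

scons CPU_MODELS=AtomicSimpleCPU,TimingSimpleCPU,O3CPU build/SPARC_FS/m5.opt

With that, fs.py's -d option should at least find the DerivO3CPU class,
whatever else O3 does or doesn't do on SPARC_FS.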

Gabe

prasun gera wrote:
> Mr. Gabe,
> I had consulted the help earlier, and the -d option to m5 just
> determines the output directory, whereas, as you said, -d to the script
> selects the detailed model. My command line,
> build/SPARC_FS/m5.opt -v -d /tmp/output/ configs/example/fs.py
> only passed the output-directory option to m5 and no options to the
> script. And as I said, after reading your mail, when I deliberately
> passed the -d option to the script (along with --caches), it threw
> the error I mentioned. So on a related note, is the O3 model supported
> for SPARC_FS?
>
> On Mon, Feb 15, 2010 at 7:11 AM, Gabe Black <[email protected]> wrote:
>> When you pass the -d option to fs.py, you select the "detailed", aka O3,
>> CPU model. If you leave that off, you'll use the simple CPU. You can pass
>> --help to either M5 or the configuration script, depending on where you
>> put it on the command line: before the script it's for M5, and after it
>> it's for the script.
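>>
>> For example (illustrative, using your binary and script):
>>
>> build/SPARC_FS/m5.opt --help                         # M5's own options
>> build/SPARC_FS/m5.opt configs/example/fs.py --help   # the script's options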
>>
>> Gabe
>>
>> prasun gera wrote:
>>> Hi,
>>> You mentioned that I'm using the O3 CPU model. Isn't the default model
>>> simple atomic? I didn't pass any arguments to the script fs.py,
>>> and from setCPUClass it seemed as though it was using the simple
>>> atomic model.
>>> In fact, later I tried the command line
>>>
>>> build/SPARC_FS/m5.opt -v -d /tmp/output/ configs/example/fs.py -d --caches
>>>
>>> to use the detailed CPU model, but it threw an error:
>>> NameError: global name 'DerivO3CPU' is not defined.
>>>
>>>
>>> On Sat, Feb 13, 2010 at 6:56 AM, Gabriel Michael Black
>>> <[email protected]> wrote:
>>>> It looks like the simulation ran out of things to do and stopped at
>>>> the end of simulated time: that cycle count, 9223372036854775807, is
>>>> 2^63 - 1, the maximum tick, which is where simulate() ends up when
>>>> nothing is left in the event queue. You could use the Exec trace flag
>>>> to see what, if anything, is executing when that happens. If the
>>>> simulation runs for a while before failing, Exec will output a lot of
>>>> text, so you'll want to start tracing close to the interesting point.
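>>>>
>>>> Something like this (a sketch; check m5.opt --help for the exact
>>>> option names, and pick a start tick shortly before the exit):
>>>>
>>>> build/SPARC_FS/m5.opt --trace-flags=Exec --trace-start=<tick> \
>>>>     configs/example/fs.py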
>>>>
>>>> One other thing I notice is that you're using the O3 CPU model with
>>>> SPARC_FS. While that model should work with SPARC_SE and SPARC_FS
>>>> works with the simple CPUs, I don't know if anyone ever got the bugs
>>>> worked out of that particular combination (someone please say
>>>> something if you know otherwise). That makes me think that O3 is
>>>> losing track of work that it needs to do, thinks it should become
>>>> idle, and effectively goes to sleep and never wakes up.
>>>>
>>>> Gabe
>>>>
>>>> Quoting prasun gera <[email protected]>:
>>>>
>>>>> I could boot Solaris in SPARC_FS, but m5 exited abruptly after that
>>>>> with the following message:
>>>>> Exiting @ cycle 9223372036854775807 because simulate() limit reached
>>>>>
>>>>> The command line I executed was:
>>>>> build/SPARC_FS/m5.opt -v -d /tmp/output/ configs/example/fs.py
>>>>>
>>>>> Host system: Ubuntu 32 bit
>>>>>
>>>>> I tried it twice, and it quit at the same cycle count both times.
>>>>> To ascertain whether the error was caused by something I did, I
>>>>> didn't enter anything on the Solaris terminal the second time, i.e.,
>>>>> the machine was idle for the entire duration except for the boot
>>>>> command at the OBP prompt. Has anyone run into a similar error? Any
>>>>> hints on debugging this?
>>>>>
>>>>>
>>>>> On Fri, Feb 12, 2010 at 10:26 PM, Ali Saidi <[email protected]> wrote:
>>>>>> The original binaries should work just fine; the _new versions are ones
>>>>>> we verified we could compile from source.
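>>>>>>
>>>>>> For reference, m5 finds them through the M5_PATH environment
>>>>>> variable; the layout it expects is roughly (from memory):
>>>>>>
>>>>>> ${M5_PATH}/binaries/   (openboot_new.bin, q_new.bin, ...)
>>>>>> ${M5_PATH}/disks/      (the disk images)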
>>>>>>
>>>>>> Ali
>>>>>>
>>>>>>
>>>>>> On Fri, 12 Feb 2010 20:50:07 +0530, prasun gera <[email protected]>
>>>>>> wrote:
>>>>>>> Figured it out. I copied the files to the binaries and disks
>>>>>>> directories and could run configs/example/fs.py after that. One small
>>>>>>> thing though: the names of the Solaris binaries used in m5 have _new
>>>>>>> as a suffix (e.g., openboot_new.bin and q_new.bin). Does that mean the
>>>>>>> original binaries from OpenSPARC need to be modified in some way?

_______________________________________________
m5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/m5-users
