I was just rereading this and I realized it's probably confusing. The 
first paragraph is a response to Ali and the second paragraph is a 
response to Steve.

Gabe Black wrote:
> I've thought a lot about this since x86 has a lot of tables and
> mechanisms for this sort of thing which I've implemented to various
> degrees. I think it'd be easier to collect that information at the
> Python level and then propagate it to C++, as opposed to gathering it
> once you start running. In Python, you're still working at a fairly
> high level of abstraction, which makes following relationships and
> getting at values easier. If you did it in C++, you'd need help from
> all the objects to figure out how to get at things, since they can
> store the pointers they get anywhere they want, or even extract the
> information they need and discard them. Also, some things, like the
> in-memory tables in x86, need to be set up before the simulation gets
> going so that the kernel can find them as it's just starting up. You
> could probably slip that in early in the C++ startup sequence, but
> that might get complicated.
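
(To make the Python-to-C++ direction concrete, here's a rough sketch of
what I mean. The class, its Params struct, and the parameter names are
all invented for illustration rather than taken from M5; the point is
just that values worked out in the Python config arrive through the
generated Params object, and startup() runs early enough to plant
tables in guest memory.)

    // Rough sketch only -- CpuidTable, CpuidTableParams, and the
    // parameter names are invented and don't exist in M5 today.
    #include <stdint.h>

    #include <string>

    #include "sim/sim_object.hh"

    class CpuidTable : public SimObject
    {
      public:
        typedef CpuidTableParams Params;

        // Values computed in the Python config show up here through
        // the generated Params object.
        CpuidTable(const Params *p)
            : SimObject(p), vendor(p->vendor), l2Size(p->l2_size)
        {}

        // startup() runs before the simulation begins, which should
        // be early enough to write tables the guest kernel expects
        // to find in memory.
        virtual void startup()
        {
            // e.g. build and write the in-memory tables here
        }

      private:
        std::string vendor;
        uint64_t l2Size;
    };
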
>
> As far as doing it in C++ inside of M5, I don't oppose that idea, but
> I'm not sure where the code would go. Would that be another part of
> the CPU with a bunch of functions for accessing it in the
> thread_context, or part of the microop itself, or what? This actually
> brings up my idea to turn the ISA into a SimObject, since that would
> be an ideal place for that code. I can write a long email about that
> if people are interested.
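
(Roughly what I picture, as a sketch only; the class name and the
method below are made up, and nothing like this exists in M5 yet.)

    // Hypothetical sketch of the "ISA as a SimObject" idea; the class
    // name and method are invented and nothing like this exists in M5
    // today.
    #include <stdint.h>

    #include "sim/sim_object.hh"

    class IsaObject : public SimObject
    {
      public:
        IsaObject(const SimObjectParams *p) : SimObject(p) {}

        // A CPUID-style query routed through the ISA object instead
        // of being baked into microcode; the ThreadContext could hand
        // out a pointer to this object so instructions can reach it.
        virtual bool cpuid(uint32_t function,
                           uint32_t &eax, uint32_t &ebx,
                           uint32_t &ecx, uint32_t &edx) const = 0;
    };
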
>
> Gabe
>
> Ali Saidi wrote:
>   
>> I kind of ran into a similar thing with SPARC. There is configuration
>> code that needs to inform the system about the speed/size/type of
>> various objects. It would be good to have a C++ interface for easily
>> querying the object tree to make those determinations.
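
(To make Ali's suggestion concrete, here's a sketch of the kind of
interface he might mean; all of the names are invented and M5 has
nothing like this today.)

    // Invented interface, not existing M5 code: a C++ way to walk the
    // configured object tree and ask simple questions about it.
    #include <stdint.h>

    #include <string>

    class SimObject;

    class ObjectTreeQuery
    {
      public:
        virtual ~ObjectTreeQuery() {}

        // Look up an object by its configuration path, e.g.
        // "system.l2c", returning NULL if no such object exists.
        virtual SimObject *find(const std::string &path) const = 0;

        // Convenience queries for the cases above: size, speed, and
        // type of a configured object.
        virtual uint64_t sizeOf(const std::string &path) const = 0;
        virtual uint64_t clockOf(const std::string &path) const = 0;
        virtual std::string typeOf(const std::string &path) const = 0;
    };
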
>>
>> Ali
>>
>> On Sep 20, 2008, at 9:42 AM, Steve Reinhardt wrote:
>>
>>> If it's that complicated, why not just do it in C++ inside of M5,  
>>> and have a special microop that just calls that function and lets it  
>>> do the dirty work?  I don't think performance fidelity is an issue  
>>> here, and even if it were, we could always just make that single  
>>> microop take longer.
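
(Something along these lines, I assume; the class, the execute()
signature, and the doCpuid() helper below are only illustrative and
don't match M5's actual microop plumbing.)

    // Hypothetical sketch of a single microop whose execute() just
    // delegates to ordinary C++ that does the heavy lifting. Names
    // and signatures are illustrative, not M5's actual ones, and the
    // relevant M5 headers are assumed.
    class CpuidOp : public X86MicroopBase
    {
      public:
        Fault execute(ExecContext *xc, Trace::InstRecord *trace) const
        {
            // doCpuid() is a made-up helper; it would have access to
            // whatever simulator state it needs, and any latency could
            // be charged to this one microop if fidelity mattered.
            doCpuid(xc);
            return NoFault;
        }
    };
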
>>>
>>> Steve
>>>
>>> On Sat, Sep 20, 2008 at 12:50 AM, Gabe Black <[EMAIL PROTECTED]>  
>>> wrote:
>>>    Now that I'm making the branch microop always use a fixed absolute
>>> micropc, the only place I wasn't already using one, the CPUID
>>> instruction, needs to change. The problem is that, as things are
>>> implemented, it really has to be able to compute its target. The
>>> CPUID instruction basically queries a mostly, but not completely,
>>> static pool of information about the CPU it's run on. For instance,
>>> it can tell you the size of various caches, what version the CPU is,
>>> who the manufacturer was, what instruction extensions are supported
>>> (that's partially where the info in /proc/cpuinfo comes from), blah
>>> blah blah. It's not completely static for two reasons. First,
>>> sometimes certain extensions are implemented in only some modes. I
>>> believe some CPUs turn off the bits for instructions that won't work
>>> in the current mode, although I'm not sure of that, and I think it's
>>> done inconsistently among processors. Second, I believe Intel now
>>> allows you to tamper with the values returned by CPUID so that a
>>> virtualized guest can query freely and not see capabilities that
>>> wouldn't work or that it shouldn't use.
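
(For anyone who hasn't poked at CPUID directly, this is roughly what
guest software does with it. It's a host-side illustration for
GCC/Clang on x86, not M5 code: leaf 0 returns the vendor string in
EBX/EDX/ECX, and leaf 1 returns the feature flags that /proc/cpuinfo
reports.)

    // Host-side illustration of CPUID (GCC/Clang on x86), not M5 code.
    #include <cpuid.h>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        unsigned int eax, ebx, ecx, edx;

        // Leaf 0: maximum basic leaf in EAX, vendor string split
        // across EBX, EDX, ECX in that order.
        if (__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
            char vendor[13] = {0};
            memcpy(vendor + 0, &ebx, 4);
            memcpy(vendor + 4, &edx, 4);
            memcpy(vendor + 8, &ecx, 4);
            printf("vendor: %s, max basic leaf: %u\n", vendor, eax);
        }

        // Leaf 1: family/model/stepping in EAX, feature bits in
        // EDX/ECX.
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            printf("features: edx=%08x ecx=%08x\n", edx, ecx);
        }

        return 0;
    }
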
>>>
>>>    Right now, my implementation of CPUID does a little munging on the
>>> "function" code, which specifies what information you want, and then
>>> goes into what is essentially a big case statement/computed branch
>>> that puts the right values in the right registers and then returns.
>>> As I mentioned, since I'll no longer be able to do computed branches,
>>> this will no longer work. There are lots of other limitations too,
>>> like needing lots of microops to act as a basic lookup table, and the
>>> fact that the information is static and completely unconfigurable.
>>> For instance, the cache would always be reported as the same size,
>>> and if some benchmark tried to use that value to behave in a certain
>>> way, it wouldn't do what it was supposed to. What I'm thinking I'd
>>> want to do is one of two things. Either the CPUID instruction should
>>> do a series of loads out of an actual lookup ROM/RAM somewhere
>>> outside of the CPU, or there could be a CPUID device which would
>>> allow it to respond in intelligent ways depending on the CPU mode,
>>> for instance. I'm favoring sticking a ROM in the memory system
>>> somewhere. Also, I'd like to put in some sort of configuration
>>> interface that would let the configs program in what CPUID should
>>> say, whether that's to reflect the actual hardware or because someone
>>> wants to add a new function.
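
(A rough sketch of the lookup-table direction, with every name below
invented: whether the values sit behind a ROM in the memory system or
behind a CPUID device, the shape is about the same. The configs program
in a result per function code and the instruction just does a lookup.)

    // Invented sketch of a configurable CPUID lookup table; not M5
    // code.
    #include <stdint.h>

    #include <map>

    struct CpuidResult
    {
        uint32_t eax, ebx, ecx, edx;
    };

    class CpuidStore
    {
      public:
        // The configuration side would call this to program the result
        // returned for a given function code.
        void program(uint32_t function, const CpuidResult &result)
        {
            table[function] = result;
        }

        // The CPUID microop (or a CPUID device) would call this
        // instead of branching through a big case statement.
        bool lookup(uint32_t function, CpuidResult &result) const
        {
            std::map<uint32_t, CpuidResult>::const_iterator it =
                table.find(function);
            if (it == table.end())
                return false;
            result = it->second;
            return true;
        }

      private:
        std::map<uint32_t, CpuidResult> table;
    };
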
>>>
>>>    What do you guys think?
>>>
>>> Gabe

_______________________________________________
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev
