Hi again,

Well, the lock survives all resets except power-on.

Yes, and that might be a bug in how we keep state across resets.

Also, we were trying to initialize SMM several times on each boot: once
per CPU and then once in the southbridge code. So with locking enabled
you should actually have seen this message on every boot, not just on
reboots.

No, this was intentional, because I thought it wouldn't go away on INIT. It looks like it does, so maybe we can kill this code.


+               printk(BIOS_DEBUG, "SMM is still locked from last boot, using old handler.\n");
+               return;
+       }
+
+       /* Only copy SMM handler once, not once per CPU */
+       if (!smm_handler_copied) {
+               msr_t syscfg_orig, mtrr_aseg_orig;
+
+               smm_handler_copied = 1;
+
+               // XXX Really?

Yes, if you mess with the MTRRs you need to do that, otherwise the
behavior is undefined (check AMD's system programming manual...).

I know about the cache-disable requirement for MTRR changes.

The comment was supposed to be about whether it's needed to modify SYSCFG
and the fixed ASEG MTRR at all. The changes you are doing should be done
through the SMM_MASK MSR instead. And there is no mention of SYSCFG
changes for SMM in the BKDG. I keep thinking we should not do that at
all.

No, check table 62. If ASeg is enabled, there is no way to write to that memory from outside SMM. The special SYSCFG gymnastics make it possible to write to ASeg, because the MTRR extensions can route it to memory (completely independently of the SMM logic), and this is what I do. Once the SMM logic is on (ASeg enabled), it won't ever let you write to ASeg.



+               disable_cache();
+               syscfg_orig = rdmsr(SYSCFG_MSR);
+               mtrr_aseg_orig = rdmsr(MTRRfix16K_A0000_MSR);
+
+               // XXX Why?

This is because there are AMD extensions that tell whether an MTRR range
is MMIO or memory, and we need them turned on; check the AMD programming
manual.

Which one, where? I am referring to 32559, and I can't find any such
requirement in the SMM chapter. Check SMM_MASK MSR on page 282/283 in
32559.

Well, this has nothing to do with SMM. If SMM is NOT yet enabled, you can route A0000 to memory using the CPU; this is what I do. Once SMM gets enabled, it starts to behave the way you know it from Intel.


That's what we do with the /* enable the SMM memory window */ chunk
below.

No. Check Table 62, "SMM ASeg-Enabled Memory Types".

You see, once ASeg is enabled there is no way to copy anything there from outside SMM.

We need to COPY FIRST and then ENABLE SMM, because until the ASeg enable
is done we can write to that memory as if it were normal memory. (This is
kind of the equivalent of D_OPEN on Intel.)

No, I don't think so. The above does NOT ENABLE SMM, it merely makes the
SMM RAM area accessible. This is the equivalent of D_OPEN on Intel.

No. Enabling ASeg will stop the routing to RAM for accesses made outside of SMM.
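To make the sequence debated above concrete, here is a hypothetical sketch of the SYSCFG route: set MtrrFixDramModEn, point the fixed A0000 MTRR at DRAM, copy the handler, and only then enable the SMM logic. The MSR numbers and bit positions follow my reading of the K8 BKDG (32559); sim_rdmsr()/sim_wrmsr() and the open_aseg() helper are made-up stand-ins so the bit logic can be compiled and checked outside firmware. Real code would also disable_cache() around the MTRR write, as discussed above.

```c
/* Hypothetical sketch of the SYSCFG "gymnastics" described above.
 * MSR numbers and bits follow the AMD K8 BKDG (32559) as I read it;
 * sim_rdmsr()/sim_wrmsr() stand in for real rdmsr()/wrmsr() so this
 * can be compiled and sanity-checked outside of firmware. */

#include <stdint.h>

#define SYSCFG_MSR              0xC0010010
#define MTRRfix16K_A0000_MSR    0x00000259

/* SYSCFG MTRR extensions */
#define MtrrFixDramEn           (1u << 18)  /* honor RdMem/WrMem bits */
#define MtrrFixDramModEn        (1u << 19)  /* allow writing RdMem/WrMem */

/* RdMem (bit 4) | WrMem (bit 3) in every 8-bit field of the fixed MTRR:
 * routes the whole A0000-BFFFF range to DRAM. */
#define ASEG_TO_DRAM            0x1818181818181818ULL

/* --- minimal MSR simulation; real code uses rdmsr()/wrmsr() --- */
static uint64_t msr_syscfg, msr_a0000;

static uint64_t sim_rdmsr(uint32_t a)
{
	return a == SYSCFG_MSR ? msr_syscfg : msr_a0000;
}

static void sim_wrmsr(uint32_t a, uint64_t v)
{
	if (a == SYSCFG_MSR)
		msr_syscfg = v;
	else
		msr_a0000 = v;
}

/* Route ASeg to DRAM so the SMM handler can be copied in *before* the
 * SMM logic (ASeg enable) is switched on. Returns the original
 * fixed-MTRR value so the caller can restore it after the copy.
 * In firmware, disable_cache() must bracket this sequence. */
uint64_t open_aseg(void)
{
	uint64_t syscfg = sim_rdmsr(SYSCFG_MSR);
	uint64_t saved  = sim_rdmsr(MTRRfix16K_A0000_MSR);

	/* ModEn must be set before the RdMem/WrMem bits can be written */
	sim_wrmsr(SYSCFG_MSR, syscfg | MtrrFixDramModEn);
	sim_wrmsr(MTRRfix16K_A0000_MSR, ASEG_TO_DRAM);
	/* drop ModEn again, keep FixDramEn so the routing takes effect */
	sim_wrmsr(SYSCFG_MSR,
		  (syscfg & ~(uint64_t)MtrrFixDramModEn) | MtrrFixDramEn);
	return saved;
}
```

After the handler copy, the caller would restore the saved MTRR and SYSCFG values and only then turn the SMM logic on, at which point (per Table 62) the window closes.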




And yes, we need to call the SMM set-base-address code together with the SMM lock...

That depends on what you mean by SMM lock. The smm_lock() function must
be called after all CPU cores have had a chance to set SMBASE to the
correct value.

I think we can set it per core.
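On setting it per core: here is a tiny, purely illustrative sketch of staggered per-core SMBASE values, so each core's save-state area gets its own slot. The 0x400 stride and the downward staggering are assumptions made up for illustration, not what the patch does; the lock itself would then be the SMM_LOCK bit (bit 0) of HWCR (MSR 0xC0010015), set on each core once its SMBASE is final.

```c
/* Hypothetical illustration of per-core SMBASE assignment. Each core
 * needs a distinct save-state area, so the bases are staggered.
 * The 0x400 stride is an assumed spacing for illustration only. */

#include <stdint.h>

#define ASEG_BASE     0xA0000u  /* default ASeg SMBASE */
#define SMBASE_STRIDE 0x400u    /* assumed per-core save-state spacing */

/* SMBASE for a given core; core 0 keeps the default ASeg base. */
uint32_t core_smbase(unsigned int core)
{
	return ASEG_BASE - core * SMBASE_STRIDE;
}
```

The point either way stands: whether the lock bit is set per core or once globally, it must happen only after every core's SMBASE is programmed, since the lock makes SMBASE read-only.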

Please check next mail where I fixed your patch.

Thanks,
Rudolf

--
coreboot mailing list: [email protected]
http://www.coreboot.org/mailman/listinfo/coreboot
