Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address

2019-10-07 Thread Laszlo Ersek
On 10/04/19 13:31, Igor Mammedov wrote:
> On Tue, 1 Oct 2019 20:03:20 +0200
> "Laszlo Ersek"  wrote:

>> (1) What values to use.

> SeaBIOS writes 0x00 into the command port, but it seems that's taken by
> EFI_SMM_COMMUNICATION_PROTOCOL. So we can use the next unused value
> (let's say 0x4). We probably don't have to use the status port or
> EFI_SMM_COMMUNICATION_PROTOCOL, since the value written into 0xB2
> is sufficient to distinguish the hotplug event.

Thanks. Can you please write a QEMU patch for the ACPI generator such
that hotplugging a VCPU writes value 4 to IO port 0xB2?

That will allow me to experiment with OVMF.

(I can experiment with some other parts in edk2 even before that.)
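
For illustration, here's a rough sketch of what that generator change
could look like, using QEMU's AML build helpers from
"hw/acpi/aml-build.h". The "SMIP"/"SMIC" names and the helper function
itself are invented for this sketch; only the idea of storing 0x4 to IO
port 0xB2 from the CPU scan method comes from this thread:

  #include "qemu/osdep.h"
  #include "hw/acpi/aml-build.h"

  static void aml_append_cpu_hotplug_smi(Aml *scope, Aml *scan_method)
  {
      Aml *field;

      /* declare IO port 0xB2 (the APM control port) as a 1-byte
       * operation region, with a single 8-bit field in it */
      aml_append(scope, aml_operation_region("SMIP", AML_SYSTEM_IO,
                                             aml_int(0xb2), 1));
      field = aml_field("SMIP", AML_BYTE_ACC, AML_NOLOCK, AML_PRESERVE);
      aml_append(field, aml_named_field("SMIC", 8));
      aml_append(scope, field);

      /* storing 0x4 here broadcasts an SMI that the firmware can
       * recognize as "a CPU hotplug event happened" */
      aml_append(scan_method, aml_store(aml_int(0x4), aml_name("SMIC")));
  }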

>> (2) How the parameters are passed.
>>
>>
>> (2a) For the new CPU, the SMI remains pending, until it gets an
>> INIT-SIPI-SIPI from one of the previously plugged CPUs (most likely, the
>> BSP). At that point, the new CPU will execute the "initial SMI handler
>> for hotplugged CPUs", at the default SMBASE.
>>
>> That's a routine we'll have to write in assembly, from zero. In this
>> routine, we can read back IO ports 0xB2 and 0xB3. And QEMU will be happy
>> to provide the values last written (see apm_ioport_readb() in
>> "hw/isa/apm.c"). So we can receive the values in this routine. Alright.
> 
> Potentially we can avoid writing a custom SMI handler;
> what do you think about the following workflow:
> 
> on system boot, after the initial CPU relocation, firmware sets a NOP SMI
> handler at the default SMBASE.
> Then, as a reaction to the GPE-triggered SMI (on CPU hotplug), after the SMI
> rendezvous, a host CPU reads IO port 0xB2 and does hotplugged-CPU enumeration.
> 
>   a) assuming we allow hotplug only in case of negotiated SMI broadcast,
>  the host CPU shoots down all in-flight INIT/SIPI/SIPI for hotplugged
>  CPUs to avoid a race within the relocation handler.

How is that "shootdown" possible?

>  After that, the host CPU, in a loop:
> 
>   b) prepares/initializes the necessary CPU structures for a hotplugged
>  CPU and replaces the NOP SMI handler with the relocation SMI handler
>  that is used during system boot.
>  
>   c) a host CPU sends NOP INIT/SIPI/SIPI to the hotplugged CPU
> 
>   d) the woken-up hotplugged CPU jumps to the default SMBASE and
>  executes the hotplug relocation handler.
> 
>   e) after the hotplugged CPU is relocated, and if there are more
>  hotplugged CPUs, the host CPU repeats steps b-d for the next
>  hotplugged CPU.
> 
>   f) after all CPUs are relocated, restore the NOP SMI handler at the
>  default SMBASE.
> 
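
For reference, steps a-f could be sketched in C pseudocode as follows.
Every helper named below is hypothetical, standing in for logic that
would live in the firmware's SMM CPU driver and relocation handler:

  typedef unsigned int APIC_ID;

  /* hypothetical helpers -- placeholders, not edk2 APIs */
  extern unsigned HotpluggedCpuCount(void);
  extern APIC_ID  HotpluggedCpuApicId(unsigned Index);
  extern void InstallDefaultSmbaseHandler(void (*Handler)(void));
  extern void NopSmiHandler(void);
  extern void RelocationSmiHandler(void);
  extern void PrepareCpuStructures(APIC_ID Cpu);
  extern void SendInitSipiSipi(APIC_ID Cpu);
  extern void WaitForRelocation(APIC_ID Cpu);

  static void HandleCpuHotplugSmi(void)
  {
      unsigned Idx;

      /* step a: SMI broadcast has been negotiated, and in-flight
       * INIT/SIPI/SIPI for the new CPUs is assumed shot down */
      for (Idx = 0; Idx < HotpluggedCpuCount(); Idx++) {
          APIC_ID Cpu = HotpluggedCpuApicId(Idx);

          PrepareCpuStructures(Cpu);                         /* step b */
          InstallDefaultSmbaseHandler(RelocationSmiHandler); /* step b */
          SendInitSipiSipi(Cpu);                             /* step c */
          WaitForRelocation(Cpu);                            /* steps d-e */
      }
      InstallDefaultSmbaseHandler(NopSmiHandler);            /* step f */
  }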

Thanks
Laszlo



Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address

2019-10-04 Thread Igor Mammedov
On Tue, 1 Oct 2019 20:03:20 +0200
"Laszlo Ersek"  wrote:

> On 09/30/19 16:22, Yao, Jiewen wrote:
> >   
> >> -Original Message-
> >> From: de...@edk2.groups.io  On Behalf Of Igor
> >> Mammedov
> >> Sent: Monday, September 30, 2019 8:37 PM
> >> To: Laszlo Ersek   
> 
> >>> To me it looks like we need to figure out how QEMU can make the OS call
> >>> into SMM (in the GPE cpu hotplug handler), passing in parameters and
> >>> such. This would be step (03).
> >>>
> >>> Do you agree?
> >>>
> >>> If so, I'll ask Jiewen about such OS->SMM calls separately, because I
> >>> seem to remember that there used to be an "SMM communication table" of
> >>> sorts, for flexible OS->SMM calls. However, it appears to be deprecated
> >>> lately.  
> >> we can try to resurrect it and put some kind of protocol on top,
> >> to describe which CPUs were hotplugged where.
> >>
> >> or we could put a parameter into the SMI status register (IO port 0xb3)
> >> and then trigger an SMI from the GPE handler to tell the SMI handler that
> >> a CPU hotplug happened, and then use QEMU's CPU hotplug interface
> >> to enumerate hotplugged CPUs for the SMI handler.
> >>
> >> The latter is probably simpler, as we won't need to reinvent the wheel
> >> (just reuse the interface that's already in use by the GPE handler).
> 
> Based on "docs/specs/acpi_cpu_hotplug.txt", this seems to boil down to a
> bunch of IO port accesses at base 0x0cd8.
> 
> Is that correct?

Yep, you can use it to iterate over hotplugged CPUs.
The hw side (QEMU) uses cpu_hotplug_ops as the IO write/read handlers,
and on the firmware side (ACPI), scanning for hotplugged CPUs is
implemented in CPU_SCAN_METHOD.

What we can do on the QEMU side is write the agreed-upon value to the command
port (0xB2) from CPU_SCAN_METHOD, after taking ctrl_lock but before starting
the scan loop. That way the firmware will first bring up (from the fw pov) all
hotplugged CPUs, and then return control to the OS to do the same from the
OS pov.
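
As a sketch of the firmware side, the enumeration could look roughly
like this in C. The register offsets follow
"docs/specs/acpi_cpu_hotplug.txt"; inb()/inl()/outb()/outl() stand in
for whatever port-IO primitives the firmware uses, and remove events
are ignored for brevity:

  #include <stdint.h>

  #define CPU_HOTPLUG_BASE  0x0cd8
  #define CPU_SELECTOR      (CPU_HOTPLUG_BASE + 0x0) /* DWORD, write */
  #define CPU_FLAGS         (CPU_HOTPLUG_BASE + 0x4) /* BYTE, r/w    */
  #define CPU_COMMAND       (CPU_HOTPLUG_BASE + 0x5) /* BYTE, write  */
  #define CPU_CMD_DATA      (CPU_HOTPLUG_BASE + 0x8) /* DWORD, read  */

  #define CMD_NEXT_CPU_WITH_EVENT  0
  #define FLAG_INSERT_EVENT        (1u << 1)

  extern uint8_t  inb(uint16_t port);
  extern uint32_t inl(uint16_t port);
  extern void     outb(uint8_t val, uint16_t port);
  extern void     outl(uint32_t val, uint16_t port);

  static void enumerate_hotplugged_cpus(void)
  {
      outl(0, CPU_SELECTOR);            /* start scanning at CPU 0 */
      for (;;) {
          outb(CMD_NEXT_CPU_WITH_EVENT, CPU_COMMAND);
          if (!(inb(CPU_FLAGS) & FLAG_INSERT_EVENT)) {
              break;                    /* no more pending inserts */
          }
          uint32_t cpu = inl(CPU_CMD_DATA); /* selector of that CPU */

          /* ... bring up / relocate this CPU from the fw pov ... */
          (void)cpu;

          outb(FLAG_INSERT_EVENT, CPU_FLAGS); /* ack the insert event */
      }
  }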


> 
> > [Jiewen] The PI specification Volume 4 - SMM defines 
> > EFI_MM_COMMUNICATION_PROTOCOL.Communicate() - It can be used to communicate 
> > between the OS and the SMM handler. But it requires the runtime protocol
> > call. I am not sure how the OS loader passes this information to the OS
> > kernel.
> > 
> > As such, I think using ACPI SCI/GPE -> software SMI handler is an easier 
> > way to achieve this. I also recommend this way.
> > For parameter passing, we can use 1) Port B2 (1 byte), 2) Port B3 (1 byte), 
> > 3) chipset scratch register (4 bytes or 8 bytes, based upon scratch 
> > register size), 4) ACPI NVS OPREGION, if the data structure is complicated. 
> >  
> 
> I'm confused about the details. In two categories:
> (1) what values to use,
> (2) how those values are passed.
> 
> Assume a CPU is hotplugged, QEMU injects an SCI, and the ACPI GPE handler
> in the OS -- which also originates from QEMU -- writes a particular byte
> to the Data port (0xB3), and then to the Control port (0xB2),
> broadcasting an SMI.
> 
> (1) What values to use.
> 
> Note that values ICH9_APM_ACPI_ENABLE (2) and ICH9_APM_ACPI_DISABLE (3)
> are already reserved in QEMU for IO port 0xB2, for different purposes.
> So we can't use those in the GPE handler.

SeaBIOS writes 0x00 into the command port, but it seems that's taken by
EFI_SMM_COMMUNICATION_PROTOCOL. So we can use the next unused value
(let's say 0x4). We probably don't have to use the status port or
EFI_SMM_COMMUNICATION_PROTOCOL, since the value written into 0xB2
is sufficient to distinguish the hotplug event.

> Furthermore, OVMF's EFI_SMM_CONTROL2_PROTOCOL.Trigger() implementation
> (in "OvmfPkg/SmmControl2Dxe/SmmControl2Dxe.c") writes 0 to both ports,
> as long as the caller does not specify them.
> 
>   IoWrite8 (ICH9_APM_STS, DataPort == NULL ? 0 : *DataPort);
>   IoWrite8 (ICH9_APM_CNT, CommandPort == NULL ? 0 : *CommandPort);
> 
> And, there is only one Trigger() call site in edk2: namely in
> "MdeModulePkg/Core/PiSmmCore/PiSmmIpl.c", in the
> SmmCommunicationCommunicate() function.
> 
> (That function implements EFI_SMM_COMMUNICATION_PROTOCOL.Communicate().)
> This call site passes NULL for both DataPort and CommandPort.
> 
> Yet further, EFI_SMM_COMMUNICATION_PROTOCOL.Communicate() is used for
> example by the UEFI variable driver, for talking between the
> unprivileged (runtime DXE) and privileged (SMM) half.
> 
> As a result, all of the software SMIs currently in use in OVMF, related
> to actual firmware services, write 0 to both ports.
> 
> I guess we can choose new values, as long as we avoid 2 and 3 for the
> control port (0xB2), because those are reserved in QEMU -- see
> ich9_apm_ctrl_changed() in "hw/isa/lpc_ich9.c".
> 
> 
> (2) How the parameters are passed.
> 
> 
> (2a) For the new CPU, the SMI remains pending, until it gets an
> INIT-SIPI-SIPI from one of the previously plugged CPUs (most likely, the
> BSP). At that point, the new CPU will execute the "initial SMI handler
> for hotplugged CPUs", at the default SMBASE.
> 
> That's a routine we'll have to write in assembly, from zero. In 

Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address

2019-10-01 Thread Laszlo Ersek
On 09/30/19 16:22, Yao, Jiewen wrote:
> 
>> -Original Message-
>> From: de...@edk2.groups.io  On Behalf Of Igor
>> Mammedov
>> Sent: Monday, September 30, 2019 8:37 PM
>> To: Laszlo Ersek 

>>> To me it looks like we need to figure out how QEMU can make the OS call
>>> into SMM (in the GPE cpu hotplug handler), passing in parameters and
>>> such. This would be step (03).
>>>
>>> Do you agree?
>>>
>>> If so, I'll ask Jiewen about such OS->SMM calls separately, because I
>>> seem to remember that there used to be an "SMM communication table" of
>>> sorts, for flexible OS->SMM calls. However, it appears to be deprecated
>>> lately.
>> we can try to resurrect it and put some kind of protocol on top,
>> to describe which CPUs were hotplugged where.
>>
>> or we could put a parameter into the SMI status register (IO port 0xb3)
>> and then trigger an SMI from the GPE handler to tell the SMI handler that
>> a CPU hotplug happened, and then use QEMU's CPU hotplug interface
>> to enumerate hotplugged CPUs for the SMI handler.
>>
>> The latter is probably simpler, as we won't need to reinvent the wheel
>> (just reuse the interface that's already in use by the GPE handler).

Based on "docs/specs/acpi_cpu_hotplug.txt", this seems to boil down to a
bunch of IO port accesses at base 0x0cd8.

Is that correct?

> [Jiewen] The PI specification Volume 4 - SMM defines 
> EFI_MM_COMMUNICATION_PROTOCOL.Communicate() - It can be used to communicate 
> between the OS and the SMM handler. But it requires the runtime protocol call.
> I am not sure how the OS loader passes this information to the OS kernel.
> 
> As such, I think using ACPI SCI/GPE -> software SMI handler is an easier way 
> to achieve this. I also recommend this way.
> For parameter passing, we can use 1) Port B2 (1 byte), 2) Port B3 (1 byte), 
> 3) chipset scratch register (4 bytes or 8 bytes, based upon scratch register 
> size), 4) ACPI NVS OPREGION, if the data structure is complicated.

I'm confused about the details. In two categories:
(1) what values to use,
(2) how those values are passed.

Assume a CPU is hotplugged, QEMU injects an SCI, and the ACPI GPE handler
in the OS -- which also originates from QEMU -- writes a particular byte
to the Data port (0xB3), and then to the Control port (0xB2),
broadcasting an SMI.

(1) What values to use.

Note that values ICH9_APM_ACPI_ENABLE (2) and ICH9_APM_ACPI_DISABLE (3)
are already reserved in QEMU for IO port 0xB2, for different purposes.
So we can't use those in the GPE handler.

Furthermore, OVMF's EFI_SMM_CONTROL2_PROTOCOL.Trigger() implementation
(in "OvmfPkg/SmmControl2Dxe/SmmControl2Dxe.c") writes 0 to both ports,
as long as the caller does not specify them.

  IoWrite8 (ICH9_APM_STS, DataPort == NULL ? 0 : *DataPort);
  IoWrite8 (ICH9_APM_CNT, CommandPort == NULL ? 0 : *CommandPort);

And, there is only one Trigger() call site in edk2: namely in
"MdeModulePkg/Core/PiSmmCore/PiSmmIpl.c", in the
SmmCommunicationCommunicate() function.

(That function implements EFI_SMM_COMMUNICATION_PROTOCOL.Communicate().)
This call site passes NULL for both DataPort and CommandPort.

Yet further, EFI_SMM_COMMUNICATION_PROTOCOL.Communicate() is used for
example by the UEFI variable driver, for talking between the
unprivileged (runtime DXE) and privileged (SMM) half.

As a result, all of the software SMIs currently in use in OVMF, related
to actual firmware services, write 0 to both ports.

I guess we can choose new values, as long as we avoid 2 and 3 for the
control port (0xB2), because those are reserved in QEMU -- see
ich9_apm_ctrl_changed() in "hw/isa/lpc_ich9.c".


(2) How the parameters are passed.


(2a) For the new CPU, the SMI remains pending, until it gets an
INIT-SIPI-SIPI from one of the previously plugged CPUs (most likely, the
BSP). At that point, the new CPU will execute the "initial SMI handler
for hotplugged CPUs", at the default SMBASE.

That's a routine we'll have to write in assembly, from zero. In this
routine, we can read back IO ports 0xB2 and 0xB3. And QEMU will be happy
to provide the values last written (see apm_ioport_readb() in
"hw/isa/apm.c"). So we can receive the values in this routine. Alright.


(2b) On all other CPUs, the SMM foundation already accepts the SMI.

The place where it makes sense to start looking is SmmEntryPoint()
[MdeModulePkg/Core/PiSmmCore/PiSmmCore.c].

(2b1) This function first checks whether the SMI is synchronous. The
logic for determining that is based on
"gSmmCorePrivate->CommunicationBuffer" being non-NULL. This field is set
to non-NULL in SmmCommunicationCommunicate() -- see above, in (1).

In other words, the SMI is deemed synchronous if it was initiated with
EFI_SMM_COMMUNICATION_PROTOCOL.Communicate(). In that case, the
HandlerType GUID is extracted from the communication buffer, and passed
to SmiManage(). In turn, SmiManage() locates the SMI handler registered
with the same handler GUID, and delegates the SMI handling to that
specific handler.
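
Paraphrased in C, that dispatch boils down to roughly the following
(this is not the literal edk2 code -- buffer validation and error
handling are omitted, and the declarations come from PiSmmCore's
internal headers):

  EFI_SMM_COMMUNICATE_HEADER  *CommHeader;

  if (gSmmCorePrivate->CommunicationBuffer != NULL) {
    //
    // Synchronous SMI, initiated via
    // EFI_SMM_COMMUNICATION_PROTOCOL.Communicate(): dispatch by the
    // HandlerType GUID embedded in the communication buffer.
    //
    CommHeader = (EFI_SMM_COMMUNICATE_HEADER *)
                   gSmmCorePrivate->CommunicationBuffer;
    SmiManage (&CommHeader->HeaderGuid, NULL, CommHeader->Data,
      &gSmmCorePrivate->BufferSize);
  } else {
    //
    // Asynchronous (e.g. hardware / broadcast) SMI: let the root SMI
    // handlers figure out what happened.
    //
    SmiManage (NULL, NULL, NULL, NULL);
  }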

This is how the UEFI variable driver is split in two 

RE: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address

2019-09-30 Thread Yao, Jiewen
below

> -Original Message-
> From: de...@edk2.groups.io  On Behalf Of Igor
> Mammedov
> Sent: Monday, September 30, 2019 8:37 PM
> To: Laszlo Ersek 
> Cc: de...@edk2.groups.io; qemu-devel@nongnu.org; Chen, Yingwen
> ; phillip.go...@oracle.com;
> alex.william...@redhat.com; Yao, Jiewen ; Nakajima,
> Jun ; Kinney, Michael D
> ; pbonz...@redhat.com;
> boris.ostrov...@oracle.com; r...@edk2.groups.io; joao.m.mart...@oracle.com;
> Brijesh Singh 
> Subject: Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K
> SMRAM at default SMBASE address
> 
> On Mon, 30 Sep 2019 13:51:46 +0200
> "Laszlo Ersek"  wrote:
> 
> > Hi Igor,
> >
> > On 09/24/19 13:19, Igor Mammedov wrote:
> > > On Mon, 23 Sep 2019 20:35:02 +0200
> > > "Laszlo Ersek"  wrote:
> >
> > >> I've got good results. For this (1/2) QEMU patch:
> > >>
> > >> Tested-by: Laszlo Ersek 
> > >>
> > >> I tested the following scenarios. In every case, I verified the OVMF
> > >> log, and also the "info mtree" monitor command's result (i.e. whether
> > >> "smbase-blackhole" / "smbase-window" were disabled or enabled).
> > >> Mostly, I diffed these text files between the test scenarios (looking
> > >> for desired / undesired differences). In the Linux guests, I checked
> > >> / compared the dmesg too (wrt. the UEFI memmap).
> > >>
> > >> - unpatched OVMF (regression test), Fedora guest, normal boot and S3
> > >>
> > >> - patched OVMF, but feature disabled with "-global
> > >>   mch.smbase-smram=off" (another regression test), Fedora guest,
> > >>   normal boot and S3
> > >>
> > >> - patched OVMF, feature enabled, Fedora and various Windows guests
> > >>   (win7, win8, win10 families, client/server), normal boot and S3
> > >>
> > >> - a subset of the above guests, with S3 disabled (-global
> > >>   ICH9-LPC.disable_s3=1), and obviously S3 resume not tested
> > >>
> > >> SEV: used 5.2-ish Linux guest, with S3 disabled (no support under SEV
> > >> for that now):
> > >>
> > >> - unpatched OVMF (regression test), normal boot
> > >>
> > >> - patched OVMF but feature disabled on the QEMU cmdline (another
> > >>   regression test), normal boot
> > >>
> > >> - patched OVMF, feature enabled, normal boot.
> > >>
> > >> I plan to post the OVMF patches tomorrow, for discussion.
> > >>
> > >> (It's likely too early to push these QEMU / edk2 patches right now --
> > >> we don't know yet if this path will take us to the destination. For
> > >> now, it certainly looks great.)
> > >
> > > Laszlo, thanks for trying it out.
> > > It's nice to hear that approach is somewhat usable.
> > > Hopefully we won't have to invent 'paused' cpu mode.
> > >
> > > Pls CC me on your patches
> > > (not that I qualify for reviewing,
> > but maybe I could learn a thing or two from it)
> >
> > Considering the plan at [1], the two patch sets [2] [3] should cover
> > step (01); at least as proof of concept.
> >
> > [1] [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
> > http://mid.mail-archive.com/20190830164802.1b17ff26@redhat.com
> >
> > [2] The current thread:
> > [Qemu-devel] [PATCH 0/2] q35: mch: allow to lock down 128K RAM at
> > default SMBASE address
> > http://mid.mail-archive.com/20190917130708.10281-1-imamm...@redhat.com
> >
> > [3] [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default
> > SMBASE" feature
> > http://mid.mail-archive.com/20190924113505.27272-1-lersek@redhat.com
> >
> > (I'll have to figure out what SMI handler to put in place there, but I'd
> > like to experiment with that once we can cause a new CPU to start
> > executing code there, in SMM.)
> >
> > So what's next?
> >
> > To me it looks like we need to figure out how QEMU can make the OS call
> > into SMM (in the GPE cpu hotplug handler), passing in parameters and
> > such. This would be step (03).
> >
> > Do you agree?
> >
> > If so, I'll ask Jiewen about such OS->SMM calls separately, because I
> > seem to remember that there used to be an "SMM communication table" of
> > sorts, for flexible OS->SMM calls. However, it appears to be deprecated
> > lately.
> we can try to resurrect and put over 

Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address

2019-09-30 Thread Igor Mammedov
On Mon, 30 Sep 2019 13:51:46 +0200
"Laszlo Ersek"  wrote:

> Hi Igor,
> 
> On 09/24/19 13:19, Igor Mammedov wrote:
> > On Mon, 23 Sep 2019 20:35:02 +0200
> > "Laszlo Ersek"  wrote:  
> 
> >> I've got good results. For this (1/2) QEMU patch:
> >>
> >> Tested-by: Laszlo Ersek 
> >>
> >> I tested the following scenarios. In every case, I verified the OVMF
> >> log, and also the "info mtree" monitor command's result (i.e. whether
> >> "smbase-blackhole" / "smbase-window" were disabled or enabled).
> >> Mostly, I diffed these text files between the test scenarios (looking
> >> for desired / undesired differences). In the Linux guests, I checked
> >> / compared the dmesg too (wrt. the UEFI memmap).
> >>
> >> - unpatched OVMF (regression test), Fedora guest, normal boot and S3
> >>
> >> - patched OVMF, but feature disabled with "-global
> >>   mch.smbase-smram=off" (another regression test), Fedora guest,
> >>   normal boot and S3
> >>
> >> - patched OVMF, feature enabled, Fedora and various Windows guests
> >>   (win7, win8, win10 families, client/server), normal boot and S3
> >>
> >> - a subset of the above guests, with S3 disabled (-global
> >>   ICH9-LPC.disable_s3=1), and obviously S3 resume not tested
> >>
> >> SEV: used 5.2-ish Linux guest, with S3 disabled (no support under SEV
> >> for that now):
> >>
> >> - unpatched OVMF (regression test), normal boot
> >>
> >> - patched OVMF but feature disabled on the QEMU cmdline (another
> >>   regression test), normal boot
> >>
> >> - patched OVMF, feature enabled, normal boot.
> >>
> >> I plan to post the OVMF patches tomorrow, for discussion.
> >>
> >> (It's likely too early to push these QEMU / edk2 patches right now --
> >> we don't know yet if this path will take us to the destination. For
> >> now, it certainly looks great.)  
> >
> > Laszlo, thanks for trying it out.
> > It's nice to hear that approach is somewhat usable.
> > Hopefully we won't have to invent 'paused' cpu mode.
> >
> > Pls CC me on your patches
> > (not that I qualify for reviewing,
> > but maybe I could learn a thing or two from it)
> 
> Considering the plan at [1], the two patch sets [2] [3] should cover
> step (01); at least as proof of concept.
> 
> [1] [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
> http://mid.mail-archive.com/20190830164802.1b17ff26@redhat.com
> 
> [2] The current thread:
> [Qemu-devel] [PATCH 0/2] q35: mch: allow to lock down 128K RAM at default 
> SMBASE address
> http://mid.mail-archive.com/20190917130708.10281-1-imammedo@redhat.com
> 
> [3] [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default 
> SMBASE" feature
> http://mid.mail-archive.com/20190924113505.27272-1-lersek@redhat.com
> 
> (I'll have to figure out what SMI handler to put in place there, but I'd
> like to experiment with that once we can cause a new CPU to start
> executing code there, in SMM.)
> 
> So what's next?
> 
> To me it looks like we need to figure out how QEMU can make the OS call
> into SMM (in the GPE cpu hotplug handler), passing in parameters and
> such. This would be step (03).
> 
> Do you agree?
> 
> If so, I'll ask Jiewen about such OS->SMM calls separately, because I
> seem to remember that there used to be an "SMM communication table" of
> sorts, for flexible OS->SMM calls. However, it appears to be deprecated
> lately.
we can try to resurrect it and put some kind of protocol on top,
to describe which CPUs were hotplugged where.

or we could put a parameter into the SMI status register (IO port 0xb3)
and then trigger an SMI from the GPE handler to tell the SMI handler that
a CPU hotplug happened, and then use QEMU's CPU hotplug interface
to enumerate hotplugged CPUs for the SMI handler.

The latter is probably simpler, as we won't need to reinvent the wheel
(just reuse the interface that's already in use by the GPE handler).

> Hmmm... Yes, UEFI 2.8 has "Appendix O - UEFI ACPI Data Table", and it
> writes (after defining the table format):
> 
> The first use of this UEFI ACPI table format is the SMM
> Communication ACPI Table. This table describes a special software
> SMI that can be used to initiate inter-mode communication in the OS
> present environment by non-firmware agents with SMM code.
> 
> Note: The use of the SMM Communication ACPI table is deprecated in
>   UEFI spec. 2.7. This is due to the lack of a use case for
>   inter-mode communication by non-firmware agents with SMM code
>   and support for initiating this form of communication in
>   common OSes.
> 
> The changelog at the front of the UEFI spec also references the
> Mantis#1691 spec ticket, "Remove/Deprecate SMM Communication ACPI Table"
> (addressed in UEFI 2.6B).
> 
> (I think that must have been a security ticket, because, while I
> generally have access to Mantis tickets, that one gives me "Access
> Denied" :/ )
> 
> Thanks,
> Laszlo

Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address

2019-09-30 Thread Laszlo Ersek
Hi Igor,

On 09/24/19 13:19, Igor Mammedov wrote:
> On Mon, 23 Sep 2019 20:35:02 +0200
> "Laszlo Ersek"  wrote:

>> I've got good results. For this (1/2) QEMU patch:
>>
>> Tested-by: Laszlo Ersek 
>>
>> I tested the following scenarios. In every case, I verified the OVMF
>> log, and also the "info mtree" monitor command's result (i.e. whether
>> "smbase-blackhole" / "smbase-window" were disabled or enabled).
>> Mostly, I diffed these text files between the test scenarios (looking
>> for desired / undesired differences). In the Linux guests, I checked
>> / compared the dmesg too (wrt. the UEFI memmap).
>>
>> - unpatched OVMF (regression test), Fedora guest, normal boot and S3
>>
>> - patched OVMF, but feature disabled with "-global
>>   mch.smbase-smram=off" (another regression test), Fedora guest,
>>   normal boot and S3
>>
>> - patched OVMF, feature enabled, Fedora and various Windows guests
>>   (win7, win8, win10 families, client/server), normal boot and S3
>>
>> - a subset of the above guests, with S3 disabled (-global
>>   ICH9-LPC.disable_s3=1), and obviously S3 resume not tested
>>
>> SEV: used 5.2-ish Linux guest, with S3 disabled (no support under SEV
>> for that now):
>>
>> - unpatched OVMF (regression test), normal boot
>>
>> - patched OVMF but feature disabled on the QEMU cmdline (another
>>   regression test), normal boot
>>
>> - patched OVMF, feature enabled, normal boot.
>>
>> I plan to post the OVMF patches tomorrow, for discussion.
>>
>> (It's likely too early to push these QEMU / edk2 patches right now --
>> we don't know yet if this path will take us to the destination. For
>> now, it certainly looks great.)
>
> Laszlo, thanks for trying it out.
> It's nice to hear that approach is somewhat usable.
> Hopefully we won't have to invent 'paused' cpu mode.
>
> Pls CC me on your patches
> (not that I qualify for reviewing,
> but maybe I could learn a thing or two from it)

Considering the plan at [1], the two patch sets [2] [3] should cover
step (01); at least as proof of concept.

[1] [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
http://mid.mail-archive.com/20190830164802.1b17ff26@redhat.com

[2] The current thread:
[Qemu-devel] [PATCH 0/2] q35: mch: allow to lock down 128K RAM at default 
SMBASE address
http://mid.mail-archive.com/20190917130708.10281-1-imammedo@redhat.com

[3] [edk2-devel] [PATCH wave 1 00/10] support QEMU's "SMRAM at default SMBASE" 
feature
http://mid.mail-archive.com/20190924113505.27272-1-lersek@redhat.com

(I'll have to figure out what SMI handler to put in place there, but I'd
like to experiment with that once we can cause a new CPU to start
executing code there, in SMM.)

So what's next?

To me it looks like we need to figure out how QEMU can make the OS call
into SMM (in the GPE cpu hotplug handler), passing in parameters and
such. This would be step (03).

Do you agree?

If so, I'll ask Jiewen about such OS->SMM calls separately, because I
seem to remember that there used to be an "SMM communication table" of
sorts, for flexible OS->SMM calls. However, it appears to be deprecated
lately.

Hmmm... Yes, UEFI 2.8 has "Appendix O - UEFI ACPI Data Table", and it
writes (after defining the table format):

The first use of this UEFI ACPI table format is the SMM
Communication ACPI Table. This table describes a special software
SMI that can be used to initiate inter-mode communication in the OS
present environment by non-firmware agents with SMM code.

Note: The use of the SMM Communication ACPI table is deprecated in
  UEFI spec. 2.7. This is due to the lack of a use case for
  inter-mode communication by non-firmware agents with SMM code
  and support for initiating this form of communication in
  common OSes.

The changelog at the front of the UEFI spec also references the
Mantis#1691 spec ticket, "Remove/Deprecate SMM Communication ACPI Table"
(addressed in UEFI 2.6B).

(I think that must have been a security ticket, because, while I
generally have access to Mantis tickets, that one gives me "Access
Denied" :/ )

Thanks,
Laszlo



Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address

2019-09-24 Thread Paolo Bonzini
On 20/09/19 11:28, Laszlo Ersek wrote:
>> On the QEMU side, we can drop the black-hole approach and allocate a
>> dedicated SMRAM region, which explicitly gets mapped into the
>> RAM address space and, after SMI handler initialization, gets
>> unmapped (locked), so that SMRAM would be accessible only
>> from the SMM context. That way RAM at 0x3_0000 could be used as
>> normal when SMRAM is unmapped.
>
> I prefer the black-hole approach, introduced in your current patch
> series, if it can work. Way less opportunity for confusion.

Another possibility would be to alias the 0xA0000..0xBFFFF SMRAM to
0x30000..0x4FFFF (only when in SMM).

I'm not super enthusiastic about adding this kind of QEMU-only feature.
The alternative would be to implement VT-d range locking through the
intel-iommu device's PCI configuration space (which includes _adding_
the configuration space, i.e. making the IOMMU a PCI device in the first
place, and the support to the firmware for configuring the VT-d BAR at
0xfed90000).  This would be the right way to do it, but it would entail
a lot of work throughout the stack. :(  So I guess some variant of this
would be okay, as long as it's peppered with "this is not how real
hardware does it" comments in both QEMU and EDK2.
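
For concreteness, a sketch of the alias idea (the names follow
"hw/pci-host/q35.c" loosely; this is not a literal patch):

  static void mch_init_smbase_smram_alias(MCHPCIState *mch)
  {
      MemoryRegion *alias = g_new(MemoryRegion, 1);

      /* mirror the legacy SMRAM (RAM at 0xA0000, 128 KiB)... */
      memory_region_init_alias(alias, OBJECT(mch), "smbase-smram",
                               mch->ram_memory, 0xa0000, 0x20000);
      /* ...at the default SMBASE, inside the container that the CPU
       * maps only while in SMM, so non-SMM code never sees it */
      memory_region_add_subregion(&mch->smram, 0x30000, alias);
  }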

Thanks,

Paolo

> I've started work on the counterpart OVMF patches; I'll report back.




Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address

2019-09-24 Thread Igor Mammedov
On Mon, 23 Sep 2019 20:35:02 +0200
"Laszlo Ersek"  wrote:

> On 09/20/19 11:28, Laszlo Ersek wrote:
> > On 09/20/19 10:28, Igor Mammedov wrote:  
> >> On Thu, 19 Sep 2019 19:02:07 +0200
> >> "Laszlo Ersek"  wrote:
> >>  
> >>> Hi Igor,
> >>>
> >>> (+Brijesh)
> >>>
> >>> long-ish pondering ahead, with a question at the end.  
> >> [...]
> >>  
> >>> Finally: can you please remind me why we lock down 128KB (32 pages) at
> > >>> 0x3_0000, and not just half of that? What do we need the range at
> > >>> [0x4_0000..0x4_FFFF] for?
> >>
> >>
> > >> If I recall correctly, the CPU consumes 64K for the save/restore area.
> > >> The remaining 64K is temporary RAM for use in the SMI relocation handler;
> > >> if it's possible to get away without it, then we can drop it and
> > >> lock only the 64K required for the CPU state. It won't help with the SEV
> > >> conflict though, as that is in the first 64K.
> > 
> > OK. Let's go with 128KB for now. Shrinking the area is always easier
> > than growing it.
> >   
> >> On the QEMU side, we can drop the black-hole approach and allocate a
> >> dedicated SMRAM region, which explicitly gets mapped into the
> >> RAM address space and, after SMI handler initialization, gets
> >> unmapped (locked), so that SMRAM would be accessible only
> >> from the SMM context. That way RAM at 0x3_0000 could be used as
> >> normal when SMRAM is unmapped.
> > 
> > I prefer the black-hole approach, introduced in your current patch
> > series, if it can work. Way less opportunity for confusion.
> > 
> > I've started work on the counterpart OVMF patches; I'll report back.  
> 
> I've got good results. For this (1/2) QEMU patch:
> 
> Tested-by: Laszlo Ersek 
> 
> I tested the following scenarios. In every case, I verified the OVMF
> log, and also the "info mtree" monitor command's result (i.e. whether
> "smbase-blackhole" / "smbase-window" were disabled or enabled). Mostly,
> I diffed these text files between the test scenarios (looking for
> desired / undesired differences). In the Linux guests, I checked /
> compared the dmesg too (wrt. the UEFI memmap).
> 
> - unpatched OVMF (regression test), Fedora guest, normal boot and S3
> 
> - patched OVMF, but feature disabled with "-global mch.smbase-smram=off"
> (another regression test), Fedora guest, normal boot and S3
> 
> - patched OVMF, feature enabled, Fedora and various Windows guests
> (win7, win8, win10 families, client/server), normal boot and S3
> 
> - a subset of the above guests, with S3 disabled (-global
>   ICH9-LPC.disable_s3=1), and obviously S3 resume not tested
> 
> SEV: used 5.2-ish Linux guest, with S3 disabled (no support under SEV
> for that now):
> 
> - unpatched OVMF (regression test), normal boot
> 
> - patched OVMF but feature disabled on the QEMU cmdline (another
> regression test), normal boot
> 
> - patched OVMF, feature enabled, normal boot.
> 
> I plan to post the OVMF patches tomorrow, for discussion.
> 
> (It's likely too early to push these QEMU / edk2 patches right now -- we
> don't know yet if this path will take us to the destination. For now, it
> certainly looks great.)

Laszlo, thanks for trying it out.
It's nice to hear that approach is somewhat usable.
Hopefully we won't have to invent 'paused' cpu mode.

Pls CC me on your patches
(not that I qualify for reviewing,
but maybe I could learn a thing or two from it)

> Thanks
> Laszlo




Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address

2019-09-23 Thread Laszlo Ersek
On 09/20/19 11:28, Laszlo Ersek wrote:
> On 09/20/19 10:28, Igor Mammedov wrote:
>> On Thu, 19 Sep 2019 19:02:07 +0200
>> "Laszlo Ersek"  wrote:
>>
>>> Hi Igor,
>>>
>>> (+Brijesh)
>>>
>>> long-ish pondering ahead, with a question at the end.
>> [...]
>>
>>> Finally: can you please remind me why we lock down 128KB (32 pages) at
>>> 0x3_0000, and not just half of that? What do we need the range at
>>> [0x4_0000..0x4_FFFF] for?
>>
>>
>> If I recall correctly, the CPU consumes 64K for the save/restore area.
>> The remaining 64K is temporary RAM for use in the SMI relocation handler;
>> if it's possible to get away without it, then we can drop it and
>> lock only the 64K required for the CPU state. It won't help with the SEV
>> conflict though, as that is in the first 64K.
> 
> OK. Let's go with 128KB for now. Shrinking the area is always easier
> than growing it.
> 
>> On the QEMU side, we can drop the black-hole approach and allocate a
>> dedicated SMRAM region, which explicitly gets mapped into the
>> RAM address space and, after SMI handler initialization, gets
>> unmapped (locked), so that SMRAM would be accessible only
>> from the SMM context. That way RAM at 0x3_0000 could be used as
>> normal when SMRAM is unmapped.
> 
> I prefer the black-hole approach, introduced in your current patch
> series, if it can work. Way less opportunity for confusion.
> 
> I've started work on the counterpart OVMF patches; I'll report back.

I've got good results. For this (1/2) QEMU patch:

Tested-by: Laszlo Ersek 

I tested the following scenarios. In every case, I verified the OVMF
log, and also the "info mtree" monitor command's result (i.e. whether
"smbase-blackhole" / "smbase-window" were disabled or enabled). Mostly,
I diffed these text files between the test scenarios (looking for
desired / undesired differences). In the Linux guests, I checked /
compared the dmesg too (wrt. the UEFI memmap).

- unpatched OVMF (regression test), Fedora guest, normal boot and S3

- patched OVMF, but feature disabled with "-global mch.smbase-smram=off"
(another regression test), Fedora guest, normal boot and S3

- patched OVMF, feature enabled, Fedora and various Windows guests
(win7, win8, win10 families, client/server), normal boot and S3

- a subset of the above guests, with S3 disabled (-global
  ICH9-LPC.disable_s3=1), and obviously S3 resume not tested

SEV: used 5.2-ish Linux guest, with S3 disabled (no support under SEV
for that now):

- unpatched OVMF (regression test), normal boot

- patched OVMF but feature disabled on the QEMU cmdline (another
regression test), normal boot

- patched OVMF, feature enabled, normal boot.

I plan to post the OVMF patches tomorrow, for discussion.

(It's likely too early to push these QEMU / edk2 patches right now -- we
don't know yet if this path will take us to the destination. For now, it
certainly looks great.)

Thanks
Laszlo



Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address

2019-09-20 Thread Laszlo Ersek
On 09/20/19 10:28, Igor Mammedov wrote:
> On Thu, 19 Sep 2019 19:02:07 +0200
> "Laszlo Ersek"  wrote:
> 
>> Hi Igor,
>>
>> (+Brijesh)
>>
>> long-ish pondering ahead, with a question at the end.
> [...]
> 
>> Finally: can you please remind me why we lock down 128KB (32 pages) at
>> 0x3_0000, and not just half of that? What do we need the range at
>> [0x4_0000..0x4_FFFF] for?
> 
> 
> If I recall correctly, the CPU consumes 64K for the save/restore area.
> The remaining 64K is temporary RAM for use in the SMI relocation handler;
> if it's possible to get away without it, then we can drop it and
> lock only the 64K required for the CPU state. It won't help with the SEV
> conflict though, as that is in the first 64K.

OK. Let's go with 128KB for now. Shrinking the area is always easier
than growing it.

> On the QEMU side, we can drop the black-hole approach and allocate a
> dedicated SMRAM region, which explicitly gets mapped into the
> RAM address space and, after SMI handler initialization, gets
> unmapped (locked), so that SMRAM would be accessible only
> from the SMM context. That way RAM at 0x3_0000 could be used as
> normal when SMRAM is unmapped.

I prefer the black-hole approach, introduced in your current patch
series, if it can work. Way less opportunity for confusion.

I've started work on the counterpart OVMF patches; I'll report back.

Thanks
Laszlo



Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address

2019-09-20 Thread Igor Mammedov
On Thu, 19 Sep 2019 19:02:07 +0200
"Laszlo Ersek"  wrote:

> Hi Igor,
> 
> (+Brijesh)
> 
> long-ish pondering ahead, with a question at the end.
[...]

> Finally: can you please remind me why we lock down 128KB (32 pages) at
> 0x3_0000, and not just half of that? What do we need the range at
> [0x4_0000..0x4_FFFF] for?


If I recall correctly, the CPU consumes 64K for the save/restore area.
The remaining 64K is temporary RAM for use in the SMI relocation handler;
if it's possible to get away without it, then we can drop it and
lock only the 64K required for the CPU state. It won't help with the SEV
conflict though, as that is in the first 64K.
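
Spelled out in C terms (the default SMBASE value and the save-state
placement are architectural; the 64K + 64K split is from the
description above):

  #define DEFAULT_SMBASE  0x30000  /* architectural default SMBASE     */
  #define SMRAM_SIZE      0x20000  /* 128 KiB locked at DEFAULT_SMBASE */

  /* 0x30000..0x3FFFF: SMI entry point at SMBASE + 0x8000; CPU save
   *                   state area ending at SMBASE + 0xFFFF           */
  /* 0x40000..0x4FFFF: temporary/scratch RAM for the relocation
   *                   handler (the part that might be dropped)       */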

On the QEMU side, we can drop the black-hole approach and allocate a
dedicated SMRAM region, which explicitly gets mapped into the
RAM address space and, after SMI handler initialization, gets
unmapped (locked), so that SMRAM would be accessible only
from the SMM context. That way RAM at 0x3_0000 could be used as
normal when SMRAM is unmapped.
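
A sketch of that alternative, in QEMU terms (all names are
illustrative, not an actual patch): back the region with dedicated
RAM, map it into the SMM-only view permanently, and expose it to the
normal address space only through an alias ("window") that gets
disabled when the firmware locks it down:

  #include "qemu/osdep.h"
  #include "qapi/error.h"
  #include "exec/memory.h"

  typedef struct {
      MemoryRegion smram;   /* dedicated 128 KiB backing RAM      */
      MemoryRegion window;  /* temporary view in normal RAM space */
  } SmbaseSmramState;

  static void smbase_smram_setup(SmbaseSmramState *s, Object *owner,
                                 MemoryRegion *system_memory,
                                 MemoryRegion *smm_memory)
  {
      memory_region_init_ram(&s->smram, owner, "smbase-smram",
                             0x20000, &error_fatal);
      /* always visible to code running in SMM context */
      memory_region_add_subregion(smm_memory, 0x30000, &s->smram);

      /* temporarily visible to everyone, so the firmware can install
       * the initial SMI handler; priority 1 shadows the RAM below */
      memory_region_init_alias(&s->window, owner, "smbase-window",
                               &s->smram, 0, 0x20000);
      memory_region_add_subregion_overlap(system_memory, 0x30000,
                                          &s->window, 1);
  }

  /* triggered by a "lock" register write from the firmware */
  static void smbase_smram_lock(SmbaseSmramState *s)
  {
      memory_region_set_enabled(&s->window, false);
  }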