Re: [PATCH v8 07/10] ACPI ERST: create ACPI ERST table for pc/x86 machines

2021-10-23 Thread Boris Ostrovsky



On 10/23/21 4:14 PM, Michael S. Tsirkin wrote:

On Sat, Oct 23, 2021 at 07:52:21AM +0530, Ani Sinha wrote:


On Fri, 22 Oct 2021, Eric DeVolder wrote:


Ani, inline below.
eric

On 10/22/21 05:18, Ani Sinha wrote:


On Fri, 15 Oct 2021, Eric DeVolder wrote:



diff --git a/hw/i386/acpi-microvm.c b/hw/i386/acpi-microvm.c

I do not think we need to include this for microvm machines. They are
supposed to have minimal ACPI support. So let's not bloat it unless there
is a specific requirement to support ERST on microvms as well.

Would it be ok if I ifdef this on CONFIG_ERST also?

I think we should not touch the microvm machine unless you can justify why
you need ERST support there.

OTOH why not? No idea... CC microvm maintainers and let them decide.



I would argue that ERST support for microvm is in fact more useful than for 
"regular" VMs: those VMs can use EFI storage for pstore while microvms won't 
have that option.


-boris
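
As a rough illustration of the compile-time gating idea floated above, here is
a minimal sketch; the CONFIG_ERST symbol and the helper name are purely
illustrative and not taken from the actual QEMU patches:

/* sketch: a build without CONFIG_ERST compiles the hook away entirely */
#include <stdio.h>

#ifdef CONFIG_ERST
static void acpi_setup_erst(void)
{
    printf("adding ACPI ERST table\n");
}
#else
static void acpi_setup_erst(void)
{
    /* compiled out: microvm builds without ERST pay nothing for it */
}
#endif

int main(void)
{
    acpi_setup_erst();
    return 0;
}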




Re: [Qemu-devel] [POC Seabios PATCH] seabios: use isolated SMM address space for relocation

2019-08-26 Thread Boris Ostrovsky
On 8/26/19 9:57 AM, Igor Mammedov wrote:
>
>> I most likely don't understand how this is supposed to work but aren't
>> we here successfully reading SMRAM from non-SMM context, something we
>> are not supposed to be able to do?
> We aren't reading SMRAM at the 0x30000 base directly.
> The "RAM"-marked log lines are non-SMM context reads using as base
>   BUILD_SMM_INIT_ADDR   0x30000
> and, as you see, they don't show anything from SMRAM.
>
> For mgmt/demo purposes SMRAM (which is at 0x30000 in the SMM address space)
> is also aliased at
>   BUILD_SMM_ADDR        0xa0000
> into the non-SMM address space to allow us to initialize the SMM entry
> point (log entries are marked as "SMRAM").



OK, I then misunderstood the purpose of this demo. I thought you were
not supposed to be able to read it from either location in non-SMM mode.

Thanks for the explanation.

-boris

>
> Aliased SMRAM also allows us to check that relocation worked
> (i.e. smm_base was relocated from the default "handle_smi cmd=0 smbase=0x00030000"
> to the new one, "smm_relocate: SMRAM  cpu.i64.smm_base  a0000").
>
>
> It's similar to what we do with TSEG, where QEMU steals RAM from the
> normal address space and puts the MMIO region 'tseg_blackhole' over it,
> so a non-SMM context reads 0xFF from the TSEG window while an SMM
> context accesses the RAM hidden below tseg_blackhole.
>
> These patches show that we can have normal usable RAM at 0x30000
> which doesn't overlap with SMRAM at the same address, and each can
> be made accessible only from its own mode (non-SMM and SMM).
> This prevents non-SMM code from injecting an attack into SMRAM via a
> CPU that hasn't been initialized yet, once firmware has locked down SMRAM.
>
>
>>
>> -boris
>>
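
To make the "two views of the same address" idea above concrete, here is a
tiny standalone C model (not SeaBIOS or QEMU code; the window address and fill
values are made up): the same guest-physical address resolves to different
backing storage depending on whether the access is made from SMM, which is why
locking SMRAM hides it completely from the non-SMM view.

/* toy model of an isolated SMM address space */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define WINDOW_ADDR 0x30000u
#define WINDOW_SIZE 0x1000u

static uint8_t normal_ram[WINDOW_SIZE];   /* what non-SMM code sees */
static uint8_t smram[WINDOW_SIZE];        /* what SMM code sees */

static uint8_t read_byte(uint32_t addr, int in_smm)
{
    uint32_t off = addr - WINDOW_ADDR;
    return in_smm ? smram[off] : normal_ram[off];
}

int main(void)
{
    memset(smram, 0xEA, sizeof(smram));           /* pretend SMI entry stub */
    memset(normal_ram, 0x00, sizeof(normal_ram)); /* ordinary guest RAM */

    printf("non-SMM read @0x30000: %02x\n", read_byte(WINDOW_ADDR, 0));
    printf("SMM     read @0x30000: %02x\n", read_byte(WINDOW_ADDR, 1));
    return 0;
}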




Re: [Qemu-devel] [POC Seabios PATCH] seabios: use isolated SMM address space for relocation

2019-08-16 Thread Boris Ostrovsky
On 8/16/19 7:24 AM, Igor Mammedov wrote:
> For the purpose of the demo, SMRAM (at 0x30000) is aliased at 0xa0000 in
> the system address space for easy initialization of the SMI entry point.
> Here is the resulting debug output showing that RAM at 0x30000 is not
> affected by SMM and only RAM in the SMM address space is modified:
>
> init smm
> smm_relocate: before relocation
> smm_relocate: RAM codeentry 0
> smm_relocate: RAM  cpu.i64.smm_base  0
> smm_relocate: SMRAM  codeentry f000c831eac88c
> smm_relocate: SMRAM  cpu.i64.smm_base  0
> handle_smi cmd=0 smbase=0x00030000
> smm_relocate: after relocation
> smm_relocate: RAM codeentry 0
> smm_relocate: RAM  cpu.i64.smm_base  0
> smm_relocate: SMRAM  codeentry f000c831eac88c
> smm_relocate: SMRAM  cpu.i64.smm_base  a0000


I most likely don't understand how this is supposed to work but aren't
we here successfully reading SMRAM from non-SMM context, something we
are not supposed to be able to do?


-boris




Re: [Qemu-devel] [RFC v2 0/4] QEMU changes to do PVH boot

2019-01-09 Thread Boris Ostrovsky
On 1/9/19 6:53 AM, Stefano Garzarella wrote:
> Hi Liam,
>
> On Tue, Jan 8, 2019 at 3:47 PM Liam Merwick  wrote:
>> QEMU sets the hvm_modlist_entry in load_linux() after the call to
>> load_elfboot() and then qboot loads it in boot_pvh_from_fw_cfg()
>>
>> But the current PVH patches don't handle initrd (they have
>> start_info.nr_modules == 1).
> Looking in the linux kernel (arch/x86/platform/pvh/enlighten.c) I saw:
> /* The first module is always ramdisk. */
> if (pvh_start_info.nr_modules) {
> struct hvm_modlist_entry *modaddr =
> __va(pvh_start_info.modlist_paddr);
> pvh_bootparams.hdr.ramdisk_image = modaddr->paddr;
> pvh_bootparams.hdr.ramdisk_size = modaddr->size;
> }
>
> So, setting start_info.nr_modules = 1 means that the first
> hvm_modlist_entry should have the ramdisk paddr and size. Is that
> correct?
>
>
>> During (or after) the call to load_elfboot() it looks like we'd need to
>> do something like what load_multiboot() does below (along with the
>> associated initialisation)
>>
>> 400 fw_cfg_add_i32(fw_cfg, FW_CFG_INITRD_ADDR, ADDR_MBI);
>> 401 fw_cfg_add_i32(fw_cfg, FW_CFG_INITRD_SIZE, sizeof(bootinfo));
>> 402 fw_cfg_add_bytes(fw_cfg, FW_CFG_INITRD_DATA, mb_bootinfo_data,
>> 403  sizeof(bootinfo));
>>
> In this case I think they used FW_CFG_INITRD_* to pass bootinfo
> variables to the guest; maybe we could add something like what
> linux_load() does:
>
> /* load initrd */
> if (initrd_filename) {
> ...
> initrd_addr = (initrd_max-initrd_size) & ~4095;
>
> fw_cfg_add_i32(fw_cfg, FW_CFG_INITRD_ADDR, initrd_addr);
> fw_cfg_add_i32(fw_cfg, FW_CFG_INITRD_SIZE, initrd_size);
> fw_cfg_add_bytes(fw_cfg, FW_CFG_INITRD_DATA, initrd_data, 
> initrd_size);
> ...
> }
>
> Then we can load the initrd in qboot or in the optionrom that I'm writing.
>
> What do you think?


Why not specify this in pvh_start_info? This will be much faster for
everyone, no need to go through fw_cfg.

-boris
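
A rough sketch of what "specify this in pvh_start_info" could look like; the
struct layouts below are abridged to the fields used in the kernel snippet
quoted earlier (the full definitions live in Xen's public hvm/start_info.h
header), and the helper name and addresses are purely illustrative:

#include <stdint.h>
#include <stdio.h>

struct hvm_modlist_entry {       /* abridged */
    uint64_t paddr;              /* guest-physical address of the module */
    uint64_t size;               /* module size in bytes */
};

struct hvm_start_info {          /* abridged */
    uint32_t nr_modules;         /* 1 => the first module is the ramdisk */
    uint64_t modlist_paddr;      /* guest-physical address of the module list */
};

/* Fill in the module list so the guest finds its initrd without fw_cfg.
 * All addresses here are made up for the example. */
static void describe_initrd(struct hvm_start_info *si,
                            struct hvm_modlist_entry *mod, uint64_t mod_gpa,
                            uint64_t initrd_gpa, uint64_t initrd_size)
{
    mod->paddr        = initrd_gpa;
    mod->size         = initrd_size;
    si->modlist_paddr = mod_gpa;
    si->nr_modules    = 1;
}

int main(void)
{
    struct hvm_start_info si = { 0 };
    struct hvm_modlist_entry mod = { 0 };

    describe_initrd(&si, &mod, 0x7000, 0x1000000, 8 << 20);
    printf("nr_modules=%u, ramdisk at 0x%llx, %llu bytes\n",
           si.nr_modules, (unsigned long long)mod.paddr,
           (unsigned long long)mod.size);
    return 0;
}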




Re: [Qemu-devel] QEMU/NEMU boot time with several x86 firmwares

2018-12-05 Thread Boris Ostrovsky
On 12/5/18 8:20 AM, Stefan Hajnoczi wrote:
> On Tue, Dec 04, 2018 at 02:44:33PM -0800, Maran Wilson wrote:
>>
>> Since then, we have put together an alternative solution that would allow
>> Qemu to boot an uncompressed Linux binary via the x86/HVM direct boot ABI
>> (https://xenbits.xen.org/docs/unstable/misc/pvh.html). The solution involves
>> first making changes to both the ABI and Linux, and then updating
>> Qemu to take advantage of the updated ABI, which is already supported by
>> both Linux and FreeBSD for booting VMs. As such, Qemu can remain OS-agnostic,
>> and just be programmed to the published ABI.
>>
>> The canonical definition for the HVM direct boot ABI is in the Xen tree and
>> we needed to make some minor changes to the ABI definition to allow KVM
>> guests to also use the same structure and entry point. Those changes were
>> accepted to the Xen tree already:
>> https://lists.xenproject.org/archives/html/xen-devel/2018-04/msg00057.html
>>
>> The corresponding Linux changes that would allow KVM guests to be booted via
>> this PVH entry point have already been posted and reviewed:
>> https://lkml.org/lkml/2018/4/16/1002
>>
>> The final part is the set of Qemu changes to take advantage of the above and
>> boot a KVM guest via an uncompressed kernel binary using the entry point
>> defined by the ABI. Liam Merwick will be posting some RFC patches very soon
>> to allow this.
> Cool, thanks for doing this work!
>
> How do the boot times compare to qemu-lite and Firecracker's
> (https://github.com/firecracker-microvm/firecracker/) direct vmlinux ELF
> boot?
>
> I'm asking because there are several custom approaches to fast kernel
> boot and we should make sure that whatever Linux and QEMU end up
> natively supporting is likely to work for all projects (NEMU, qemu-lite,
> Firecracker) and operating systems (Linux distros, other OSes).


I should also add that effort is under way to add support for booting
PVH guests from grub. This currently covers Xen guests only (obviously), but
since it's based on the ABI that Maran mentioned above, I don't see why
non-Xen guests can't be supported as well.

v6 is here:
https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03174.html


-boris






[Qemu-devel] [PATCH 0/2] Add support for new Opteron CPU model

2012-11-02 Thread Boris Ostrovsky
From: Andre Przywara o...@andrep.de

Two patches to provide support for new Opteron processors. The first
patch was submitted earlier 
(http://lists.nongnu.org/archive/html/qemu-devel/2012-10/msg03058.html)
and may have already been applied.

Andre Przywara (2):
  x86/cpu: name new CPUID bits
  x86/cpu: add new Opteron CPU model

 target-i386/cpu.c |   48 
 target-i386/cpu.h |   21 +
 2 files changed, 61 insertions(+), 8 deletions(-)

-- 
1.7.10.4




[Qemu-devel] [PATCH 1/2] x86/cpu: name new CPUID bits

2012-11-02 Thread Boris Ostrovsky
From: Andre Przywara o...@andrep.de

Update QEMU's knowledge of CPUID bit names. This allows those new
features to be enabled/disabled on QEMU's command line when using KVM
and prepares for future feature enablement in QEMU.

This adds F16C, RDRAND, LWP, TBM, TopoExt, PerfCtr_Core, PerfCtr_NB,
FSGSBASE, BMI1, AVX2, BMI2, ERMS, InvPCID, RTM, RDSeed and ADX.

Sources were the AMD BKDG for Family 15h/Model 10h and the Linux kernel
for the leaf 7 bits.

Signed-off-by: Andre Przywara o...@andrep.de
Signed-off-by: Boris Ostrovsky boris.ostrov...@amd.com
---
 target-i386/cpu.c |   16 
 target-i386/cpu.h |   21 +
 2 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index d4f2e65..ec9b71f 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -59,7 +59,7 @@ static const char *ext_feature_name[] = {
     NULL, "pcid", "dca", "sse4.1|sse4_1",
     "sse4.2|sse4_2", "x2apic", "movbe", "popcnt",
     "tsc-deadline", "aes", "xsave", "osxsave",
-    "avx", NULL, NULL, "hypervisor",
+    "avx", "f16c", "rdrand", "hypervisor",
 };
 /* Feature names that are already defined on feature_name[] but are set on
  * CPUID[8000_0001].EDX on AMD CPUs don't have their names on
@@ -80,10 +80,10 @@ static const char *ext3_feature_name[] = {
     "lahf_lm" /* AMD LahfSahf */, "cmp_legacy", "svm", "extapic" /* AMD ExtApicSpace */,
     "cr8legacy" /* AMD AltMovCr8 */, "abm", "sse4a", "misalignsse",
     "3dnowprefetch", "osvw", "ibs", "xop",
-    "skinit", "wdt", NULL, NULL,
-    "fma4", NULL, "cvt16", "nodeid_msr",
-    NULL, NULL, NULL, NULL,
-    NULL, NULL, NULL, NULL,
+    "skinit", "wdt", NULL, "lwp",
+    "fma4", "tce", NULL, "nodeid_msr",
+    NULL, "tbm", "topoext", "perfctr_core",
+    "perfctr_nb", NULL, NULL, NULL,
     NULL, NULL, NULL, NULL,
 };
 
@@ -106,9 +106,9 @@ static const char *svm_feature_name[] = {
 };
 
 static const char *cpuid_7_0_ebx_feature_name[] = {
-    NULL, NULL, NULL, NULL, NULL, NULL, NULL, "smep",
-    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
-    NULL, NULL, NULL, NULL, "smap", NULL, NULL, NULL,
+    "fsgsbase", NULL, NULL, "bmi1", "hle", "avx2", NULL, "smep",
+    "bmi2", "erms", "invpcid", "rtm", NULL, NULL, NULL, NULL,
+    NULL, NULL, "rdseed", "adx", "smap", NULL, NULL, NULL,
     NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
 };
 
diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index de33303..a597e03 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -403,6 +403,7 @@
 #define CPUID_EXT_TM2      (1 << 8)
 #define CPUID_EXT_SSSE3    (1 << 9)
 #define CPUID_EXT_CID      (1 << 10)
+#define CPUID_EXT_FMA      (1 << 12)
 #define CPUID_EXT_CX16     (1 << 13)
 #define CPUID_EXT_XTPR     (1 << 14)
 #define CPUID_EXT_PDCM     (1 << 15)
@@ -417,6 +418,8 @@
 #define CPUID_EXT_XSAVE    (1 << 26)
 #define CPUID_EXT_OSXSAVE  (1 << 27)
 #define CPUID_EXT_AVX      (1 << 28)
+#define CPUID_EXT_F16C     (1 << 29)
+#define CPUID_EXT_RDRAND   (1 << 30)
 #define CPUID_EXT_HYPERVISOR  (1 << 31)
 
 #define CPUID_EXT2_FPU     (1 << 0)
@@ -472,7 +475,15 @@
 #define CPUID_EXT3_IBS     (1 << 10)
 #define CPUID_EXT3_XOP     (1 << 11)
 #define CPUID_EXT3_SKINIT  (1 << 12)
+#define CPUID_EXT3_WDT     (1 << 13)
+#define CPUID_EXT3_LWP     (1 << 15)
 #define CPUID_EXT3_FMA4    (1 << 16)
+#define CPUID_EXT3_TCE     (1 << 17)
+#define CPUID_EXT3_NODEID  (1 << 19)
+#define CPUID_EXT3_TBM     (1 << 21)
+#define CPUID_EXT3_TOPOEXT (1 << 22)
+#define CPUID_EXT3_PERFCORE (1 << 23)
+#define CPUID_EXT3_PERFNB  (1 << 24)
 
 #define CPUID_SVM_NPT          (1 << 0)
 #define CPUID_SVM_LBRV         (1 << 1)
@@ -485,7 +496,17 @@
 #define CPUID_SVM_PAUSEFILTER  (1 << 10)
 #define CPUID_SVM_PFTHRESHOLD  (1 << 12)
 
+#define CPUID_7_0_EBX_FSGSBASE (1 << 0)
+#define CPUID_7_0_EBX_BMI1     (1 << 3)
+#define CPUID_7_0_EBX_HLE      (1 << 4)
+#define CPUID_7_0_EBX_AVX2     (1 << 5)
 #define CPUID_7_0_EBX_SMEP     (1 << 7)
+#define CPUID_7_0_EBX_BMI2     (1 << 8)
+#define CPUID_7_0_EBX_ERMS     (1 << 9)
+#define CPUID_7_0_EBX_INVPCID  (1 << 10)
+#define CPUID_7_0_EBX_RTM      (1 << 11)
+#define CPUID_7_0_EBX_RDSEED   (1 << 18)
+#define CPUID_7_0_EBX_ADX      (1 << 19)
 #define CPUID_7_0_EBX_SMAP     (1 << 20)
 
 #define CPUID_VENDOR_INTEL_1 0x756e6547 /* Genu */
-- 
1.7.10.4
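
As an aside on how the name tables and bit masks in this patch line up: index
i in a feature-name array corresponds to mask (1 << i) in the matching CPUID
register. A small standalone check (only a few of the leaf-7 EBX names from
this patch are shown; this is not QEMU code):

#include <stdio.h>

static const char *cpuid_7_0_ebx_feature_name[32] = {
    [0]  = "fsgsbase", [3]  = "bmi1",   [4]  = "hle",  [5]  = "avx2",
    [7]  = "smep",     [8]  = "bmi2",   [9]  = "erms", [10] = "invpcid",
    [11] = "rtm",      [18] = "rdseed", [19] = "adx",  [20] = "smap",
};

int main(void)
{
    unsigned ebx = (1u << 3) | (1u << 5) | (1u << 8);   /* bmi1, avx2, bmi2 */

    for (int i = 0; i < 32; i++) {
        if ((ebx & (1u << i)) && cpuid_7_0_ebx_feature_name[i]) {
            printf("bit %2d: %s\n", i, cpuid_7_0_ebx_feature_name[i]);
        }
    }
    return 0;
}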




[Qemu-devel] [PATCH 2/2] x86/cpu: add new Opteron CPU model

2012-11-02 Thread Boris Ostrovsky
From: Andre Przywara o...@andrep.de

Add a new base CPU model called Opteron_G5 to model the latest
Opteron CPUs. This bumps the CPUID model number and adds TBM, F16C
and FMA over the latest G4 model.

Signed-off-by: Andre Przywara o...@andrep.de
Signed-off-by: Boris Ostrovsky boris.ostrov...@amd.com
---
 target-i386/cpu.c |   32 
 1 file changed, 32 insertions(+)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index ec9b71f..332f9e8 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -745,6 +745,38 @@ static x86_def_t builtin_x86_defs[] = {
         .xlevel = 0x8000001A,
         .model_id = "AMD Opteron 62xx class CPU",
     },
+    {
+        .name = "Opteron_G5",
+        .level = 0xd,
+        .vendor1 = CPUID_VENDOR_AMD_1,
+        .vendor2 = CPUID_VENDOR_AMD_2,
+        .vendor3 = CPUID_VENDOR_AMD_3,
+        .family = 21,
+        .model = 2,
+        .stepping = 0,
+        .features = CPUID_SSE2 | CPUID_SSE | CPUID_FXSR | CPUID_MMX |
+            CPUID_CLFLUSH | CPUID_PSE36 | CPUID_PAT | CPUID_CMOV | CPUID_MCA |
+            CPUID_PGE | CPUID_MTRR | CPUID_SEP | CPUID_APIC | CPUID_CX8 |
+            CPUID_MCE | CPUID_PAE | CPUID_MSR | CPUID_TSC | CPUID_PSE |
+            CPUID_DE | CPUID_FP87,
+        .ext_features = CPUID_EXT_F16C | CPUID_EXT_AVX | CPUID_EXT_XSAVE |
+            CPUID_EXT_AES | CPUID_EXT_POPCNT | CPUID_EXT_SSE42 |
+            CPUID_EXT_SSE41 | CPUID_EXT_CX16 | CPUID_EXT_FMA |
+            CPUID_EXT_SSSE3 | CPUID_EXT_PCLMULQDQ | CPUID_EXT_SSE3,
+        .ext2_features = CPUID_EXT2_LM | CPUID_EXT2_RDTSCP |
+            CPUID_EXT2_PDPE1GB | CPUID_EXT2_FXSR | CPUID_EXT2_MMX |
+            CPUID_EXT2_NX | CPUID_EXT2_PSE36 | CPUID_EXT2_PAT |
+            CPUID_EXT2_CMOV | CPUID_EXT2_MCA | CPUID_EXT2_PGE |
+            CPUID_EXT2_MTRR | CPUID_EXT2_SYSCALL | CPUID_EXT2_APIC |
+            CPUID_EXT2_CX8 | CPUID_EXT2_MCE | CPUID_EXT2_PAE | CPUID_EXT2_MSR |
+            CPUID_EXT2_TSC | CPUID_EXT2_PSE | CPUID_EXT2_DE | CPUID_EXT2_FPU,
+        .ext3_features = CPUID_EXT3_TBM | CPUID_EXT3_FMA4 | CPUID_EXT3_XOP |
+            CPUID_EXT3_3DNOWPREFETCH | CPUID_EXT3_MISALIGNSSE |
+            CPUID_EXT3_SSE4A | CPUID_EXT3_ABM | CPUID_EXT3_SVM |
+            CPUID_EXT3_LAHF_LM,
+        .xlevel = 0x8000001A,
+        .model_id = "AMD Opteron 63xx class CPU",
+    },
 };
 
 static int cpu_x86_fill_model_id(char *str)
-- 
1.7.10.4