Re: [Qemu-devel] [RFC PATCH v1 16/22] i386: pc: load OS images at fixed location in SEV-enabled guest

2016-09-21 Thread Paolo Bonzini


On 21/09/2016 18:08, Brijesh Singh wrote:
> Hi Paolo,
> 
> On 09/21/2016 10:58 AM, Paolo Bonzini wrote:
>>
>>
>> On 21/09/2016 17:55, Brijesh Singh wrote:
>>> I'm working on v2 and getting ready for another review, but I'm not sure
>>> how to address this feedback. For now, I can drop this patch from the
>>> series and get the other patches reviewed. But I would like some direction
>>> on how to go about adding -kernel support for SEV guests.
>>
>> Don't.  The solution is to use UEFI, which runs in 64-bit mode and can
>> do DMA to shared pages.
>>
> 
> Thanks.
> 
> I will explore the UEFI BIOS path. Just to confirm, you mean the OVMF
> project [1], right?

Yes.

Paolo

> [1] http://www.linux-kvm.org/page/OVMF
> https://github.com/tianocore/tianocore.github.io/wiki/How-to-run-OVMF
> 
> 
>> Paolo
>>



Re: [Qemu-devel] [RFC PATCH v1 16/22] i386: pc: load OS images at fixed location in SEV-enabled guest

2016-09-21 Thread Brijesh Singh

Hi Paolo,

On 09/21/2016 10:58 AM, Paolo Bonzini wrote:
>
> On 21/09/2016 17:55, Brijesh Singh wrote:
>> I'm working on v2 and getting ready for another review, but I'm not sure
>> how to address this feedback. For now, I can drop this patch from the
>> series and get the other patches reviewed. But I would like some direction
>> on how to go about adding -kernel support for SEV guests.
>
> Don't.  The solution is to use UEFI, which runs in 64-bit mode and can
> do DMA to shared pages.

Thanks.

I will explore the UEFI BIOS path. Just to confirm, you mean the OVMF
project [1], right?


[1] http://www.linux-kvm.org/page/OVMF
https://github.com/tianocore/tianocore.github.io/wiki/How-to-run-OVMF



> Paolo





Re: [Qemu-devel] [RFC PATCH v1 16/22] i386: pc: load OS images at fixed location in SEV-enabled guest

2016-09-21 Thread Paolo Bonzini


On 21/09/2016 17:55, Brijesh Singh wrote:
> I'm working on v2 and getting ready for another review, but I'm not sure
> how to address this feedback. For now, I can drop this patch from the
> series and get the other patches reviewed. But I would like some direction
> on how to go about adding -kernel support for SEV guests.

Don't.  The solution is to use UEFI, which runs in 64-bit mode and can
do DMA to shared pages.

Paolo



Re: [Qemu-devel] [RFC PATCH v1 16/22] i386: pc: load OS images at fixed location in SEV-enabled guest

2016-09-21 Thread Brijesh Singh

Hi Michael and Paolo,

On 09/13/2016 01:37 PM, Michael S. Tsirkin wrote:
> On Tue, Sep 13, 2016 at 10:49:29AM -0400, Brijesh Singh wrote:
>> Typically the linux kernel, initrd and cmdline are built and loaded
>> into guest memory through the linux optionrom. The linux optionrom is
>> probed and executed by SeaBIOS. This method will not work for an
>> SEV-enabled guest.
>>
>> In an SEV-enabled guest all the code and data must be copied using the
>> SEV launch command prior to starting the guest (i.e. before the first
>> vmrun). The data copied using the SEV launch command will be encrypted
>> with the guest owner's key. This patch loads the kernel, initrd and
>> cmdline blobs at a fixed location in guest memory and builds an
>> etc/sev_cfg config file. The cfg file provides the structure below:
>>
>> struct sev_cfg {
>>     u32 kernel_addr, initrd_addr, cmdline_addr;
>>     u32 kernel_size, initrd_size, cmdline_size;
>> }
>>
>> The config file can be used by SeaBIOS to locate the OS images in guest
>> RAM and build the linux boot entry code.
>>
>> Signed-off-by: Brijesh Singh 
>
> I don't think we want to give users this kind of
> control over how we manage memory internally
> for what is essentially a debugging feature at this point.
>
> Isn't there a way to first launch the guest, and then have it
> encrypt itself once it's running?
>
> If not, I guess it's not too bad if -kernel does not
> work with the sev debug feature - just load the kernel from disk.



I'm working on v2 and getting ready for another review, but I'm not sure
how to address this feedback. For now, I can drop this patch from the
series and get the other patches reviewed. But I would like some direction
on how to go about adding -kernel support for SEV guests.


Looking through the -kernel option code, I see that qemu uses an optionrom
(linuxboot.bin or linuxboot_dma.bin) to load the kernel specified through
the -kernel option.


- The linuxboot optionrom uses the "rep ins" instruction to fetch the kernel
and initrd images into guest memory. SEV does not support string I/O, so
this method will not work for SEV. But we can consider unrolling the
string I/O operation into a loop, similar to this KVM patch [1].


- linuxboot_dma - as the name suggests, it uses DMA. In an SEV guest, DMA
must be done on shared pages, and since SeaBIOS runs in non-PAE mode it
will not be able to create shared pages for this to work.
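
For context, "shared" here means a page whose page table entry has the SEV
C-bit clear. The bit position is reported by CPUID Fn8000_001F and lies
above bit 31 on SEV-capable parts, so a 32-bit non-PAE page table entry
cannot express it at all; that is why firmware that wants unencrypted DMA
buffers needs PAE or long-mode paging, as OVMF has. A minimal sketch of the
guest-side plumbing, with helper names invented here:

/*
 * Minimal sketch, not from the series: locate the SEV C-bit and clear it
 * in a (PAE or long-mode) page table entry so the page becomes shared,
 * i.e. left unencrypted and therefore usable as a DMA buffer.
 */
#include <stdint.h>

static uint64_t sev_c_bit_mask(void)
{
    uint32_t eax, ebx, ecx, edx;

    /* CPUID Fn8000_001F[EBX] bits 5:0 report the C-bit position. */
    __asm__ volatile("cpuid"
                     : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                     : "a"(0x8000001f), "c"(0));
    return 1ULL << (ebx & 0x3f);
}

/* Clear the C-bit in a 64-bit PTE; invented helper, for illustration. */
static uint64_t pte_make_shared(uint64_t pte)
{
    return pte & ~sev_c_bit_mask();
}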


Please let me know your thoughts on the following approaches:

- Update linuxboot.S to unroll the string I/O instruction into a loop for
SEV guests. This may work, but the boot will be very slow: fetching the
kernel and initrd via 'inb' will increase the boot time significantly
(a rough sketch follows the footnote [1] at the end of this mail).


or

- Create an SEV-specific linuxboot ROM. The ROM would use a special port
to communicate the kernel and initrd load addresses to qemu. When qemu
sees the request on that special port, it maps the pre-encrypted kernel
and initrd at the requested location. This may be similar to
linuxboot_dma, with the exception that instead of copying the data into
guest memory we somehow map the pre-encrypted image. I have not looked
into this in much detail and am not sure whether we can do something
like this. If possible, this seems like a slightly better approach.
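
Purely to illustrate the shape of this second idea, a hypothetical
guest-side fragment; neither the port number nor the protocol exists in
qemu, both are invented here:

/*
 * Hypothetical only: the SEV-specific ROM tells qemu where it wants the
 * pre-encrypted kernel mapped by writing the guest physical address to an
 * invented port; qemu would trap the write and map the blob there instead
 * of DMA-copying it into guest memory.
 */
#include <stdint.h>

#define SEV_LOADER_PORT 0x518          /* invented for illustration */

static inline void outl(uint16_t port, uint32_t val)
{
    __asm__ volatile("outl %0, %1" : : "a"(val), "Nd"(port));
}

static void sev_request_kernel_map(uint32_t kernel_gpa)
{
    outl(SEV_LOADER_PORT, kernel_gpa);
}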



[1] http://marc.info/?l=kvm&m=147191048324519&w=2
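
To make the first approach concrete, here is a rough C-level sketch of what
the unrolled transfer amounts to (the real linuxboot code is assembly in
pc-bios/optionrom/linuxboot.S). It assumes the standard fw_cfg I/O ports
0x510 (selector) and 0x511 (data); the helper names are invented for this
sketch:

/*
 * Rough sketch only: fetch the currently selected fw_cfg item one byte at
 * a time instead of with "rep insb".  Functionally what an unrolled
 * linuxboot.S loop would do; correct, but one VM exit per byte, which is
 * the boot-time concern mentioned above.
 */
#include <stddef.h>
#include <stdint.h>

#define FW_CFG_CTL_PORT   0x510   /* selector */
#define FW_CFG_DATA_PORT  0x511   /* data */

static inline void outw(uint16_t port, uint16_t val)
{
    __asm__ volatile("outw %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t val;

    __asm__ volatile("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

static void fw_cfg_read_unrolled(uint16_t selector, void *dest, size_t len)
{
    uint8_t *p = dest;

    outw(FW_CFG_CTL_PORT, selector);
    while (len--) {
        *p++ = inb(FW_CFG_DATA_PORT);
    }
}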




Re: [Qemu-devel] [RFC PATCH v1 16/22] i386: pc: load OS images at fixed location in SEV-enabled guest

2016-09-13 Thread Michael S. Tsirkin
On Tue, Sep 13, 2016 at 10:49:29AM -0400, Brijesh Singh wrote:
> Typically the linux kernel, initrd and cmdline are built and loaded
> into guest memory through the linux optionrom. The linux optionrom is
> probed and executed by SeaBIOS. This method will not work for an
> SEV-enabled guest.
> 
> In an SEV-enabled guest all the code and data must be copied using the
> SEV launch command prior to starting the guest (i.e. before the first
> vmrun). The data copied using the SEV launch command will be encrypted
> with the guest owner's key. This patch loads the kernel, initrd and
> cmdline blobs at a fixed location in guest memory and builds an
> etc/sev_cfg config file. The cfg file provides the structure below:
> 
> struct sev_cfg {
>   u32 kernel_addr, initrd_addr, cmdline_addr;
>   u32 kernel_size, initrd_size, cmdline_size;
> }
> 
> The config file can be used by SeaBIOS to locate the OS images in guest
> RAM and build the linux boot entry code.
> 
> Signed-off-by: Brijesh Singh 

I don't think we want to give users this kind of
control over how we manage memory internally
for what is essentially a debugging feature at this point.

Isn't there a way to first launch the guest, and then have it
encrypt itself once it's running?

If not, I guess it's not too bad if -kernel does not
work with the sev debug feature - just load the kernel from disk.
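
For reference when reading the patch quoted below, a minimal sketch (not
part of the series) of how the SeaBIOS consumer of etc/sev_cfg might look.
It assumes SeaBIOS's romfile_loadfile() helper and type names; the struct
simply mirrors the commit message:

/*
 * Minimal sketch, assuming SeaBIOS interfaces: read the etc/sev_cfg
 * fw_cfg file published by the patch below and report the pre-loaded,
 * already encrypted images.  Building the actual linux boot entry is
 * left out.
 */
#include "types.h"    /* u32, PACKED */
#include "romfile.h"  /* romfile_loadfile() */
#include "output.h"   /* dprintf() */

struct sev_cfg {
    u32 kernel_addr, initrd_addr, cmdline_addr;
    u32 kernel_size, initrd_size, cmdline_size;
} PACKED;

static int sev_detect_linux(void)
{
    int size;
    struct sev_cfg *cfg = romfile_loadfile("etc/sev_cfg", &size);

    if (!cfg || size != sizeof(*cfg))
        return 0;   /* no SEV -kernel images were pre-loaded */

    /* The blobs are already in guest RAM, so the firmware only needs to
     * build the usual linux boot entry and jump to cfg->kernel_addr. */
    dprintf(1, "SEV -kernel: kernel %x+%x initrd %x+%x cmdline %x+%x\n",
            cfg->kernel_addr, cfg->kernel_size,
            cfg->initrd_addr, cfg->initrd_size,
            cfg->cmdline_addr, cfg->cmdline_size);
    return 1;
}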


> ---
>  hw/i386/pc.c |   94 +-
>  1 file changed, 93 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 1471df4..f2c7472 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -104,6 +104,15 @@ static struct e820_entry *e820_table;
>  static unsigned e820_entries;
>  struct hpet_fw_config hpet_cfg = {.count = UINT8_MAX};
>  
> +struct sev_cfg_data {
> +uint32_t kernel_addr;
> +uint32_t initrd_addr;
> +uint32_t cmdline_addr;
> +uint32_t kernel_size;
> +uint32_t initrd_size;
> +uint32_t cmdline_size;
> +} QEMU_PACKED __attribute((__aligned__(4)));
> +
>  void gsi_handler(void *opaque, int n, int level)
>  {
>  GSIState *s = opaque;
> @@ -824,6 +833,86 @@ struct setup_data {
>  uint8_t data[0];
>  } __attribute__((packed));
>  
> +#define round_up(x, align) ((x + align) & ~(align - 1))
> +
> +static void sev_load_file_fixed(const char *filename, int start,
> +int *end, int *sz)
> +{
> +FILE *f;
> +int sz_aligned, ret;
> +char *data;
> +
> +f = fopen(filename, "rb");
> +if (!f) {
> +fprintf(stderr, "qemu: could not load '%s': %s\n",
> +filename, strerror(errno));
> +exit(1);
> +}
> +
> +/* SEV update commands needs 16-byte aligned length */
> +*sz = get_file_size(f);
> +sz_aligned = round_up(*sz, 16);
> +data = g_malloc(sz_aligned);
> +ret = fread(data, 1, *sz, f);
> +if (ret != *sz) {
> +fprintf(stderr, "qemu: failed to read %d bytes from %s\n",
> +*sz, filename);
> +exit(1);
> +}
> +rom_add_blob_fixed(filename, data, sz_aligned, start);
> +*end = start + sz_aligned;
> +fclose(f);
> +g_free(data);
> +}
> +
> +/* load kernel, initrd and cmdline blobs at fixed location into guest
> + * memory and generate etc/sev config file.
> + */
> +static void sev_load_linux(PCMachineState *pcms,
> +   FWCfgState *fw_cfg)
> +{
> +FILE *f;
> +int cmdline_size, kernel_size, initrd_size;
> +int initrd_addr, kernel_addr, cmdline_addr, end;
> +MachineState *machine = MACHINE(pcms);
> +const char *kernel_cmdline = machine->kernel_cmdline;
> +struct sev_cfg_data *sev_cfg;
> +char *cmdline_string;
> +
> +/* load kernel command line string */
> +cmdline_addr = 0x100;
> +cmdline_size = round_up(strlen(kernel_cmdline), 16);
> +cmdline_string = g_malloc(cmdline_size);
> +strncpy(cmdline_string, kernel_cmdline, strlen(kernel_cmdline));
> +rom_add_blob_fixed("cmdline", cmdline_string, cmdline_size, cmdline_addr);
> +g_free(cmdline_string);
> +
> +/* load linux kernel */
> +kernel_addr = cmdline_addr + cmdline_size;
> +sev_load_file_fixed(machine->kernel_filename, kernel_addr,
> +&initrd_addr, &kernel_size);
> +
> +/* load initrd */
> +f = fopen(machine->initrd_filename, "rb");
> +if (f) {
> +sev_load_file_fixed(machine->initrd_filename, initrd_addr,
> +&end, &initrd_size);
> +} else {
> +initrd_addr = 0;
> +initrd_size = 0;
> +}
> +
> +sev_cfg = g_malloc0(sizeof(*sev_cfg));
> +sev_cfg->kernel_addr = kernel_addr;
> +sev_cfg->initrd_addr = initrd_addr;
> +sev_cfg->cmdline_addr = cmdline_addr;
> +sev_cfg->kernel_size = kernel_size;
> +sev_cfg->initrd_size = initrd_size;
> +sev_cfg->cmdline_size = cmdline_size;
> +
> +fw_cfg_add_file(fw_cfg, "etc/sev_cfg", sev_cfg, sizeof(*sev_cfg));
> +}
> +
>  static void load_linux(PCMachineState *pcms,
>