* Henning Schild <henning.sch...@siemens.com> [2017-08-08 11:33:12 +0200]:
> On Mon, 7 Aug 2017 16:24:47 -0700,
> Gustavo Lima Chaves <gustavo.lima.cha...@intel.com> wrote:
>
> > With the new JAILHOUSE_CELL_CLEAR_MEM flag (in struct
> > jailhouse_cell_desc), Jailhouse will clear all of the cell's
> > *loadable* memory on its destruction, before handing that memory
> > region back to the root cell. This prevents the latter from
> > accessing data that the former wanted to keep private.
> >
> > One could argue that cells without a passive communication region
> > (no JAILHOUSE_CELL_PASSIVE_COMMREG flag) could use a first attempt
> > to kill them to do any desired cleanup. That does not cover the
> > case in which the cell developer still wants a passive
> > communication region (they don't want to bother adding comm-region
> > read/write code to their logic) but no data leaks whatsoever. The
> > flag also covers the case in which a cell goes to parked state and
> > never gets the chance to do such cleanup: when it is eventually
> > destroyed, the root cell will still be clueless about what happened
> > in its memory.
>
> This is adding unneeded complexity to the hypervisor. As you
> described, any cell can delay its shutdown and clean up after itself.
> If a cell crashes while "holding the lock", you will have to reboot;
> no memory will leak to Linux.

Wait a minute, I was talking about the case of *not* using locks for
being killed at all (JAILHOUSE_CELL_PASSIVE_COMMREG). I even find this
design of using the lock and leaving the hypervisor locked up nuts,
but that's another story (we should handle it with some timer, later).

> If something can be done outside the hypervisor, it should. Not sure
> how Jan feels about it, but from me it is a No.
>
> How about putting that feature into the inmate lib?
>
> Henning
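To sketch what that inmate-lib variant could look like (x86-flavored,
and assuming the lib's usual comm-region helpers; PRIVATE_DATA_BASE
and PRIVATE_DATA_SIZE are made-up placeholders for whatever region the
cell wants scrubbed, since it obviously cannot wipe the code it is
executing):

#include <inmate.h>

#define PRIVATE_DATA_BASE       ((void *)0x00100000)    /* placeholder */
#define PRIVATE_DATA_SIZE       0x00010000              /* placeholder */

void inmate_main(void)
{
        /* ... the cell's actual workload ... */

        /* Park until the root cell asks us to shut down. */
        while (comm_region->msg_to_cell != JAILHOUSE_MSG_SHUTDOWN_REQUEST)
                cpu_relax();

        /* Scrub private data before the region goes back to the root cell. */
        memset(PRIVATE_DATA_BASE, 0, PRIVATE_DATA_SIZE);

        /* Acknowledge only after scrubbing; teardown may follow anytime. */
        comm_region->cell_state = JAILHOUSE_CELL_SHUT_DOWN;

        for (;;)
                asm volatile("cli; hlt");
}

That keeps the policy outside the hypervisor, at the cost of every
cell having to remember to do it.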
> > Signed-off-by: Gustavo Lima Chaves <gustavo.lima.cha...@intel.com>
> > ---
> >  hypervisor/arch/arm/mach-vexpress.c        |  2 +-
> >  hypervisor/arch/x86/ioapic.c               |  2 +-
> >  hypervisor/control.c                       | 28 ++++++++++++++++++-
> >  hypervisor/include/jailhouse/cell-config.h |  1 +
> >  hypervisor/include/jailhouse/paging.h      |  3 ++-
> >  hypervisor/paging.c                        | 43 +++++++++++++++++++++++++-----
> >  hypervisor/pci.c                           |  7 +++--
> >  7 files changed, 71 insertions(+), 15 deletions(-)
> >
> > diff --git a/hypervisor/arch/arm/mach-vexpress.c b/hypervisor/arch/arm/mach-vexpress.c
> > index 6a2af38..0aaae71 100644
> > --- a/hypervisor/arch/arm/mach-vexpress.c
> > +++ b/hypervisor/arch/arm/mach-vexpress.c
> > @@ -65,7 +65,7 @@ int mach_init(void)
> >  	if (!sysregs_base)
> >  		return -ENOMEM;
> >  	root_entry = mmio_read32(sysregs_base + VEXPRESS_FLAGSSET);
> > -	paging_unmap_device(SYSREGS_BASE, sysregs_base, PAGE_SIZE);
> > +	paging_unmap(SYSREGS_BASE, sysregs_base, PAGE_SIZE);
> >
> >  	mach_cell_init(&root_cell);
> >
> > diff --git a/hypervisor/arch/x86/ioapic.c b/hypervisor/arch/x86/ioapic.c
> > index bb7dae0..ddc8c48 100644
> > --- a/hypervisor/arch/x86/ioapic.c
> > +++ b/hypervisor/arch/x86/ioapic.c
> > @@ -262,7 +262,7 @@ done:
> >  	return 0;
> >
> >  error_paging_destroy:
> > -	paging_unmap_device(irqchip->address, phys_ioapic->reg_base, PAGE_SIZE);
> > +	paging_unmap(irqchip->address, phys_ioapic->reg_base, PAGE_SIZE);
> >  	return err;
> >  }
> >
> > diff --git a/hypervisor/control.c b/hypervisor/control.c
> > index b1beed9..bbe6c0b 100644
> > --- a/hypervisor/control.c
> > +++ b/hypervisor/control.c
> > @@ -285,6 +285,20 @@ static int remap_to_root_cell(const struct jailhouse_memory *mem,
> >  	return err;
> >  }
> >
> > +static void cell_wipe_out_mem(const struct jailhouse_memory *mem)
> > +{
> > +	void *cell_mem = paging_map(mem->phys_start, mem->size);
> > +
> > +	if (!cell_mem) {
> > +		paging_dump_stats("Paging memory depleted!");
> > +		panic_printk("FATAL: out of memory on page pools\n");
> > +		panic_stop();
> > +	}
> > +
> > +	memset(cell_mem, 0, mem->size);
> > +	paging_unmap(mem->phys_start, cell_mem, mem->size);
> > +}
> > +
> >  static void cell_destroy_internal(struct per_cpu *cpu_data, struct cell *cell)
> >  {
> >  	const struct jailhouse_memory *mem;
> > @@ -309,8 +323,20 @@ static void cell_destroy_internal(struct per_cpu *cpu_data, struct cell *cell)
> >  		arch_unmap_memory_region(cell, mem);
> >
> >  		if (!(mem->flags & (JAILHOUSE_MEM_COMM_REGION |
> > -				    JAILHOUSE_MEM_ROOTSHARED)))
> > +				    JAILHOUSE_MEM_ROOTSHARED))) {
> > +			if (mem->flags & JAILHOUSE_MEM_LOADABLE &&
> > +			    cell->config->flags & JAILHOUSE_CELL_CLEAR_MEM)
> > +				/*
> > +				 * Clear its memory region when destroyed.
> > +				 * Note that if a cell is parked, the root
> > +				 * cell does *not* gain access to its memory
> > +				 * region right away, it only happens at this
> > +				 * point.
> > +				 */
> > +				cell_wipe_out_mem(mem);
> >  			remap_to_root_cell(mem, WARN_ON_ERROR);
> > +		}
> >  	}
> >
> >  	pci_cell_exit(cell);
> >
> > diff --git a/hypervisor/include/jailhouse/cell-config.h b/hypervisor/include/jailhouse/cell-config.h
> > index 3a003f0..a9bdfdd 100644
> > --- a/hypervisor/include/jailhouse/cell-config.h
> > +++ b/hypervisor/include/jailhouse/cell-config.h
> > @@ -46,6 +46,7 @@
> >
> >  #define JAILHOUSE_CELL_PASSIVE_COMMREG	0x00000001
> >  #define JAILHOUSE_CELL_DEBUG_CONSOLE	0x00000002
> > +#define JAILHOUSE_CELL_CLEAR_MEM	0x00000004
> >
> >  #define JAILHOUSE_CELL_DESC_SIGNATURE	"JHCELL"
> >
> > diff --git a/hypervisor/include/jailhouse/paging.h b/hypervisor/include/jailhouse/paging.h
> > index 551f73a..f95f832 100644
> > --- a/hypervisor/include/jailhouse/paging.h
> > +++ b/hypervisor/include/jailhouse/paging.h
> > @@ -243,8 +243,9 @@ int paging_destroy(const struct paging_structures *pg_structs,
> >  		   unsigned long virt, unsigned long size,
> >  		   enum paging_coherent coherent);
> >
> > +void *paging_map(unsigned long phys, unsigned long size);
> >  void *paging_map_device(unsigned long phys, unsigned long size);
> > -void paging_unmap_device(unsigned long phys, void *virt, unsigned long size);
> > +void paging_unmap(unsigned long phys, void *virt, unsigned long size);
> >
> >  void *paging_get_guest_pages(const struct guest_paging_structures *pg_structs,
> >  			     unsigned long gaddr, unsigned int num,
> > diff --git a/hypervisor/paging.c b/hypervisor/paging.c
> > index a431550..04c18be 100644
> > --- a/hypervisor/paging.c
> > +++ b/hypervisor/paging.c
> > @@ -447,11 +447,40 @@ paging_gvirt2gphys(const struct guest_paging_structures *pg_structs,
> >  }
> >
> >  /**
> > + * Map physical memory into hypervisor address space
> > + * @param phys Physical address of the memory region.
> > + * @param size Size of the memory region.
> > + *
> > + * @return Virtual mapping address of the memory region or NULL, on errors.
> > + *
> > + * @see paging_unmap
> > + */
> > +void *paging_map(unsigned long phys, unsigned long size)
> > +{
> > +	void *virt;
> > +
> > +	virt = page_alloc(&remap_pool, PAGES(size));
> > +	if (!virt)
> > +		return NULL;
> > +
> > +	if (paging_create(&hv_paging_structs, phys, size, (unsigned long)virt,
> > +			  PAGE_DEFAULT_FLAGS, PAGING_NON_COHERENT) != 0) {
> > +		page_free(&remap_pool, virt, PAGES(size));
> > +		return NULL;
> > +	}
> > +
> > +	return virt;
> > +}
> > +
> > +/**
> >   * Map physical device resource into hypervisor address space
> >   * @param phys Physical address of the resource.
> >   * @param size Size of the resource.
> >   *
> >   * @return Virtual mapping address of the resource or NULL on error.
> > + *
> > + * @see paging_unmap
> >   */
> >  void *paging_map_device(unsigned long phys, unsigned long size)
> >  {
> > @@ -472,17 +501,17 @@ void *paging_map_device(unsigned long phys, unsigned long size)
> >  }
> >
> >  /**
> > - * Unmap physical device resource from hypervisor address space
> > - * @param phys Physical address of the resource.
> > - * @param virt Virtual address of the resource.
> > - * @param size Size of the resource.
> > + * Unmap physical memory from hypervisor address space
> > + * @param phys Physical address to be unmapped.
> > + * @param virt Virtual address to be unmapped.
> > + * @param size Size of memory region.
> >   *
> >   * @note Unmap must use the same parameters as provided to / returned by
> > - *	 paging_map_device().
> > + *	 paging_map_device()/paging_map().
> >   */
> > -void paging_unmap_device(unsigned long phys, void *virt, unsigned long size)
> > +void paging_unmap(unsigned long phys, void *virt, unsigned long size)
> >  {
> > -	/* Cannot fail if paired with paging_map_device. */
> > +	/* Cannot fail if paired with paging_map_device/paging_map. */
> >  	paging_destroy(&hv_paging_structs, (unsigned long)virt, size,
> >  		       PAGING_NON_COHERENT);
> >  	page_free(&remap_pool, virt, PAGES(size));
> > diff --git a/hypervisor/pci.c b/hypervisor/pci.c
> > index 2f95dd7..be37730 100644
> > --- a/hypervisor/pci.c
> > +++ b/hypervisor/pci.c
> > @@ -634,8 +634,7 @@ static int pci_add_physical_device(struct cell *cell,
> >  	return 0;
> >
> >  error_unmap_table:
> > -	paging_unmap_device(device->info->msix_address, device->msix_table,
> > -			    size);
> > +	paging_unmap(device->info->msix_address, device->msix_table, size);
> >  error_remove_dev:
> >  	arch_pci_remove_physical_device(device);
> >  	return err;
> > @@ -656,8 +655,8 @@ static void pci_remove_physical_device(struct pci_device *device)
> >  	if (!device->msix_table)
> >  		return;
> >
> > -	paging_unmap_device(device->info->msix_address, device->msix_table,
> > -			    device->info->msix_region_size);
> > +	paging_unmap(device->info->msix_address, device->msix_table,
> > +		     device->info->msix_region_size);
> >
> >  	if (device->msix_vectors != device->msix_vector_array)
> >  		page_free(&mem_pool, device->msix_vectors,
> > --
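For reference, opting a cell in on the config side would then be a
single extra bit in the descriptor flags. A sketch, where everything
besides the flags line is the usual cell-config boilerplate and
"demo-cell" is just a placeholder name:

struct {
        struct jailhouse_cell_desc cell;
        /* ... CPU set, memory regions, etc. ... */
} __attribute__((packed)) config = {
        .cell = {
                .signature = JAILHOUSE_CELL_DESC_SIGNATURE,
                .name = "demo-cell",
                .flags = JAILHOUSE_CELL_PASSIVE_COMMREG |
                         JAILHOUSE_CELL_CLEAR_MEM,
                /* ... */
        },
        /* ... */
};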
-- 
Gustavo Lima Chaves
Intel - Open Source Technology Center