[makedumpfile PATCH RFC v0.1] Implemented the --fill-excluded-pages= feature
When a page is excluded by any of the existing dump levels, that page may still be written to the ELF dump file, depending upon the PFN_EXCLUDED mechanism. The PFN_EXCLUDED mechanism looks for N consecutive "not dumpable" pages; if found, the current ELF segment is closed out and a new ELF segment is started at the next dumpable page. Otherwise, if the PFN_EXCLUDED criterion is not met (that is, there is a mix of dumpable and not-dumpable pages, but not N consecutive not-dumpable pages), all pages are written to the dump file.

This patch implements a mechanism to fill those "not dumpable" pages that are written to the ELF dump file with constant data, rather than the original data. In other words, the dump file still contains the page, but its data is wiped.

The motivation is to prevent real user data from "leaking" into a dump file when that data was asked to be omitted. This is especially important for an effort I am currently working on to allow further refinement of what is allowed to be dumped, all with the goal of protecting user (customer) data.

The patch is simple enough; however, it causes problems with crash: crash is unable to load the resulting ELF dump file. For example, I use the following test scenario for this change:

- Obtain a non-filtered dump file (e.g. dump level 0, no -d option, or a straight copy of /proc/vmcore).

- Run vmcore through 'crash' to ensure it loads ok; test with commands like ps, files, etc.:

  % crash vmlinux vmcore

- Apply this patch and rebuild makedumpfile.

- Run vmcore through makedumpfile *without* --fill-excluded-pages and with filtering, to ensure there are no unintended side effects of the patch:

  % ./makedumpfile -E -d31 -x vmlinux vmcore newvmcore

- Run the new vmcore through crash to ensure it still loads ok; test with commands like ps, files, etc.:

  % crash vmlinux newvmcore

- Run vmcore through makedumpfile *with* --fill-excluded-pages and with filtering, to check for side effects of the patch:

  % ./makedumpfile -E -d31 --fill-excluded-pages=0 -x vmlinux vmcore newvmcore2

- Run the new vmcore through crash to ensure it still loads ok; test with commands like ps, files, etc.:

  % crash vmlinux newvmcore2

But crash yields errors like:

[...]
This GDB was configured as "x86_64-unknown-linux-gnu"...
crash: cannot determine thread return address
please wait... (gathering kmem slab cache data)
crash: invalid kernel virtual address: 1c
        type: "kmem_cache objsize/object_size"

If the patch is correct/accurate, then that may mean that crash is using data which it should not be. The more likely scenario is that the patch is not correct/accurate, and I'm corrupting the dump file. Please provide feedback!!

---
 makedumpfile.c | 28 +++++++++++++++++++++-------
 makedumpfile.h |  3 +++
 2 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/makedumpfile.c b/makedumpfile.c
index e69b6df..3f9816a 100644
--- a/makedumpfile.c
+++ b/makedumpfile.c
@@ -7102,7 +7102,7 @@ out:
 
 int
 write_elf_load_segment(struct cache_data *cd_page, unsigned long long paddr,
-		       off_t off_memory, long long size)
+		       off_t off_memory, long long size, struct cycle *cycle)
 {
 	long page_size = info->page_size;
 	long long bufsz_write;
@@ -7126,10 +7126,18 @@ write_elf_load_segment(struct cache_data *cd_page, unsigned long long paddr,
 		else
 			bufsz_write = size;
 
-		if (read(info->fd_memory, buf, bufsz_write) != bufsz_write) {
-			ERRMSG("Can't read the dump memory(%s). %s\n",
-			       info->name_memory, strerror(errno));
-			return FALSE;
+		if (info->flag_fill_excluded_pages && !is_dumpable(info->bitmap2, paddr_to_pfn(paddr), cycle)) {
+			unsigned k;
+			unsigned long *p = (unsigned long *)buf;
+			for (k = 0; k < info->page_size; k += sizeof(unsigned long)) {
+				*p++ = info->fill_excluded_pages_value;
+			}
+		} else {
+			if (read(info->fd_memory, buf, bufsz_write) != bufsz_write) {
+				ERRMSG("Can't read the dump memory(%s). %s\n",
+				       info->name_memory, strerror(errno));
+				return FALSE;
+			}
 		}
 		filter_data_buffer((unsigned char *)buf, paddr, bufsz_write);
 		paddr += bufsz_write;
@@ -7396,7 +7404,7 @@ write_elf_pages_cyclic(struct cache_data *cd_header, struct cache_data *cd_page)
 		 */
 		if (load.p_filesz)
 			if (!write_elf_load_segment(cd_page, paddr,
-						    off_memory, load.p_filesz))
+
Re: [PATCH v10 00/38] x86: Secure Memory Encryption (AMD)
On 7/18/2017 7:03 AM, Thomas Gleixner wrote:
> On Mon, 17 Jul 2017, Tom Lendacky wrote:
>> This patch series provides support for AMD's new Secure Memory Encryption
>> (SME) feature.
>>
>> SME can be used to mark individual pages of memory as encrypted through the
>> page tables. A page of memory that is marked encrypted will be automatically
>> decrypted when read from DRAM and will be automatically encrypted when
>> written to DRAM. Details on SME can found in the links below.
>>
>> The SME feature is identified through a CPUID function and enabled through
>> the SYSCFG MSR. Once enabled, page table entries will determine how the
>> memory is accessed. If a page table entry has the memory encryption mask set,
>> then that memory will be accessed as encrypted memory. The memory encryption
>> mask (as well as other related information) is determined from settings
>> returned through the same CPUID function that identifies the presence of the
>> feature.
>>
>> The approach that this patch series takes is to encrypt everything possible
>> starting early in the boot where the kernel is encrypted. Using the page
>> table macros the encryption mask can be incorporated into all page table
>> entries and page allocations. By updating the protection map, userspace
>> allocations are also marked encrypted. Certain data must be accounted for
>> as having been placed in memory before SME was enabled (EFI, initrd, etc.)
>> and accessed accordingly.
>>
>> This patch series is a pre-cursor to another AMD processor feature called
>> Secure Encrypted Virtualization (SEV). The support for SEV will build upon
>> the SME support and will be submitted later. Details on SEV can be found
>> in the links below.
>
> Well done series. Thanks to all people involved, especially Tom and Boris!
> It was a pleasure to review that.
>
> Reviewed-by: Thomas Gleixner

A big thanks from me to everyone that helped review this. I truly appreciate all the time that everyone put into this - especially Boris, who helped guide this series from the start.
Thanks,
Tom

___
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec
Re: [PATCH v10 00/38] x86: Secure Memory Encryption (AMD)
On Mon, 17 Jul 2017, Tom Lendacky wrote:
> This patch series provides support for AMD's new Secure Memory Encryption
> (SME) feature.
>
> SME can be used to mark individual pages of memory as encrypted through the
> page tables. A page of memory that is marked encrypted will be automatically
> decrypted when read from DRAM and will be automatically encrypted when
> written to DRAM. Details on SME can found in the links below.
>
> The SME feature is identified through a CPUID function and enabled through
> the SYSCFG MSR. Once enabled, page table entries will determine how the
> memory is accessed. If a page table entry has the memory encryption mask set,
> then that memory will be accessed as encrypted memory. The memory encryption
> mask (as well as other related information) is determined from settings
> returned through the same CPUID function that identifies the presence of the
> feature.
>
> The approach that this patch series takes is to encrypt everything possible
> starting early in the boot where the kernel is encrypted. Using the page
> table macros the encryption mask can be incorporated into all page table
> entries and page allocations. By updating the protection map, userspace
> allocations are also marked encrypted. Certain data must be accounted for
> as having been placed in memory before SME was enabled (EFI, initrd, etc.)
> and accessed accordingly.
>
> This patch series is a pre-cursor to another AMD processor feature called
> Secure Encrypted Virtualization (SEV). The support for SEV will build upon
> the SME support and will be submitted later. Details on SEV can be found
> in the links below.

Well done series. Thanks to all people involved, especially Tom and Boris!
It was a pleasure to review that.

Reviewed-by: Thomas Gleixner