On Wed, Mar 4, 2015 at 2:16 AM, Borislav Petkov b...@alien8.de wrote:
On Wed, Mar 04, 2015 at 12:00:37AM -0800, Yinghai Lu wrote:
commit f47233c2d34f (x86/mm/ASLR: Propagate base load address calculation)
uses an address as the value for kaslr_enabled.
That randomly gets kaslr_enabled set
On Wed, Mar 4, 2015 at 7:54 AM, Jiri Kosina jkos...@suse.cz wrote:
Also this 15-patch series needs to be separated into two patchsets. The
whole series is not appropriate for -rc3, but this particular one at least
is a regression fix that has to go in.
The first 4 should go into v4.0.
On Wed, Mar 4, 2015 at 10:06 AM, Yinghai Lu ying...@kernel.org wrote:
On Wed, Mar 4, 2015 at 12:00 PM, Ingo Molnar mi...@kernel.org wrote:
It is totally unacceptable that you don't do proper analysis of the
patches you submit, and that you don't bother writing proper, readable
changelogs.
Sorry, please check it again:
Subject: [PATCH v4] x86, kaslr: Get
* Yinghai Lu ying...@kernel.org wrote:
* Yinghai Lu ying...@kernel.org wrote:
Hi Yinghai,
On Wed, Mar 04, 2015 at 10:12:58AM -0800, Yinghai Lu wrote:
On Wed, Mar 4, 2015 at 6:58 PM, joeyli j...@suse.com wrote:
After 84c91b7ae was merged into the v3.17 kernel, the hibernate code checks
that the e820 regions have not changed when doing hibernate resume. Without
your patch 8, the hibernate resume check will randomly fail on machines that reserved
Patches 1-7 are kaslr related:
1. Make sure the ZO (arch/x86/boot/compressed/vmlinux) data region is not
overwritten by the final VO (vmlinux) after decompression,
so we can pass data from ZO to VO.
2. Create a new ident mapping for kaslr on 64-bit, so we can cover
a random kernel base above 4G, and also don't need to track
We need to set up that new mapping in the boot::decompress_kernel stage.
Signed-off-by: Yinghai Lu ying...@kernel.org
---
arch/x86/include/asm/page.h | 5 +++
arch/x86/mm/ident_map.c | 74 +
arch/x86/mm/init_64.c | 74
kaslr will support placing the random VO above 4G, so we need to set up an
ident mapping for that range even when we come via the startup_32 path.
At the same time, when booting from a 64-bit bootloader, the bootloader
sets up the ident mapping and enters via ZO startup_64.
Then the pages used for the pagetable need to be avoided when
bp found that data from the boot stage can not be used in the kernel stage.
Those data areas overlap with the kernel's .bss, and clear_bss()
clears them before the code in arch/x86/kernel/setup.c accesses them.
To make the data survive, we should avoid the overlap.
We already move
commit e6023367d779 (x86, kaslr: Prevent .bss from overlaping initrd)
introduced a run_size for kaslr.
We do not need a home-grown run_size;
we should use the real runtime size (including copy/decompress), aka init_size.
See arch/x86/boot/header.S for the details of init_size.
Fixes:
commit f47233c2d34f (x86/mm/ASLR: Propagate base load address calculation)
uses an address as the value for kaslr_enabled.
That randomly gets kaslr_enabled set or cleared,
which causes problems for systems that really have kaslr enabled.
-v2: update changelog.
Fixes: f47233c2d34f (x86/mm/ASLR:
bp found that data from the boot stage can not be used in the kernel stage.
Those data areas overlap with the VO kernel's .bss, and clear_bss()
clears them before the code in arch/x86/kernel/setup.c accesses them.
To make the data survive, we should avoid the overlap.
At first move
Let it reserve setup_data and keep its own list.
Also clear hdr.setup_data, as all handlers already handle or
reserve setup_data locally.
Cc: Bjorn Helgaas bhelg...@google.com
Cc: Matt Fleming matt.flem...@intel.com
Cc: linux-...@vger.kernel.org
Signed-off-by: Yinghai Lu
Cc: Matt Fleming matt.flem...@intel.com
Signed-off-by: Yinghai Lu ying...@kernel.org
---
arch/x86/kernel/kdebugfs.c | 142 -
arch/x86/kernel/setup.c| 17 --
2 files changed, 159 deletions(-)
diff --git a/arch/x86/kernel/kdebugfs.c
So we can let kexec-tools rebuild SETUP_PCI and pass it to the
second kernel.
kexec-tools already builds SETUP_EFI and SETUP_E820EXT.
Cc: Bjorn Helgaas bhelg...@google.com
Cc: linux-...@vger.kernel.org
Signed-off-by: Yinghai Lu ying...@kernel.org
---
arch/x86/pci/common.c | 175
Now we use memblock for early resource reservation/allocation
instead of using the e820 map directly, and setup_data is already
reserved in memblock early.
Also, kexec will generate setup_data and pass a pointer to the second
kernel, so the second kernel will reserve setup_data on its own.
We can kill
The EFI stub code could put them high on 32-bit, or with exactmap=
on a 64-bit config.
Check whether the range is mapped; otherwise allocate a new one and
copy the ROM data into it. That way we can really avoid ioremap.
Signed-off-by: Yinghai Lu ying...@kernel.org
---
arch/x86/pci/common.c | 47
So we can avoid an ioremap every time later.
Cc: Bjorn Helgaas bhelg...@google.com
Cc: linux-...@vger.kernel.org
Signed-off-by: Yinghai Lu ying...@kernel.org
---
arch/x86/include/asm/pci.h | 2 ++
arch/x86/kernel/setup.c| 1 +
arch/x86/pci/common.c | 77
The copy will be in __initdata, and it is small.
We can use a pointer to access the setup_data instead of doing
early_memremap() and early_memunmap() everywhere.
Cc: Matt Fleming matt.flem...@intel.com
Cc: linux-efi@vger.kernel.org
Signed-off-by: Yinghai Lu ying...@kernel.org
---
We will not reserve setup_data in generic code; every handler
needs to reserve and copy it.
The current dtb handling already copies the data, so just add the reserve
code, and simplify it a bit by storing the real dtb size.
Cc: Rob Herring r...@kernel.org
Cc: David Vrabel david.vra...@citrix.com
Now that ZO sits at the end of the buffer, we can find out where the ZO
text and data/bss etc. are.
[input, input+input_size) is the copied compressed kernel, not the whole ZO.
[output, output+init_size) is the buffer for VO.
[input+input_size, output+init_size) is [_text, _end) for ZO.
That will be the first range in
On Wed, Mar 04, 2015 at 12:00:37AM -0800, Yinghai Lu wrote:
When allocating memory for the copy of the FDT that the stub
modifies and passes to the kernel, it uses the current size as
an estimate of how much memory to allocate, and increases it page
by page if it turns out to be too small. However, when loading
the FDT from a UEFI configuration table, the
On 3 March 2015 at 19:03, Roy Franz roy.fr...@linaro.org wrote:
On Tue, Mar 3, 2015 at 1:08 AM, Ard Biesheuvel
ard.biesheu...@linaro.org wrote: