Re: [PATCH] powerpc/boot: Remove duplicate typedefs from libfdt_env.h
On Thu, Mar 29, 2018 at 07:26:52PM -0700, Mark Greer wrote:
> On Thu, Mar 29, 2018 at 07:22:50PM -0700, Mark Greer wrote:
> > On Tue, Mar 20, 2018 at 10:55:07AM +1100, Oliver wrote:
> > >
> > > Yeah that's pretty odd. It might be a bug in your specific version of
> > > GCC since I can't replicate it with this dumb test case:
> > >
> > > #include <stdio.h>
> > > typedef unsigned int u32;
> > >
> > > typedef u32 uint32_t;
> > > typedef u32 uint32_t;
> > >
> > > int main(void) {
> > > 	uint32_t test = 0;
> > > 	printf("%u\n", test);
> > > 	return 0;
> > > }
> > >
> > > Does that result in an error?
> >
> > Hi Oliver. I'm very sorry for the long delay in responding.
> >
> > This fails to compile too:
> >
> > $ cat test.c
> > #include <stdio.h>
> > typedef unsigned int u32;
> >
> > typedef u32 uint32_t;
> > typedef u32 uint32_t;
> >
> > int main(void) {
> > 	uint32_t test = 0;
> > 	printf("%u\n", test);
> > 	return 0;
> > }
> > $
> > $ powerpc-linux-gnu-gcc -o test test.c
> > test.c:5:13: error: redefinition of typedef 'uint32_t'
> > test.c:4:13: note: previous declaration of 'uint32_t' was here
>
> And I meant to add:
>
> $ powerpc-linux-gnu-gcc --version
> powerpc-linux-gnu-gcc (Sourcery G++ Lite 2010.09-55) 4.5.1
> Copyright (C) 2010 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions. There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>
> So, yeah, it's really old.
>
> I'll get a newer one and test it.

I downloaded this version from denx.de (thank you, Wolfgang):

$ powerpc-linux-gcc --version
powerpc-linux-gcc (GCC) 4.8.2
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

It's still dated but it's the best prebuilt version that I could find easily
available. With this version, the kernel (ppc6xx_defconfig) built without
issue. Thanks to all who helped me through this.
To Ben or whoever, I think the original patch is still worth applying even if it isn't critical. Thanks, Mark --
Re: [PATCH] PCI/IOV: Add missing prototype for powerpc specific
On Thu, Mar 22, 2018 at 09:33:55PM +0100, Mathieu Malaterre wrote:
> Some prototypes for weak functions were missing for powerpc specific
> functions. Add the missing prototypes to the CONFIG_PCI_IOV block. This
> fixes the following three warnings treated as error when using W=1:
>
> arch/powerpc/kernel/pci-common.c:236:17: error: no previous prototype for
> ‘pcibios_default_alignment’ [-Werror=missing-prototypes]
> arch/powerpc/kernel/pci-common.c:253:5: error: no previous prototype for
> ‘pcibios_sriov_enable’ [-Werror=missing-prototypes]
> arch/powerpc/kernel/pci-common.c:261:5: error: no previous prototype for
> ‘pcibios_sriov_disable’ [-Werror=missing-prototypes]
>
> Also in commit 978d2d683123 ("PCI: Add pcibios_iov_resource_alignment()
> interface") a new function was added but the prototype was located in the
> main header instead of the CONFIG_PCI_IOV specific section. Move this
> function next to the newly added ones.
>
> Signed-off-by: Mathieu Malaterre

Applied to pci/virtualization for v4.17, thanks!
> ---
>  include/linux/pci.h | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 024a1beda008..f43b43b9b643 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -1295,7 +1295,6 @@ unsigned char pci_bus_max_busnr(struct pci_bus *bus);
>  void pci_setup_bridge(struct pci_bus *bus);
>  resource_size_t pcibios_window_alignment(struct pci_bus *bus,
>  					 unsigned long type);
> -resource_size_t pcibios_iov_resource_alignment(struct pci_dev *dev, int resno);
>
>  #define PCI_VGA_STATE_CHANGE_BRIDGE (1 << 0)
>  #define PCI_VGA_STATE_CHANGE_DECODES (1 << 1)
>
> @@ -1923,6 +1922,7 @@ void pcibios_release_device(struct pci_dev *dev);
>  void pcibios_penalize_isa_irq(int irq, int active);
>  int pcibios_alloc_irq(struct pci_dev *dev);
>  void pcibios_free_irq(struct pci_dev *dev);
> +resource_size_t pcibios_default_alignment(void);
>
>  #ifdef CONFIG_HIBERNATE_CALLBACKS
>  extern struct dev_pm_ops pcibios_pm_ops;
>
> @@ -1955,6 +1955,11 @@ int pci_sriov_set_totalvfs(struct pci_dev *dev, u16 numvfs);
>  int pci_sriov_get_totalvfs(struct pci_dev *dev);
>  resource_size_t pci_iov_resource_size(struct pci_dev *dev, int resno);
>  void pci_vf_drivers_autoprobe(struct pci_dev *dev, bool probe);
> +
> +/* Arch may override these (weak) */
> +int pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs);
> +int pcibios_sriov_disable(struct pci_dev *pdev);
> +resource_size_t pcibios_iov_resource_alignment(struct pci_dev *dev, int resno);
>  #else
>  static inline int pci_iov_virtfn_bus(struct pci_dev *dev, int id)
>  {
> -- 
> 2.11.0
>
Re: [PATCH v2] crypto: talitos - fix IPsec cipher in length
On Thu, Mar 22, 2018 at 10:57:01AM +0100, Christophe Leroy wrote:
> For SEC 2.x+, cipher in length must contain only the ciphertext length.
> In case of using hardware ICV checking, the ICV length is provided via
> the "extent" field of the descriptor pointer.
>
> Cc: # 4.8+
> Fixes: 549bd8bc5987 ("crypto: talitos - Implement AEAD for SEC1 using
> HMAC_SNOOP_NO_AFEU")
> Reported-by: Horia Geantă
> Signed-off-by: Christophe Leroy

Patch applied. Thanks.
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[PATCH 2/2] powerpc/pseries: Restore default security feature flags on setup
After migration the security feature flags might have changed (e.g.,
destination system with unpatched firmware), but some flags are not
set/cleared again in init_cpu_char_feature_flags() because it assumes
the security flags to be the defaults.

Additionally, if the H_GET_CPU_CHARACTERISTICS hypercall fails then
init_cpu_char_feature_flags() does not run again, which potentially
might leave the system in an insecure or sub-optimal configuration.

So, just restore the security feature flags to the defaults assumed by
init_cpu_char_feature_flags() so it can set/clear them correctly, and to
ensure safe settings are in place in case the hypercall fails.

Fixes: f636c14790ea ("powerpc/pseries: Set or clear security feature flags")
Signed-off-by: Mauricio Faria de Oliveira
---
 arch/powerpc/platforms/pseries/setup.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index b11564f..2581fc8 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -462,6 +462,10 @@ static void __init find_and_init_phbs(void)
 
 static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
 {
+	/*
+	 * The features below are disabled by default, so we instead look to see
+	 * if firmware has *enabled* them, and set them if so.
+	 */
 	if (result->character & H_CPU_CHAR_SPEC_BAR_ORI31)
 		security_ftr_set(SEC_FTR_SPEC_BAR_ORI31);
 
@@ -501,6 +505,13 @@ void pseries_setup_rfi_flush(void)
 	bool enable;
 	long rc;
 
+	/*
+	 * Set features to the defaults assumed by init_cpu_char_feature_flags()
+	 * so it can set/clear again any features that might have changed after
+	 * migration, and in case the hypercall fails and it is not even called.
+	 */
+	powerpc_security_features = SEC_FTR_DEFAULT;
+
 	rc = plpar_get_cpu_characteristics(&result);
 	if (rc == H_SUCCESS)
 		init_cpu_char_feature_flags(&result);
-- 
1.8.3.1
[PATCH 1/2] powerpc: Move default security feature flags
This moves the definition of the default security feature flags
(i.e., enabled by default) closer to the security feature flags.

This can be used to restore current flags to the default flags.

Signed-off-by: Mauricio Faria de Oliveira
---
 arch/powerpc/include/asm/security_features.h | 8
 arch/powerpc/kernel/security.c               | 7 +--
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/security_features.h b/arch/powerpc/include/asm/security_features.h
index 400a905..fa4d2e1 100644
--- a/arch/powerpc/include/asm/security_features.h
+++ b/arch/powerpc/include/asm/security_features.h
@@ -63,4 +63,12 @@ static inline bool security_ftr_enabled(unsigned long feature)
 // Firmware configuration indicates user favours security over performance
 #define SEC_FTR_FAVOUR_SECURITY		0x0200ull
 
+
+// Features enabled by default
+#define SEC_FTR_DEFAULT \
+	(SEC_FTR_L1D_FLUSH_HV | \
+	 SEC_FTR_L1D_FLUSH_PR | \
+	 SEC_FTR_BNDS_CHK_SPEC_BAR | \
+	 SEC_FTR_FAVOUR_SECURITY)
+
 #endif /* _ASM_POWERPC_SECURITY_FEATURES_H */
diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
index 2cee3dc..bab5a27 100644
--- a/arch/powerpc/kernel/security.c
+++ b/arch/powerpc/kernel/security.c
@@ -11,12 +11,7 @@
 
 #include
 
-unsigned long powerpc_security_features __read_mostly = \
-	SEC_FTR_L1D_FLUSH_HV | \
-	SEC_FTR_L1D_FLUSH_PR | \
-	SEC_FTR_BNDS_CHK_SPEC_BAR | \
-	SEC_FTR_FAVOUR_SECURITY;
-
+unsigned long powerpc_security_features __read_mostly = SEC_FTR_DEFAULT;
 
 ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
 {
-- 
1.8.3.1
Re: [RFC] new SYSCALL_DEFINE/COMPAT_SYSCALL_DEFINE wrappers
On Fri, Mar 30, 2018 at 12:58:02PM +0200, Ingo Molnar wrote:
> * John Paul Adrian Glaubitz wrote:
>
> > On 03/27/2018 12:40 PM, Linus Torvalds wrote:
> > > On Mon, Mar 26, 2018 at 4:37 PM, John Paul Adrian Glaubitz wrote:
> > >>
> > >> What about a tarball with a minimal Debian x32 chroot? Then you can
> > >> install interesting packages you would like to test yourself.
>
> Here's the direct download link:
>
> $ wget https://people.debian.org/~glaubitz/chroots/debian-x32-unstable.tar.gz
>
> Seems to work fine here (on a distro kernel) even if I extract all the files as a
> non-root user and do:
>
> ~/s/debian-x32-unstable> fakechroot /usr/sbin/chroot . /usr/bin/dpkg -l | tail -2
>
> ERROR: ld.so: object 'libfakechroot.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
> ii util-linux:x32 2.31.1-0.5 x32 miscellaneous system utilities
> ii zlib1g:x32 1:1.2.8.dfsg-5 x32 compression library - runtime
>
> So that 'dpkg' instance appears to be running inside the chroot environment and is
> listing x32 installed packages.
>
> Although I did get this warning:
>
> ERROR: ld.so: object 'libfakechroot.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
>
> Even with that warning, is it still a sufficiently complex test of x32 syscall
> code paths?

Instead of mucking with fakechroot, which would require installing its :x32 part
inside the guest or running the test as root, what about using any random static
binary? For example, a shell like sash or bash-static would have a decentish
syscall coverage even by itself.

I've extracted sash from
http://ftp.ports.debian.org/debian-ports//pool-x32/main/s/sash/sash_3.8-4_x32.deb
and placed it at https://angband.pl/tmp/sash.x32

$ sha256sum sash.x32
de24097c859b313fa422af742b648c9d731de6b33b98cb995658d1da16398456  sash.x32

Obviously, you can compile a static binary that uses whatever syscalls you want.
Without a native chroot, you can "gcc -mx32", although you'd need some kind of
libc unless your program is stand-alone.

It might be worth mentioning my "arch-test" project:
https://github.com/kilobyte/arch-test

Because of the many toolchain pieces it needs, you want a prebuilt copy:
https://github.com/kilobyte/arch-test/releases/download/v0.10/arch-test_prebuilt_0.10.tar.xz
https://github.com/kilobyte/arch-test/releases/download/v0.10/arch-test_prebuilt_0.10.tar.xz.asc

While it has _extremely_ small coverage of syscalls (just write() and _exit(),
enough to check endianness and pointer width), concentrating instead on
instruction set inadequacies (broken SWP on arm, POWER7 vs POWER8, powerpc vs
powerpcspe, etc), it provides minimal test binaries for a wide range of
architectures.

Meow!
-- 
⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢰⠒⠀⣿⡁ I was born a dumb, ugly and work-loving kid, then I got swapped on
⢿⡄⠘⠷⠚⠋⠀ the maternity ward.
⠈⠳⣄
Linux 4.16: Reported regressions as of Friday, 2018-03-30
On 26.03.2018 01:37, Linus Torvalds wrote:
> […] Anyway. Go out and test. And let's hope next week is nice and calm and
> I can release the final 4.16 next Sunday without any extra rc's.
>
>              Linus

Hi! Find below my seventh regression report for Linux 4.16; it's a "the
final release is getting closer" special release. It lists 7 regressions
I'm currently aware of. 1 was fixed since the report I sent on Tuesday;
1 is new.

Are you aware of any other regressions that got introduced this
development cycle? Then please let me know by mail (a simple bounce or
forward to the sender of this email address is enough!). And please tell
me if there is anything in the report that shouldn't be there.

Ciao, Thorsten

== Current regressions ==

Error updating SMART data during runtime and could not connect to lv
["Possible Regression"]
- Status: Stalled afaics
- Reported: 2018-03-11
  https://marc.info/?l=linux-kernel=152075643627082
  https://bugzilla.kernel.org/show_bug.cgi?id=199077
- Note: Two issues discussed in that thread; only one is a regression
  (latency issues in the MU03 version of the firmware, triggered by
  polling SMART data, which causes lvmetad to timeout in some cases)
- Last known developer activity: 2018-03-19
  https://marc.info/?l=linux-kernel=152145306610330
- Other relevant links:
  https://marc.info/?l=linux-kernel=152146297613525
  https://marc.info/?l=linux-scsi=152095303312164=2

15% longer running times on lvm2 test suite
- Status: Stalled afaics
- Cause: https://git.kernel.org/torvalds/c/44c02a2c3dc5
- Reported: 2018-03-11
  https://marc.info/?l=linux-kernel=152077333230274
- Note: Seems the real problem is in the way the test scripts interact
  with the kernel
- Last known developer activity: 2018-03-13
  https://marc.info/?l=linux-kernel=152097761921525

AMDGPU Fury X random screen flicker on Linux kernel 4.16rc5
- Status: waiting for bisect
- Reported: 2018-03-13
  https://bugzilla.kernel.org/show_bug.cgi?id=199101

ASUS XG-C100C 10G Network Adapter no longer working
- Status: got driver maintainer involved who asked reporter for more details
- Reported: 2018-03-22
  https://bugzilla.kernel.org/show_bug.cgi?id=199177

multi_v7_defconfig fails to boot on many OMAP systems
- Status: patch available: "clk: ti: fix flag space conflict with clkctrl clocks"
  https://marc.info/?l=linux-arm-kernel=152217288709609=2
- Cause: https://git.kernel.org/torvalds/c/49159a9dc3da
- Reported: 2018-03-23
  https://marc.info/?l=linux-clk=152198452423677=2
- Last known developer activity: 2018-03-27
  https://marc.info/?l=linux-clk=152199237525182=2

hugetlbfs overflow checking regression on 32bit
- Status: patch was proposed, but has issues, too
- Cause: https://git.kernel.org/torvalds/c/63489f8e8211
- Reported: 2018-03-29
  https://marc.info/?l=linux-kernel=152229704211382=2
- Last known developer activity: 2018-03-29
  https://marc.info/?l=linux-mm=152235614429445=2
- Other relevant links:
  https://marc.info/?l=linux-kernel=152229710411390=2

== Waiting for clarification from reporter ==

Interrupt storm after suspend causes one busy kworker
- Status: Still waiting for data from reporter
- Reported: 2018-02-25
  https://bugzilla.kernel.org/show_bug.cgi?id=198929

== Fixed since last report ==

Dell R640 does not boot due to SCSI/SATA failure
- Status: Fixed by 2f31115e940c 8b834bff1b73 adbe552349f2 c3506df85091 b5b6e8c8d3b4
- Cause: https://git.kernel.org/torvalds/c/84676c1f21e8
- Reported: 2018-02-22
  https://marc.info/?l=linux-kernel=151931128006031
- Note: Thx Artem and Dsterba for pointers
[PATCH] powerpc: fix spelling mistake: "Usupported" -> "Unsupported"
From: Colin Ian King

Trivial fix to spelling mistake in bootx_printf message text

Signed-off-by: Colin Ian King
---
 arch/powerpc/platforms/powermac/bootx_init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/powermac/bootx_init.c b/arch/powerpc/platforms/powermac/bootx_init.c
index c3c9bbb3573a..ca60f3ef7ef6 100644
--- a/arch/powerpc/platforms/powermac/bootx_init.c
+++ b/arch/powerpc/platforms/powermac/bootx_init.c
@@ -519,7 +519,7 @@ void __init bootx_init(unsigned long r3, unsigned long r4)
 			;
 	}
 	if (bi->architecture != BOOT_ARCH_PCI) {
-		bootx_printf(" !!! WARNING - Usupported machine"
+		bootx_printf(" !!! WARNING - Unsupported machine"
			     " architecture !\n");
		for (;;)
			;
-- 
2.15.1
Re: [PATCH] powerpc/mm/hash: Move the slb_addr_limit check within PPC_MM_SLICES
Michael Ellerman wrote:
> "Aneesh Kumar K.V" writes:
>
> > Should not have any impact, because we always select PPC_MM_SLICES
> > these days. Nevertheless it is good to indicate that slb_addr_limit
> > is available only with slice code.
>
> That file can only be built if PPC_MM_SLICES=y. So let's just remove
> the ifdef entirely.
>
> These days PPC_MM_SLICES == PPC_BOOK3S_64, so we should remove
> PPC_MM_SLICES #defines wherever possible and replace them with
> PPC_BOOK3S_64 otherwise IMO.

PPC8xx also selects PPC_MM_SLICES when hugepages is selected.

Christophe

> cheers
>
> diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
> index c66cb06e73a1..337ef162851d 100644
> --- a/arch/powerpc/mm/slb_low.S
> +++ b/arch/powerpc/mm/slb_low.S
> @@ -166,6 +166,8 @@ END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
>   */
>  	cmpdi	r9, 0
>  	bne-	8f
> +
> +#ifdef CONFIG_PPC_MM_SLICES
>  	/*
>  	 * user space make sure we are within the allowed limit
>  	 */
> @@ -183,7 +185,6 @@ END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
>  	 * really do dynamic patching unfortunately as processes might flip
>  	 * between 4k and 64k standard page size
>  	 */
> -#ifdef CONFIG_PPC_MM_SLICES
>  	/* r10 have esid */
>  	cmpldi	r10,16	/* below SLICE_LOW_TOP */
> -- 
> 2.14.3
[RFC 3/3] powerpc/mce: Handle memcpy_mcsafe
Add a blocking notifier callback to be called in real-mode on machine
check exceptions for UE (ld/st) errors only. The patch registers a
callback on boot to be notified of machine check exceptions and returns
a NOTIFY_STOP when a page of interest is seen as the source of the
machine check exception. This page of interest is a ZONE_DEVICE page
and hence for now, for memcpy_mcsafe to work, the page needs to belong
to ZONE_DEVICE and memcpy_mcsafe should be used to access the memory.

The patch also modifies the NIP of the exception context to go back to
the fixup handler (in memcpy_mcsafe) and does not print any error
message as the error is treated as returned via a return value and
handled.

Signed-off-by: Balbir Singh
---
 arch/powerpc/include/asm/mce.h |  3 +-
 arch/powerpc/kernel/mce.c      | 77 --
 2 files changed, 77 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/mce.h b/arch/powerpc/include/asm/mce.h
index 3a1226e9b465..a76638e3e47e 100644
--- a/arch/powerpc/include/asm/mce.h
+++ b/arch/powerpc/include/asm/mce.h
@@ -125,7 +125,8 @@ struct machine_check_event {
 			enum MCE_UeErrorType ue_error_type:8;
 			uint8_t		effective_address_provided;
 			uint8_t		physical_address_provided;
-			uint8_t		reserved_1[5];
+			uint8_t		error_return;
+			uint8_t		reserved_1[4];
 			uint64_t	effective_address;
 			uint64_t	physical_address;
 			uint8_t		reserved_2[8];
diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
index efdd16a79075..b9e4881fa8c5 100644
--- a/arch/powerpc/kernel/mce.c
+++ b/arch/powerpc/kernel/mce.c
@@ -28,7 +28,9 @@
 #include
 #include
 #include
+#include
+#include
 
 #include
 #include
@@ -54,6 +56,52 @@ static struct irq_work mce_event_process_work = {
 
 DECLARE_WORK(mce_ue_event_work, machine_process_ue_event);
 
+static BLOCKING_NOTIFIER_HEAD(mce_notifier_list);
+
+int register_mce_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_register(&mce_notifier_list, nb);
+}
+EXPORT_SYMBOL_GPL(register_mce_notifier);
+
+int unregister_mce_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_unregister(&mce_notifier_list, nb);
+}
+EXPORT_SYMBOL_GPL(unregister_mce_notifier);
+
+
+static int check_memcpy_mcsafe(struct notifier_block *nb,
+			       unsigned long val, void *data)
+{
+	/*
+	 * val contains the physical_address of the bad address
+	 */
+	unsigned long pfn = val >> PAGE_SHIFT;
+	struct page *page = realmode_pfn_to_page(pfn);
+	int rc = NOTIFY_DONE;
+
+	if (!page)
+		goto out;
+
+	if (is_zone_device_page(page)) /* for HMM and PMEM */
+		rc = NOTIFY_STOP;
+out:
+	return rc;
+}
+
+struct notifier_block memcpy_mcsafe_nb = {
+	.priority = 0,
+	.notifier_call = check_memcpy_mcsafe,
+};
+
+int mce_mcsafe_register(void)
+{
+	register_mce_notifier(&memcpy_mcsafe_nb);
+	return 0;
+}
+arch_initcall(mce_mcsafe_register);
+
 static void mce_set_error_info(struct machine_check_event *mce,
 			       struct mce_error_info *mce_err)
 {
@@ -151,9 +199,31 @@ void save_mce_event(struct pt_regs *regs, long handled,
 			mce->u.ue_error.effective_address_provided = true;
 			mce->u.ue_error.effective_address = addr;
 			if (phys_addr != ULONG_MAX) {
+				int rc;
+				const struct exception_table_entry *entry;
+
+				/*
+				 * Once we have the physical address, we check to
+				 * see if the current nip has a fixup entry.
+				 * Having a fixup entry plus the notifier stating
+				 * that it can handle the exception is an indication
+				 * that we should return to the fixup entry and
+				 * return an error from there
+				 */
 				mce->u.ue_error.physical_address_provided = true;
 				mce->u.ue_error.physical_address = phys_addr;
-				machine_check_ue_event(mce);
+
+				rc = blocking_notifier_call_chain(&mce_notifier_list,
+								  phys_addr, NULL);
+				if (rc & NOTIFY_STOP_MASK) {
+					entry = search_exception_tables(regs->nip);
+					if (entry != NULL) {
+						mce->u.ue_error.error_return = 1;
+						regs->nip = extable_fixup(entry);
+					} else
[RFC 2/3] powerpc/memcpy: Add memcpy_mcsafe for pmem
The pmem infrastructure uses memcpy_mcsafe in the pmem layer so as to
convert machine check exceptions into a return value on failure in case
a machine check exception is encountered during the memcpy.

This patch largely borrows from the copyuser_power7 logic and does not
add the VMX optimizations, largely to keep the patch simple. If needed
those optimizations can be folded in.

Signed-off-by: Balbir Singh
---
 arch/powerpc/include/asm/string.h   |   2 +
 arch/powerpc/lib/Makefile           |   2 +-
 arch/powerpc/lib/memcpy_mcsafe_64.S | 212
 3 files changed, 215 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/lib/memcpy_mcsafe_64.S

diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
index 9b8cedf618f4..b7e872a64726 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -30,7 +30,9 @@ extern void * memcpy_flushcache(void *,const void *,__kernel_size_t);
 #ifdef CONFIG_PPC64
 #define __HAVE_ARCH_MEMSET32
 #define __HAVE_ARCH_MEMSET64
+#define __HAVE_ARCH_MEMCPY_MCSAFE
 
+extern int memcpy_mcsafe(void *dst, const void *src, __kernel_size_t sz);
 extern void *__memset16(uint16_t *, uint16_t v, __kernel_size_t);
 extern void *__memset32(uint32_t *, uint32_t v, __kernel_size_t);
 extern void *__memset64(uint64_t *, uint64_t v, __kernel_size_t);
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 3c29c9009bbf..048afee9f518 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -24,7 +24,7 @@ endif
 
 obj64-y	+= copypage_64.o copyuser_64.o mem_64.o hweight_64.o \
 	   copyuser_power7.o string_64.o copypage_power7.o memcpy_power7.o \
-	   memcpy_64.o memcmp_64.o pmem.o
+	   memcpy_64.o memcmp_64.o pmem.o memcpy_mcsafe_64.o
 
 obj64-$(CONFIG_SMP)	+= locks.o
 obj64-$(CONFIG_ALTIVEC)	+= vmx-helper.o
diff --git a/arch/powerpc/lib/memcpy_mcsafe_64.S b/arch/powerpc/lib/memcpy_mcsafe_64.S
new file mode 100644
index ..e7eaa9b6cded
--- /dev/null
+++ b/arch/powerpc/lib/memcpy_mcsafe_64.S
@@ -0,0 +1,212 @@
+/* 
SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) IBM Corporation, 2011
+ * Derived from copyuser_power7.s by Anton Blanchard
+ * Author - Balbir Singh
+ */
+#include
+#include
+
+	.macro err1
+100:
+	EX_TABLE(100b,.Ldo_err1)
+	.endm
+
+	.macro err2
+200:
+	EX_TABLE(200b,.Ldo_err2)
+	.endm
+
+.Ldo_err2:
+	ld	r22,STK_REG(R22)(r1)
+	ld	r21,STK_REG(R21)(r1)
+	ld	r20,STK_REG(R20)(r1)
+	ld	r19,STK_REG(R19)(r1)
+	ld	r18,STK_REG(R18)(r1)
+	ld	r17,STK_REG(R17)(r1)
+	ld	r16,STK_REG(R16)(r1)
+	ld	r15,STK_REG(R15)(r1)
+	ld	r14,STK_REG(R14)(r1)
+	addi	r1,r1,STACKFRAMESIZE
+.Ldo_err1:
+	li	r3,-EFAULT
+	blr
+
+
+_GLOBAL(memcpy_mcsafe)
+	cmpldi	r5,16
+	blt	.Lshort_copy
+
+.Lcopy:
+	/* Get the source 8B aligned */
+	neg	r6,r4
+	mtocrf	0x01,r6
+	clrldi	r6,r6,(64-3)
+
+	bf	cr7*4+3,1f
+err1;	lbz	r0,0(r4)
+	addi	r4,r4,1
+err1;	stb	r0,0(r3)
+	addi	r3,r3,1
+
+1:	bf	cr7*4+2,2f
+err1;	lhz	r0,0(r4)
+	addi	r4,r4,2
+err1;	sth	r0,0(r3)
+	addi	r3,r3,2
+
+2:	bf	cr7*4+1,3f
+err1;	lwz	r0,0(r4)
+	addi	r4,r4,4
+err1;	stw	r0,0(r3)
+	addi	r3,r3,4
+
+3:	sub	r5,r5,r6
+	cmpldi	r5,128
+	blt	5f
+
+	mflr	r0
+	stdu	r1,-STACKFRAMESIZE(r1)
+	std	r14,STK_REG(R14)(r1)
+	std	r15,STK_REG(R15)(r1)
+	std	r16,STK_REG(R16)(r1)
+	std	r17,STK_REG(R17)(r1)
+	std	r18,STK_REG(R18)(r1)
+	std	r19,STK_REG(R19)(r1)
+	std	r20,STK_REG(R20)(r1)
+	std	r21,STK_REG(R21)(r1)
+	std	r22,STK_REG(R22)(r1)
+	std	r0,STACKFRAMESIZE+16(r1)
+
+	srdi	r6,r5,7
+	mtctr	r6
+
+	/* Now do cacheline (128B) sized loads and stores. */
+	.align	5
+4:
+err2;	ld	r0,0(r4)
+err2;	ld	r6,8(r4)
+err2;	ld	r7,16(r4)
+err2;	ld	r8,24(r4)
+err2;	ld	r9,32(r4)
+err2;	ld	r10,40(r4)
+err2;	ld	r11,48(r4)
+err2;	ld	r12,56(r4)
+err2;	ld	r14,64(r4)
+err2;	ld	r15,72(r4)
+err2;	ld	r16,80(r4)
+err2;	ld	r17,88(r4)
+err2;	ld	r18,96(r4)
+err2;	ld	r19,104(r4)
+err2;	ld	r20,112(r4)
+err2;	ld	r21,120(r4)
+	addi	r4,r4,128
+err2;	std	r0,0(r3)
+err2;	std	r6,8(r3)
+err2;	std	r7,16(r3)
+err2;	std	r8,24(r3)
+err2;	std	r9,32(r3)
+err2;	std	r10,40(r3)
+err2;	std	r11,48(r3)
+err2;	std	r12,56(r3)
+err2;	std	r14,64(r3)
+err2;	std	r15,72(r3)
+err2;	std	r16,80(r3)
+err2;	std	r17,88(r3)
+err2;	std	r18,96(r3)
+err2;
[RFC 1/3] powerpc/mce: Bug fixes for MCE handling in kernel space
The code currently assumes PAGE_SHIFT as the shift value of the pfn.
This works correctly (mostly) for user space pages, but the correct
thing to do is:

1. Extract the shift value returned via the pte-walk APIs.
2. Use the shift value to access the instruction address.

Note, the final physical address still uses PAGE_SHIFT for computation.
handle_ierror() is not modified and handle_derror() is modified just
for extracting the correct instruction address.

Signed-off-by: Balbir Singh
---
 arch/powerpc/kernel/mce_power.c | 17 ++---
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
index fe6fc63251fe..69c8cc1e8e4f 100644
--- a/arch/powerpc/kernel/mce_power.c
+++ b/arch/powerpc/kernel/mce_power.c
@@ -36,7 +36,8 @@
  * Convert an address related to an mm to a PFN. NOTE: we are in real
  * mode, we could potentially race with page table updates.
  */
-static unsigned long addr_to_pfn(struct pt_regs *regs, unsigned long addr)
+static unsigned long addr_to_pfn(struct pt_regs *regs, unsigned long addr,
+				 unsigned int *shift)
 {
 	pte_t *ptep;
 	unsigned long flags;
@@ -49,9 +50,9 @@ static unsigned long addr_to_pfn(struct pt_regs *regs, unsigned long addr)
 
 	local_irq_save(flags);
 	if (mm == current->mm)
-		ptep = find_current_mm_pte(mm->pgd, addr, NULL, NULL);
+		ptep = find_current_mm_pte(mm->pgd, addr, NULL, shift);
 	else
-		ptep = find_init_mm_pte(addr, NULL);
+		ptep = find_init_mm_pte(addr, shift);
 	local_irq_restore(flags);
 	if (!ptep || pte_special(*ptep))
 		return ULONG_MAX;
@@ -353,13 +354,14 @@ static int mce_find_instr_ea_and_pfn(struct pt_regs *regs, uint64_t *addr,
 	unsigned long pfn, instr_addr;
 	struct instruction_op op;
 	struct pt_regs tmp = *regs;
+	unsigned int shift;
 
-	pfn = addr_to_pfn(regs, regs->nip);
+	pfn = addr_to_pfn(regs, regs->nip, &shift);
 	if (pfn != ULONG_MAX) {
-		instr_addr = (pfn << PAGE_SHIFT) + (regs->nip & ~PAGE_MASK);
+		instr_addr = (pfn << shift) + (regs->nip & ((1 << shift) - 1));
 		instr = *(unsigned int *)(instr_addr);
 		if (!analyse_instr(&op, &tmp, instr)) {
-			pfn = addr_to_pfn(regs, op.ea);
+			pfn = addr_to_pfn(regs, op.ea, &shift);
 			*addr = op.ea;
 			*phys_addr = (pfn << PAGE_SHIFT);
 			return 0;
@@ -437,7 +439,8 @@ static int mce_handle_ierror(struct pt_regs *regs,
 			unsigned long pfn;
 
 			if (get_paca()->in_mce < MAX_MCE_DEPTH) {
-				pfn = addr_to_pfn(regs, regs->nip);
+				pfn = addr_to_pfn(regs, regs->nip,
+						  NULL);
 				if (pfn != ULONG_MAX) {
 					*phys_addr = (pfn << PAGE_SHIFT);
-- 
2.13.6
[RFC 0/3] Add support for memcpy_mcsafe
memcpy_mcsafe() is an API currently used by the pmem subsystem to convert
errors while doing a memcpy (machine check exception errors) to a return
value. This patchset consists of three patches:

1. The first patch is a bug fix to handle machine check errors correctly
   while walking the page tables in kernel mode, due to huge pmd/pud sizes
2. The second patch adds memcpy_mcsafe() support, this is largely derived
   from existing code
3. The third patch registers for callbacks on machine check exceptions and
   in them uses specialized knowledge of the type of page to decide whether
   to handle the MCE as is or to return to a fixup address present in
   memcpy_mcsafe(). If a fixup address is used, then we return an error
   value of -EFAULT to the caller.

Testing:

A large part of the testing was done under a simulator by selectively
inserting machine check exceptions in a test driver doing memcpy_mcsafe
via ioctls.

Balbir Singh (3):
  powerpc/mce: Bug fixes for MCE handling in kernel space
  powerpc/memcpy: Add memcpy_mcsafe for pmem
  powerpc/mce: Handle memcpy_mcsafe

 arch/powerpc/include/asm/mce.h      |   3 +-
 arch/powerpc/include/asm/string.h   |   2 +
 arch/powerpc/kernel/mce.c           |  76 +++-
 arch/powerpc/kernel/mce_power.c     |  17 +-
 arch/powerpc/lib/Makefile           |   2 +-
 arch/powerpc/lib/memcpy_mcsafe_64.S | 225
 6 files changed, 314 insertions(+), 11 deletions(-)
 create mode 100644 arch/powerpc/lib/memcpy_mcsafe_64.S

-- 
2.13.6
[PATCH 2/2] powerpc/mm/radix: Update command line parsing for disable_radix
The kernel parameter disable_radix takes different options:
disable_radix=yes|no|1|0, or just disable_radix. prom_init parsing does
not support these options.

Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/kernel/prom_init.c        | 16 +---
 arch/powerpc/kernel/prom_init_check.sh |  2 +-
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index 0323e073341d..c85333c244e8 100644
--- a/arch/powerpc/kernel/prom_init.c
+++ b/arch/powerpc/kernel/prom_init.c
@@ -171,7 +171,7 @@ static unsigned long __initdata prom_tce_alloc_start;
 static unsigned long __initdata prom_tce_alloc_end;
 #endif
 
-static bool __initdata prom_radix_disable;
+static bool prom_radix_disable __initdata = !IS_ENABLED(CONFIG_PPC_RADIX_MMU_DEFAULT);
 
 struct platform_support {
 	bool hash_mmu;
@@ -641,9 +641,19 @@ static void __init early_cmdline_parse(void)
 
 	opt = strstr(prom_cmd_line, "disable_radix");
 	if (opt) {
-		prom_debug("Radix disabled from cmdline\n");
-		prom_radix_disable = true;
+		opt += 13;
+		if (*opt && *opt == '=') {
+			bool val;
+
+			if (kstrtobool(++opt, &val))
+				prom_radix_disable = false;
+			else
+				prom_radix_disable = val;
+		} else
+			prom_radix_disable = true;
 	}
+	if (prom_radix_disable)
+		prom_debug("Radix disabled from cmdline\n");
 }
 
 #if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
diff --git a/arch/powerpc/kernel/prom_init_check.sh b/arch/powerpc/kernel/prom_init_check.sh
index 12640f7e726b..acb6b9226352 100644
--- a/arch/powerpc/kernel/prom_init_check.sh
+++ b/arch/powerpc/kernel/prom_init_check.sh
@@ -19,7 +19,7 @@ WHITELIST="add_reloc_offset __bss_start __bss_stop copy_and_flush
 _end enter_prom memcpy memset reloc_offset __secondary_hold
 __secondary_hold_acknowledge __secondary_hold_spinloop __start
-strcmp strcpy strlcpy strlen strncmp strstr logo_linux_clut224
+strcmp strcpy strlcpy strlen strncmp strstr kstrtobool logo_linux_clut224
 reloc_got2 kernstart_addr memstart_addr linux_banner _stext
 __prom_init_toc_start __prom_init_toc_end btext_setup_display TOC."
-- 
2.14.3
[PATCH 1/2] powerpc/mm/radix: Parse disable_radix commandline correctly.
From: "Aneesh Kumar K.V"

The kernel parameter disable_radix takes different options:
disable_radix=yes|no|1|0, or just disable_radix. When using the latter
format we get the below error:

`Malformed early option 'disable_radix'`

Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/mm/init_64.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index fdb424a29f03..9c9f8dde31c3 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -372,7 +372,7 @@ static int __init parse_disable_radix(char *p)
 {
 	bool val;
 
-	if (strlen(p) == 0)
+	if (!p)
 		val = true;
 	else if (kstrtobool(p, &val))
 		return -EINVAL;
-- 
2.14.3
[PATCH V3] powerpc/mm/hugetlb: initialize the pagetable cache correctly for hugetlb
From: "Aneesh Kumar K.V"

With 64k page size, we have hugetlb pte entries at the pmd and pud level for book3s64. We don't need to create a separate page table cache for that. With 4k we need to make sure the hugepd page table cache for 16M is placed at the PUD level and 16G at the PGD level. Simplify all this by not using HUGEPD_PD_SHIFT, which is confusing for book3s64.

Without this patch, with 64k page size we create pagetable caches with shift values 10 and 7 which are not used at all.

Fixes: 419df06eea5b ("powerpc: Reduce the PTE_INDEX_SIZE")
Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/mm/hugetlbpage.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index f4153f21d214..99cf86096970 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -122,9 +122,6 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
 #if defined(CONFIG_PPC_FSL_BOOK3E) || defined(CONFIG_PPC_8xx)
 #define HUGEPD_PGD_SHIFT PGDIR_SHIFT
 #define HUGEPD_PUD_SHIFT PUD_SHIFT
-#else
-#define HUGEPD_PGD_SHIFT PUD_SHIFT
-#define HUGEPD_PUD_SHIFT PMD_SHIFT
 #endif
 
 /*
@@ -670,15 +667,26 @@ static int __init hugetlbpage_init(void)
 
 		shift = mmu_psize_to_shift(psize);
 
-		if (add_huge_page_size(1ULL << shift) < 0)
+#ifdef CONFIG_PPC_BOOK3S_64
+		if (shift > PGDIR_SHIFT)
 			continue;
-
+		else if (shift > PUD_SHIFT)
+			pdshift = PGDIR_SHIFT;
+		else if (shift > PMD_SHIFT)
+			pdshift = PUD_SHIFT;
+		else
+			pdshift = PMD_SHIFT;
+#else
 		if (shift < HUGEPD_PUD_SHIFT)
 			pdshift = PMD_SHIFT;
 		else if (shift < HUGEPD_PGD_SHIFT)
 			pdshift = PUD_SHIFT;
 		else
 			pdshift = PGDIR_SHIFT;
+#endif
+
+		if (add_huge_page_size(1ULL << shift) < 0)
+			continue;
 		/*
 		 * if we have pdshift and shift value same, we don't
 		 * use pgt cache for hugepd.
-- 
2.14.3
[PATCH] powerpc/kvm: Fix guest boot issue DAWR cpu feature
SLOF checks for 'sc 1' support by issuing a hcall with H_SET_DABR. With the recent patch making that hcall return H_UNSUPPORTED, we get guest boot failures. SLOF can work with the hcall failure H_HARDWARE for the above hcall. Switch the return value to H_HARDWARE instead of H_UNSUPPORTED so that we don't break guest boot.

Fixes: e8ebedbf ("KVM: PPC: Book3S HV: Return error from h_set_dabr() on POWER9")
Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/kvm/book3s_hv_rmhandlers.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index c4c1b169826a..fdd7350d6c87 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -2576,7 +2576,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 2:
 BEGIN_FTR_SECTION
 	/* POWER9 with disabled DAWR */
-	li	r3, H_UNSUPPORTED
+	li	r3, H_HARDWARE
 	blr
 END_FTR_SECTION_IFCLR(CPU_FTR_DAWR)
 	/* Emulate H_SET_DABR/X on P8 for the sake of compat mode guests */
-- 
2.14.3
Re: [PATCH] powerpc/mm/hash: Move the slb_addr_limit check within PPC_MM_SLICES
"Aneesh Kumar K.V" writes:

> Should not have any impact, because we always select PPC_MM_SLICES these days.
> Nevertheless it is good to indicate that slb_addr_limit is available only
> with slice code.

That file can only be built if PPC_MM_SLICES=y. So let's just remove the ifdef entirely.

These days PPC_MM_SLICES == PPC_BOOK3S_64, so we should remove PPC_MM_SLICES #defines wherever possible and replace them with PPC_BOOK3S_64 otherwise, IMO.

cheers

> diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
> index c66cb06e73a1..337ef162851d 100644
> --- a/arch/powerpc/mm/slb_low.S
> +++ b/arch/powerpc/mm/slb_low.S
> @@ -166,6 +166,8 @@ END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
>  	 */
>  	cmpdi	r9, 0
>  	bne-	8f
> +
> +#ifdef CONFIG_PPC_MM_SLICES
>  	/*
>  	 * user space make sure we are within the allowed limit
>  	 */
> @@ -183,7 +185,6 @@ END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
>  	 * really do dynamic patching unfortunately as processes might flip
>  	 * between 4k and 64k standard page size
>  	 */
> -#ifdef CONFIG_PPC_MM_SLICES
>  	/* r10 have esid */
>  	cmpldi	r10,16
>  	/* below SLICE_LOW_TOP */
> -- 
> 2.14.3
Re: [PATCH] Extract initrd free logic from arch-specific code.
* Shea Levy wrote:

> Now only those architectures that have custom initrd free requirements
> need to define free_initrd_mem.
>
> Signed-off-by: Shea Levy

Please put the Kconfig symbol name this patch introduces into the title as well, so that people know what to grep for.

> ---
>  arch/alpha/mm/init.c      |  8 --------
>  arch/arc/mm/init.c        |  7 -------
>  arch/arm/Kconfig          |  1 +
>  arch/arm64/Kconfig        |  1 +
>  arch/blackfin/Kconfig     |  1 +
>  arch/c6x/mm/init.c        |  7 -------
>  arch/cris/Kconfig         |  1 +
>  arch/frv/mm/init.c        | 11 -----------
>  arch/h8300/mm/init.c      |  7 -------
>  arch/hexagon/Kconfig      |  1 +
>  arch/ia64/Kconfig         |  1 +
>  arch/m32r/Kconfig         |  1 +
>  arch/m32r/mm/init.c       | 11 -----------
>  arch/m68k/mm/init.c       |  7 -------
>  arch/metag/Kconfig        |  1 +
>  arch/microblaze/mm/init.c |  7 -------
>  arch/mips/Kconfig         |  1 +
>  arch/mn10300/Kconfig      |  1 +
>  arch/nios2/mm/init.c      |  7 -------
>  arch/openrisc/mm/init.c   |  7 -------
>  arch/parisc/mm/init.c     |  7 -------
>  arch/powerpc/mm/mem.c     |  7 -------
>  arch/riscv/mm/init.c      |  6 ------
>  arch/s390/Kconfig         |  1 +
>  arch/score/Kconfig        |  1 +
>  arch/sh/mm/init.c         |  7 -------
>  arch/sparc/Kconfig        |  1 +
>  arch/tile/Kconfig         |  1 +
>  arch/um/kernel/mem.c      |  7 -------
>  arch/unicore32/Kconfig    |  1 +
>  arch/x86/Kconfig          |  1 +
>  arch/xtensa/Kconfig       |  1 +
>  init/initramfs.c          |  7 +++++++
>  usr/Kconfig               |  4 ++++
>  34 files changed, 28 insertions(+), 113 deletions(-)

Please also put it into Documentation/features/.

> diff --git a/usr/Kconfig b/usr/Kconfig
> index 43658b8a975e..7a94f6df39bf 100644
> --- a/usr/Kconfig
> +++ b/usr/Kconfig
> @@ -233,3 +233,7 @@ config INITRAMFS_COMPRESSION
>  	default ".lzma" if RD_LZMA
>  	default ".bz2"  if RD_BZIP2
>  	default ""
> +
> +config HAVE_ARCH_FREE_INITRD_MEM
> +	bool
> +	default n

Help text would be nice, to tell arch maintainers what the purpose of this switch is.

Also, a nit, I think this should be named "ARCH_HAS_FREE_INITRD_MEM", which is the dominant pattern:

  triton:~/tip> git grep 'select.*ARCH' arch/x86/Kconfig* | cut -f2 | cut -d_ -f1-2 | sort | uniq -c | sort -n
  ...
      2 select ARCH_USES
      2 select ARCH_WANTS
      3 select ARCH_MIGHT
      3 select ARCH_WANT
      4 select ARCH_SUPPORTS
      4 select ARCH_USE
     16 select HAVE_ARCH
     23 select ARCH_HAS

It also reads nicely in English: "arch has free_initrd_mem()"

While the other makes little sense: "have arch free_initrd_mem()"?

Thanks,

	Ingo
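Putting Ingo's two suggestions together (the rename plus help text), the symbol might end up looking something like the fragment below. This is a sketch of what a follow-up could contain, not the actual patch; the wording of the help text is my assumption.

```kconfig
config ARCH_HAS_FREE_INITRD_MEM
	bool
	help
	  An architecture should select this option if it needs to free
	  initrd memory in a non-standard way, e.g. because the region
	  requires special unmapping or poisoning. When this is not
	  selected, the generic free_initrd_mem() in init/initramfs.c
	  is used instead.
```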
Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.
On Fri 2018-03-30 12:07:58, Ilya Smith wrote:
> Hi
>
> > On 30 Mar 2018, at 10:55, Pavel Machek wrote:
> >
> > Hi!
> >
> >> Current implementation doesn't randomize address returned by mmap.
> >> All the entropy ends with choosing mmap_base_addr at the process
> >> creation. After that mmap build very predictable layout of address
> >> space. It allows to bypass ASLR in many cases. This patch make
> >> randomization of address on any mmap call.
> >
> > How will this interact with people debugging their application, and
> > getting different behaviours based on memory layout?
> >
> > strace, strace again, get different results?
>
> Honestly I'm confused about your question. If the only way to debug an
> application is to rely on predictable mmap behaviour, then something went
> wrong in this life and we should stop using computers at all.

I'm not saying "only way". I'm saying one way, and you are breaking that. There's advanced stuff like debuggers going "back in time".

								Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.
Hi!

> Current implementation doesn't randomize address returned by mmap.
> All the entropy ends with choosing mmap_base_addr at the process
> creation. After that mmap build very predictable layout of address
> space. It allows to bypass ASLR in many cases. This patch make
> randomization of address on any mmap call.

How will this interact with people debugging their application, and getting different behaviours based on memory layout?

strace, strace again, get different results?

								Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.
> On 30 Mar 2018, at 12:57, Pavel Machek wrote:
>
> On Fri 2018-03-30 12:07:58, Ilya Smith wrote:
>> Hi
>>
>>> On 30 Mar 2018, at 10:55, Pavel Machek wrote:
>>>
>>> Hi!
>>>
>>>> Current implementation doesn't randomize address returned by mmap.
>>>> All the entropy ends with choosing mmap_base_addr at the process
>>>> creation. After that mmap build very predictable layout of address
>>>> space. It allows to bypass ASLR in many cases. This patch make
>>>> randomization of address on any mmap call.
>>>
>>> How will this interact with people debugging their application, and
>>> getting different behaviours based on memory layout?
>>>
>>> strace, strace again, get different results?
>>>
>> Honestly I'm confused about your question. If the only way to debug an
>> application is to rely on predictable mmap behaviour, then something went
>> wrong in this life and we should stop using computers at all.
>
> I'm not saying "only way". I'm saying one way, and you are breaking
> that. There's advanced stuff like debuggers going "back in time".
>

Correct me if I'm wrong: when you run gdb, for instance, and try to debug some application, gdb will disable randomization. This behaviour works with the gdb command "set disable-randomization on". As far as I know, gdb removes the flag PF_RANDOMIZE from the current personality, and that is how it disables ASLR for the debugged process. In my patch, the flag PF_RANDOMIZE is checked before calling unmapped_area_random. So I'm not breaking debugging.

If you are talking about the case where your application crashes in a customer environment and you want to debug it: in that case the layout of memory is something you don't control at all, and you have to work out what is where anyway. So for debugging, a predictable process memory layout is not something you should rely on.

Thanks,
Ilya
Re: [RFC PATCH v2 0/2] Randomization of address chosen by mmap.
Hi

> On 30 Mar 2018, at 10:55, Pavel Machek wrote:
>
> Hi!
>
>> Current implementation doesn't randomize address returned by mmap.
>> All the entropy ends with choosing mmap_base_addr at the process
>> creation. After that mmap build very predictable layout of address
>> space. It allows to bypass ASLR in many cases. This patch make
>> randomization of address on any mmap call.
>
> How will this interact with people debugging their application, and
> getting different behaviours based on memory layout?
>
> strace, strace again, get different results?
>

Honestly I'm confused about your question. If the only way to debug an application is to rely on predictable mmap behaviour, then something went wrong in this life and we should stop using computers at all.

Thanks,
Ilya
Re: [PATCH] powerpc: kexec_file: Fix error code when trying to load kdump kernel
On Thu, Mar 29, 2018 at 04:05:43PM -0300, Thiago Jung Bauermann wrote:
> kexec_file_load() on powerpc doesn't support kdump kernels yet, so it
> returns -ENOTSUPP in that case.
>
> I've recently learned that this errno is internal to the kernel and isn't
> supposed to be exposed to userspace. Therefore, change to -EOPNOTSUPP which
> is defined in an uapi header.
>
> This does indeed make kexec-tools happier. Before the patch, on ppc64le:
>
>   # ~bauermann/src/kexec-tools/build/sbin/kexec -s -p /boot/vmlinuz
>   kexec_file_load failed: Unknown error 524
>
> After the patch:
>
>   # ~bauermann/src/kexec-tools/build/sbin/kexec -s -p /boot/vmlinuz
>   kexec_file_load failed: Operation not supported
>
> Fixes: a0458284f062 ("powerpc: Add support code for kexec_file_load()")
> Reported-by: Dave Young
> Signed-off-by: Thiago Jung Bauermann

Reviewed-by: Simon Horman
Re: [RFC] new SYSCALL_DEFINE/COMPAT_SYSCALL_DEFINE wrappers
* John Paul Adrian Glaubitz wrote:

> On 03/27/2018 12:40 PM, Linus Torvalds wrote:
> > On Mon, Mar 26, 2018 at 4:37 PM, John Paul Adrian Glaubitz wrote:
> >>
> >> What about a tarball with a minimal Debian x32 chroot? Then you can
> >> install interesting packages you would like to test yourself.
> >
> > That probably works fine.
>
> I just created a fresh Debian x32 unstable chroot using this command:
>
>   $ debootstrap --no-check-gpg --variant=minbase --arch=x32 unstable \
>     debian-x32-unstable http://ftp.ports.debian.org/debian-ports
>
> It can be downloaded from my Debian webspace along with checksum files for
> verification:
>
>   https://people.debian.org/~glaubitz/chroots/
>
> Let me know if you run into any issues.

Here's the direct download link:

  $ wget https://people.debian.org/~glaubitz/chroots/debian-x32-unstable.tar.gz

Checksum should be:

  $ sha256sum debian-x32-unstable.tar.gz
  010844bcc76bd1a3b7a20fe47f7067ed8e429a84fa60030a2868626e8fa7ec3b  debian-x32-unstable.tar.gz

Seems to work fine here (on a distro kernel), even if I extract all the files as a non-root user and do:

  ~/s/debian-x32-unstable> fakechroot /usr/sbin/chroot . /usr/bin/dpkg -l | tail -2
  ERROR: ld.so: object 'libfakechroot.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
  ii  util-linux:x32  2.31.1-0.5      x32  miscellaneous system utilities
  ii  zlib1g:x32      1:1.2.8.dfsg-5  x32  compression library - runtime

So that 'dpkg' instance appears to be running inside the chroot environment and is listing x32 installed packages. Although I did get this warning:

  ERROR: ld.so: object 'libfakechroot.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.

Even with that warning, is it still a sufficiently complex test of x32 syscall code paths?

BTW., "fakechroot /usr/sbin/chroot ." crashes instead of giving me a bash shell.

Thanks,

	Ingo
Re: [PATCH] powerpc: kexec_file: Fix error code when trying to load kdump kernel
On 03/29/18 at 04:05pm, Thiago Jung Bauermann wrote:
> kexec_file_load() on powerpc doesn't support kdump kernels yet, so it
> returns -ENOTSUPP in that case.
>
> I've recently learned that this errno is internal to the kernel and isn't
> supposed to be exposed to userspace. Therefore, change to -EOPNOTSUPP which
> is defined in an uapi header.
>
> This does indeed make kexec-tools happier. Before the patch, on ppc64le:
>
>   # ~bauermann/src/kexec-tools/build/sbin/kexec -s -p /boot/vmlinuz
>   kexec_file_load failed: Unknown error 524
>
> After the patch:
>
>   # ~bauermann/src/kexec-tools/build/sbin/kexec -s -p /boot/vmlinuz
>   kexec_file_load failed: Operation not supported
>
> Fixes: a0458284f062 ("powerpc: Add support code for kexec_file_load()")
> Reported-by: Dave Young
> Signed-off-by: Thiago Jung Bauermann
> ---
>  arch/powerpc/kernel/machine_kexec_file_64.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> This is a minor issue, but since it's a simple patch it might be worth
> applying it to stable branches.
>
> This is the kexec-tools thread where this problem was brought up:
>
>   https://lists.infradead.org/pipermail/kexec/2018-March/020346.html
>
> And this is an instance of a similar fix being applied elsewhere in the
> kernel, for the same reasons:
>
>   https://patchwork.kernel.org/patch/8490791/
>
> The test shown in the commit log was made using Hari Bathini's patch
> adding kexec_file_load() support to kexec-tools in ppc64.
>
> diff --git a/arch/powerpc/kernel/machine_kexec_file_64.c b/arch/powerpc/kernel/machine_kexec_file_64.c
> index e4395f937d63..45e0b7d5f200 100644
> --- a/arch/powerpc/kernel/machine_kexec_file_64.c
> +++ b/arch/powerpc/kernel/machine_kexec_file_64.c
> @@ -43,7 +43,7 @@ int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
>
>  	/* We don't support crash kernels yet. */
>  	if (image->type == KEXEC_TYPE_CRASH)
> -		return -ENOTSUPP;
> +		return -EOPNOTSUPP;
>
>  	for (i = 0; i < ARRAY_SIZE(kexec_file_loaders); i++) {
>  		fops = kexec_file_loaders[i];
>
> _______________________________________________
> kexec mailing list
> ke...@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec

Reviewed-by: Dave Young

Thanks
Dave