From: Andy Lutomirski
The TSS is a fairly juicy target for exploits, and, now that the TSS
is in the cpu_entry_area, it's no longer protected by kASLR. Make it
read-only on x86_64.
On x86_32, it can't be RO because it's written by the CPU during task
switches, and we use a task gate for double
From: Thomas Gleixner
native_flush_tlb_single() will be changed with the upcoming
KERNEL_PAGE_TABLE_ISOLATION feature. This requires more code in there
than a plain INVLPG.
Remove the paravirt patching for it.
Signed-off-by: Thomas Gleixner
Acked-by: Peter Zijlstra
Reviewed-by: Josh Poimboeuf
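The note above says a KPTI-capable single-page flush is more than a bare INVLPG. A minimal sketch of why, under assumed, illustrative names (`tlb_context`, `flush_tlb_one_kernelmode` are not the kernel's real identifiers): with isolated page tables there are two TLB contexts per mm, so one common shape is to flush the kernel context immediately and mark the user context stale so it is flushed on the next exit to user space.

```c
#include <assert.h>
#include <stdbool.h>

/* Hedged sketch (illustrative names, not the kernel's actual code):
 * with user/kernel page-table isolation a single-address flush must
 * handle both TLB contexts, not just issue one INVLPG. */
struct tlb_context {
	bool user_needs_flush;
};

static struct tlb_context this_cpu_ctx;

/* Stand-in for the privileged invlpg instruction. */
static void invlpg(unsigned long addr) { (void)addr; }

static void flush_tlb_one_kernelmode(unsigned long addr)
{
	invlpg(addr);                          /* flush kernel context now */
	this_cpu_ctx.user_needs_flush = true;  /* defer user-context flush */
}
```

This deferral is why the flush can no longer be patched down to a single instruction by paravirt.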
On Tue, Nov 14, 2017 at 04:32:02PM +0200, Peter Ujfalusi wrote:
> Hi,
>
> With the introduction of the .device_synchronize callback it was thought
> that the race-caused crash observed in vchan_complete was fixed, but
> unfortunately it can still happen.
>
> The observed scenario (really hard to re
From: Thomas Gleixner
Many x86 CPUs leak information to user space due to missing isolation of
user space and kernel space page tables. There are many well documented
ways to exploit that.
The upcoming software mitigation of isolating the user and kernel space
page tables needs a misfeature flag
From: Andy Lutomirski
We currently special-case stack overflow on the task stack. We're
going to start putting special stacks in the fixmap with a custom
layout, so they'll have guard pages, too. Teach the unwinder to be
able to unwind an overflow of any of the stacks.
Signed-off-by: Andy Luto
As far as I can tell, commit b03c9f9fdc37 ("bpf/verifier: track signed
and unsigned min/max values") introduced the following effectless bug
in the BPF_RSH case of adjust_scalar_min_max_vals() (unless that's
intentional):
`dst_reg->smax_value` is only updated in the case where
`dst_reg->smin_value
From: Thomas Gleixner
Add the initial files for kernel page table isolation, with a minimal init
function and the boot time detection for this misfeature.
Signed-off-by: Thomas Gleixner
---
Documentation/admin-guide/kernel-parameters.txt | 2
arch/x86/boot/compressed/pagetable.c
From: Dave Hansen
KERNEL_PAGE_TABLE_ISOLATION needs to switch to a different CR3 value when
it enters the kernel and switch back when it exits. This essentially needs
to be done before leaving assembly code.
This is extra challenging because the switching context is tricky: the
registers that c
From: Dave Hansen
With KERNEL_PAGE_TABLE_ISOLATION the user portion of the kernel page
tables is poisoned with the NX bit so if the entry code exits with the
kernel page tables selected in CR3, userspace crashes.
But doing so trips the p4d/pgd_bad() checks. Make sure it does not do
that.
Signe
From: Dave Hansen
Add the pagetable helper functions to manage the separate user space page
tables.
[ tglx: Split out from the big combo kaiser patch ]
Signed-off-by: Dave Hansen
Signed-off-by: Thomas Gleixner
---
arch/x86/include/asm/pgtable_64.h | 139
From: Andy Lutomirski
Provide infrastructure to:
- find a kernel PMD for a mapping which must be visible to user space for
the entry/exit code to work.
- walk an address range and share the kernel PMD with it.
This reuses a small part of the original KAISER patches to populate the
user sp
From: Andy Lutomirski
Currently, the GDT is an ad-hoc array of pages, one per CPU, in the
fixmap. Generalize it to be an array of a new 'struct cpu_entry_area'
so that we can cleanly add new things to it.
Signed-off-by: Andy Lutomirski
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
From: Thomas Gleixner
The (irq)entry text must be visible in the user space page tables. To allow
simple PMD based sharing, make the entry text PMD aligned.
Signed-off-by: Thomas Gleixner
---
arch/x86/kernel/vmlinux.lds.S | 8 ++++++++
1 file changed, 8 insertions(+)
--- a/arch/x86/kernel
From: Thomas Gleixner
Force the entry through the trampoline only when KPTI is active. Otherwise
go through the normal entry code.
Signed-off-by: Thomas Gleixner
---
arch/x86/kernel/cpu/common.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
--- a/arch/x86/kernel/cpu/common.c
+
On Mon, Dec 4, 2017 at 1:31 AM, Michal Hocko wrote:
>
> On Fri 01-12-17 08:29:53, Dan Williams wrote:
> > On Fri, Dec 1, 2017 at 8:02 AM, Jason Gunthorpe wrote:
> > >
> > > On Fri, Dec 01, 2017 at 11:12:18AM +0100, Michal Hocko wrote:
> > > > On Thu 30-11-17 12:01:17, Jason Gunthorpe wrote:
> > >
From: Thomas Gleixner
Share the entry text PMD of the kernel mapping with the user space
mapping. If large pages are enabled this is a single PMD entry and at the
point where it is copied into the user page table the RW bit has not been
cleared yet. Clear it right away so the user space visible m
From: Andy Lutomirski
Now that the SYSENTER stack has a guard page, there's no need for a canary
to detect overflow after the fact.
Signed-off-by: Andy Lutomirski
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
Reviewed-by: Thomas Gleixner
Reviewed-by: Borislav Petkov
Cc: Rik van
From: Andy Lutomirski
When we start using an entry trampoline, a #GP from userspace will
be delivered on the entry stack, not on the task stack. Fix the
espfix64 #DF fixup to set up #GP according to TSS.SP0, rather than
assuming that pt_regs + 1 == SP0. This won't change anything
without an ent
From: Thomas Gleixner
That makes it automatically a shared mapping along with the cpu_entry_area.
Signed-off-by: Thomas Gleixner
---
arch/x86/include/asm/fixmap.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@
From: Hugh Dickins
The BTS and PEBS buffers both have their virtual addresses programmed into
the hardware. This means that any access to them is performed via the page
tables. The times that the hardware accesses these are entirely dependent
on how the performance monitoring hardware events ar
From: Thomas Gleixner
LDT entries need to be user visible. Add them to the user shared fixmaps so
they can be mapped to the actual location of the LDT entries of a process
on task switch.
Populate the PTEs upfront so the PMD sharing works.
Signed-off-by: Thomas Gleixner
---
arch/x86/include/
From: Andy Lutomirski
The existing code was a mess, mainly because C arrays are nasty. Turn
SYSENTER_stack into a struct, add a helper to find it, and do all the
obvious cleanups this enables.
Signed-off-by: Andy Lutomirski
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
Reviewed-b
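The cleanup described above can be sketched as follows, with assumed, illustrative definitions (the struct layout, sizes, and the `cpu_SYSENTER_stack` helper name are not the kernel's actual code): replace the ad-hoc C array with a named struct and a helper that locates a given CPU's stack.

```c
#include <assert.h>

/* Hedged sketch: a named struct instead of a bare array, plus an
 * accessor helper. Sizes and names are illustrative only. */
struct SYSENTER_stack {
	unsigned long words[64];
};

struct tss_struct {
	struct SYSENTER_stack SYSENTER_stack;
	/* ... hardware TSS and other fields would follow ... */
};

static struct tss_struct cpu_tss[2];	/* stand-in for per-CPU storage */

static struct SYSENTER_stack *cpu_SYSENTER_stack(int cpu)
{
	return &cpu_tss[cpu].SYSENTER_stack;
}
```

The win is that callers ask for "the SYSENTER stack of CPU n" instead of doing pointer arithmetic on an anonymous array.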
From: Thomas Gleixner
There is currently no way to force CPU bug bits like CPU feature bits. That
makes it impossible to set a bug bit once at boot and have it stick for all
upcoming CPUs.
Extend the force set/clear arrays to handle bug bits as well.
Signed-off-by: Thomas Gleixner
---
arch/x
From: Dave Hansen
There are effectively two ASID types:
1. The one stored in the mmu_context that goes from 0..5
2. The one programmed into the hardware that goes from 1..6
This consolidates the locations where converting between the two (by doing
a +1) to a single place which gives us a nice
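The two numbering schemes above can be sketched in a few lines, using assumed, illustrative names (`kern_pcid`, `NR_DYN_ASIDS` stand in for whatever the patch actually uses): the context ASID is zero-based, the value programmed into hardware is one-based because PCID 0 carries special meaning, and keeping the +1 in one helper avoids scattering magic arithmetic.

```c
#include <assert.h>

/* Hedged sketch of the ASID conversion described above. */
#define NR_DYN_ASIDS 6

static unsigned long kern_pcid(unsigned int asid)
{
	assert(asid < NR_DYN_ASIDS);	/* context ASIDs run 0..5 */
	return asid + 1;		/* hardware PCIDs run 1..6 */
}
```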
From: Dave Hansen
In preparation for adding additional PCID flushing, abstract the
loading of a new ASID into CR3.
[ Peterz: Split out from big combo patch ]
Signed-off-by: Dave Hansen
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Thomas Gleixner
---
arch/x86/mm/tlb.c | 22 +
From: Dave Hansen
Global pages stay in the TLB across context switches. Since all contexts
share the same kernel mapping, these mappings are marked as global pages
so kernel entries in the TLB are not flushed out on a context switch.
But, even having these entries in the TLB opens up something
On Mon, Dec 4, 2017 at 5:36 PM, Laurent Pinchart
wrote:
> Hi Arnd,
>
> Thank you for the patch.
>
> On Monday, 4 December 2017 16:44:23 EET Arnd Bergmann wrote:
>> gcc-8 -fsanitize-coverage=trace-pc produces a false-positive warning:
>>
>> drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c: In function
>>
Most NMI/paranoid exceptions will not in fact change pagetables and
would thus not require TLB flushing, however RESTORE_CR3 uses flushing
CR3 writes.
Restores to kernel PCIDs can be NOFLUSH, because we explicitly flush
the kernel mappings and now that we track which user PCIDs need
flushing we ca
We can use PCID to retain the TLBs across CR3 switches; including
those now part of the user/kernel switch. This increases performance
of kernel entry/exit at the cost of more expensive/complicated TLB
flushing.
Now that we have two address spaces, one for kernel and one for user
space, we need tw
From: Dave Hansen
This uses INVPCID to shoot down individual lines of the user mapping
instead of marking the entire user map as invalid. This
could/might/possibly be faster.
This for sure needs tlb_single_page_flush_ceiling to be redetermined;
esp. since INVPCID is _slow_.
[ Peterz: Split out
From: Dave Hansen
Populate the PGD entries in the init user PGD which cover the kernel half
of the address space. This makes sure that the installment of the user
visible kernel mappings finds a populated PGD.
In clone_pgd_range() copy the init user PGDs which cover the kernel half of
the addres
On Mon, 4 Dec 2017, Andy Lutomirski wrote:
> On Mon, Dec 4, 2017 at 6:08 AM, Thomas Gleixner wrote:
> > --- a/security/Kconfig
> > +++ b/security/Kconfig
> > @@ -54,6 +54,16 @@ config SECURITY_NETWORK
> > implement socket and networking access controls.
> > If you are unsure ho
From: Dave Hansen
The KERNEL_PAGE_TABLE_ISOLATION code attempts to "poison" the user
portion of the kernel page tables. It detects entries that it wants to
poison in two ways:
* Looking for addresses >= PAGE_OFFSET
* Looking for entries without _PAGE_USER set
But, to allow the
From: Dave Hansen
Clone the ESPFIX alias mapping area so the entry/exit code has access to it
even with the user space page tables.
[ tglx: Remove the per cpu user mapped oddity ]
Signed-off-by: Dave Hansen
Signed-off-by: Thomas Gleixner
---
arch/x86/kernel/espfix_64.c | 16 ++
From: Dave Hansen
Finally allow CONFIG_KERNEL_PAGE_TABLE_ISOLATION to be enabled.
PARAVIRT generally requires that the kernel not manage its own page tables.
It also means that the hypervisor and kernel must agree wholeheartedly
about what format the page tables are in and what they contain.
KER
From: Borislav Petkov
The upcoming support for dumping the kernel and the user space page tables
of the current process would create more random files in the top level
debugfs directory.
Add a page table directory and move the existing file to it.
Signed-off-by: Borislav Petkov
Signed-off-by:
From: Thomas Gleixner
ptdump_walk_pgd_level_checkwx() checks the kernel page table for WX pages,
but does not check the KERNEL_PAGE_TABLE_ISOLATION user space page table.
Restructure the code so that dmesg output is selected by an explicit
argument and not implicit via checking the pgd argument
From: Thomas Gleixner
Add two debugfs files which allow to dump the pagetable of the current
task.
current_kernel dumps the regular page table. This is the page table which
is normally shared between kernel and user space. If kernel page table
isolation is enabled this is the kernel space mappin
From: Dave Hansen
Kernel page table isolation requires two PGDs: one for the kernel,
which contains the full kernel mapping plus the user space mapping and one
for user space which contains the user space mappings and the minimal set
of kernel mappings which are required by the architectu
CPUmasks are never big enough to warrant 64-bit code.
Space savings:
add/remove: 0/0 grow/shrink: 1/4 up/down: 3/-17 (-14)
Function                                     old     new   delta
sched_init_numa                             1530    1533      +3
compat_sys_s
From: Andy Lutomirski
Share the FIX_USR_SHARED PMDs so the user space and kernel space page
tables have the same PMD page.
[ tglx: Made it use the FIX_USR_SHARED range so later additions
are covered automatically ]
Signed-off-by: Andy Lutomirski
Signed-off-by: Thomas Gleixner
---
arc
From: Andy Lutomirski
This allows the cpu entry area PMDs to be shared between the kernel and
user space page tables.
[ tglx: Fixed off-by-one at the bottom and added guards so other fixmaps can be
added later ]
Signed-off-by: Andy Lutomirski
Signed-off-by: Thomas Gleixner
---
arch/x86/incl
On Mon, Dec 4, 2017 at 6:08 AM, Thomas Gleixner wrote:
> From: Dave Hansen
>
> Finally allow CONFIG_KERNEL_PAGE_TABLE_ISOLATION to be enabled.
>
> PARAVIRT generally requires that the kernel not manage its own page tables.
> It also means that the hypervisor and kernel must agree wholeheartedly
>
From: Yazen Ghannam
The McaIntrCfg register (MSRC000_0410), previously known as
CU_DEFER_ERR, is used on SMCA systems to set the LVT offset for the
Threshold and Deferred error interrupts.
This register was used on non-SMCA systems to also set the Deferred
interrupt type in bits 2:1. However, th
From: Xie XiuQi
According to the Intel SDM Volume 3B (253669-063US, July 2017), action
optional (SRAO) errors can be reported either via MCE or CMC:
In cases when SRAO is signaled via CMCI the error signature is
indicated via UC=1, PCC=0, S=0.
Type(*1) UC EN PCC S
On 12/04/2017 07:42 PM, Christoph Hellwig wrote:
> I don't think we are using alloca in kernel mode code, and we shouldn't.
> What do I miss? Is this hidden support for on-stack VLAs? I thought
> we'd get rid of them as well.
>
Yes, this is for on-stack VLA. Last time I checked, we still had
From: Thomas Gleixner
LDT is not really commonly used on 64bit so the overhead of populating the
fixmap entries on context switch for the rare LDT syscall users is a
reasonable trade off vs. having extra dynamically managed mapping space per
process.
Signed-off-by: Thomas Gleixner
---
arch/x86
Perf record can switch output. The new output should only store the
data after switching. However, in overwrite backward mode, the new
output still has the data from the old output. That also brings extra
overhead.
At the end of mmap_read, the position of processed ring buffer is
saved in md->prev. N
Remove the backward/forward concept to make it uniform with user
interface (the '--overwrite' option).
Signed-off-by: Wang Nan
---
tools/perf/builtin-record.c | 14 +++---
tools/perf/tests/backward-ring-buffer.c | 4 ++--
tools/perf/util/evlist.c | 30
perf record backward recording doesn't work as we expected: it never
overwrites when the ring buffer is full.
Test:
(Run a busy python printing task background like this:
while True:
print 123
send SIGUSR2 to perf to capture snapshot.)
# ./perf record --overwrite -e raw_syscalls:sys_enter -e ra
Simplify patch 1/3 following Namhyung's suggestion.
Context adjustment for patch 2 and 3.
Wang Nan (3):
perf mmap: Fix perf backward recording
perf tools: Don't discard prev in backward mode
perf tools: Replace 'backward' to 'overwrite' in evlist, mmap and
record
tools/perf/builtin-re
On 12/04/2017 07:20 PM, Paul Lawrence wrote:
>
> > + # -fasan-shadow-offset fails without -fsanitize
> > + CFLAGS_KASAN_SHADOW := $(call cc-option, -fsanitize=kernel-address \
> > + -fasan-shadow-offset=$(KASAN_SHADOW_OFFSET), \
> > + $(
From: Dave Hansen
First, it's nice to remove the magic numbers.
Second, KERNEL_PAGE_TABLE_ISOLATION is going to consume half of the
available ASID space. The space is currently unused, but add a comment to
spell out this new restriction.
Signed-off-by: Dave Hansen
Signed-off-by: Ingo Molnar
From: Dave Hansen
If changing the page tables in such a way that an invalidation of all
contexts (aka. PCIDs / ASIDs) is required, they can be actively invalidated
by:
1. INVPCID for each PCID (works for single pages too).
2. Load CR3 with each PCID without the NOFLUSH bit set
3. Load CR3 w
From: Dave Hansen
For flushing the TLB, the ASID which has been programmed into the hardware
must be known. That differs from what is in 'cpu_tlbstate'.
Add functions to transform the 'cpu_tlbstate' values into the one
programmed into the hardware (CR3).
It's not easy to include mmu_context
From: Thomas Gleixner
To support user shared LDT entry mappings it's required to change the LDT
related code so that the kernel side only references the real page mapping
of the LDT. When the LDT is loaded then the entries are alias mapped in the
per cpu fixmap. To catch all users rename ldt_stru
From: Thomas Gleixner
The Intel PEBS/BTS debug store is a design trainwreck as it expects virtual
addresses which must be visible in any execution context.
So it is required to make these mappings visible to user space when kernel
page table isolation is active.
Provide enough room for the buff
From: Andy Lutomirski
The cpu_entry_area will contain stacks. Make sure that KASAN has
appropriate shadow mappings for them.
Signed-off-by: Andy Lutomirski
Signed-off-by: Andrey Ryabinin
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
Cc: Rik van Riel
Cc: Denys Vlasenko
Cc: Pete
From: Andy Lutomirski
A future patch will move SYSENTER_stack to the beginning of cpu_tss
to help detect overflow. Before this can happen, fix several code
paths that hardcode assumptions about the old layout.
Signed-off-by: Andy Lutomirski
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gle
This series is a major overhaul of the KAISER patches:
1) Entry code
Mostly the same, except for a handful of fixlets and delta
improvements folded into the corresponding patches
New: Map TSS read only into the user space visible mapping
This is 64bit only, as 32bit needs the TSS
From: Andy Lutomirski
SYSENTER_stack should have reliable overflow detection, which
means that it needs to be at the bottom of a page, not the top.
Move it to the beginning of struct tss_struct and page-align it.
Also add an assertion to make sure that the fixed hardware TSS
doesn't cross a page
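The page-crossing assertion described above can be sketched like this, with an assumed, illustrative layout (the 512-byte stack and the member names are placeholders; only the 104-byte size of the 64-bit hardware TSS is a hardware fact): check that the embedded hardware TSS does not straddle a page boundary.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* Hedged sketch: the 64-bit hardware TSS is 104 bytes. */
struct hw_tss { char data[104]; };

struct tss_struct {
	char SYSENTER_stack[512];	/* moved to the front, page-aligned base */
	struct hw_tss x86_tss;
};

/* True when [off, off + size) lies within a single page. */
static int fits_in_one_page(size_t off, size_t size)
{
	return off / PAGE_SIZE == (off + size - 1) / PAGE_SIZE;
}
```

In the kernel this check would be a compile-time assertion; the runtime predicate here just shows the arithmetic.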
From: Andy Lutomirski
If the stack overflows into a guard page, the ORC unwinder should work
well: by construction, there can't be any meaningful data in the guard page
because no writes to the guard page will have succeeded.
But there is a bug that prevents unwinding from working correctly:
>>> On 04.12.17 at 17:16, wrote:
> Do you have any further comments on the current version of this patch?
No. I'm not fully understanding your most recent slot related comments,
but I'll trust you and Konrad to get this into suitable shape.
Jan
Hi,
this is a simpler version that allows just the customization of
"Depends:", as requested by Ben.
It addresses the security issues Jim mentioned by not using eval
anymore.
Henning
Am Mon, 4 Dec 2017 17:48:08 +0100
schrieb Henning Schild :
> The debian packages coming out of "make *deb-pkg"
On Mon, Dec 04, 2017 at 04:59:25PM +0100, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.4.104 release.
> There are 27 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know
tree: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git WIP.x86/kpti
head: c7ddf30cab554658b154ee16ae5e5d577ff530bf
commit: 9ebd9d9cdbc90021a5e320fb054cf48c027e6d34 [50/65] x86/fixmap: Add ldt
entries to user shared fixmap
config: x86_64-allmodconfig (attached as .config)
compiler: g
From: Colin Ian King
The switch statement is missing breaks for the cases of
GVT_FAILSAFE_INSUFFICIENT_RESOURCE and GVT_FAILSAFE_GUEST_ERR. Add them
in.
Detected by CoverityScan, CID#1462416 ("Missing break in switch")
Fixes: e011c6ce2b4f ("drm/i915/gvt: Add VM healthy check for workload_thread
On 11/29/2017 03:32 PM, Andrew F. Davis wrote:
> Move to using newer gpiod_* GPIO handling functions. This simplifies
> the code and eases dropping platform data in the next patch. Also
> remember GPIO are active low, so set "1" to reset.
>
> Signed-off-by: Andrew F. Davis
> ---
Kbuild bot seem
The debian packages coming out of "make *deb-pkg" lack the "Depends:"
field. If one tries to install a fresh system with such a "linux-image",
debootstrap or multistrap might try to install the kernel before its
deps and the package hooks will fail.
Different debian-based distros use different valu
On 12/04/2017 08:13 AM, Prarit Bhargava wrote:
>
>
> x86: Booting SMP configuration:
> node #0, CPUs:  #1 #2 #3 #4
> node #1, CPUs:  #5 #6 #7 #8 #9
> node #0, CPUs:  #10 #11 #12 #13 #14
> node #1, CPUs:  #15 #16 #17 #18 #19
> smp: Brought up 2 nodes, 20 CP
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: NeilBrown
commit b688741cb06695312f18b730653d6611e1bad28d upstream.
For correct close-to-open semantics, NFS must validate
the change attribute of a directory (or file) on open.
Since commit
On Monday 04 December 2017 10:12 PM, Russell King - ARM Linux wrote:
On Mon, Dec 04, 2017 at 11:34:48AM -0500, David Miller wrote:
From: Russell King - ARM Linux
Date: Mon, 4 Dec 2017 16:24:47 +
On Mon, Dec 04, 2017 at 11:20:49AM -0500, David Miller wrote:
From: Arvind Yadav
Date: Sun
On Thu, Nov 02, 2017 at 04:05:01AM -0700, syzbot wrote:
> Hello,
>
> syzkaller hit the following crash on
> 3a99df9a3d14cd866b5516f8cba515a3bfd554ab
> git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/master
> compiler: gcc (GCC) 7.1.1 20170620
> .config is attached
> Raw console ou
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Kirill A. Shutemov
commit a8f97366452ed491d13cf1e44241bc0b5740b1f0 upstream.
Currently, we unconditionally make page table dirty in touch_pmd().
It may result in false-positive can_follow_writ
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Josef Bacik
commit 8e138e0d92c6c9d3d481674fb14e3439b495be37 upstream.
We discovered a box that had double allocations, and suspected the space
cache may be to blame. While auditing the write
3.18-stable review patch. If anyone has any objections, please let me know.
--
From: Heiner Kallweit
commit d9bcd462daf34aebb8de9ad7f76de0198bb5a0f0 upstream.
So far we completely rely on the caller to provide valid arguments.
To be on the safe side perform an own sanity check
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Tom Herbert
commit fc9e50f5a5a4e1fa9ba2756f745a13e693cf6a06 upstream.
The start callback allows the caller to set up a context for the
dump callbacks. Presumably, the context can then be destro
I have no objections to adding this to 4.9-stable or 4.14-stable.
Jeff Lien
-Original Message-
From: Greg Kroah-Hartman [mailto:gre...@linuxfoundation.org]
Sent: Monday, December 4, 2017 10:00 AM
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman; sta...@vger.kernel.org; Jeffrey Li
I don't think we are using alloca in kernel mode code, and we shouldn't.
What do I miss? Is this hidden support for on-stack VLAs? I thought
we'd get rid of them as well.
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: chenjie
commit 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91 upstream.
MADVISE_WILLNEED has always been a noop for DAX (formerly XIP) mappings.
Unfortunately madvise_willneed() doesn't communicate t
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Matt Fleming
commit edc3b9129cecd0f0857112136f5b8b1bc1d45918 upstream.
The x86 pageattr code is confused about the data that is stored
in cpa->pfn, sometimes it's treated as a page frame number
On Mon, Dec 04, 2017 at 11:34:48AM -0500, David Miller wrote:
> From: Russell King - ARM Linux
> Date: Mon, 4 Dec 2017 16:24:47 +
>
> > On Mon, Dec 04, 2017 at 11:20:49AM -0500, David Miller wrote:
> >> From: Arvind Yadav
> >> Date: Sun, 3 Dec 2017 00:56:15 +0530
> >>
> >> > The platform_g
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Matt Fleming
commit 67a9108ed4313b85a9c53406d80dc1ae3f8c3e36 upstream.
With commit e1a58320a38d ("x86/mm: Warn on W^X mappings") all
users booting on 64-bit UEFI machines see the following warn
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Matt Fleming
commit c9f2a9a65e4855b74d92cdad688f6ee4a1a323ff upstream.
This change is a prerequisite for pending patches that switch to
a dedicated EFI page table, instead of using 'trampoline_
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: NeilBrown
commit b688741cb06695312f18b730653d6611e1bad28d upstream.
For correct close-to-open semantics, NFS must validate
the change attribute of a directory (or file) on open.
Since commit e
This is the start of the stable review cycle for the 4.4.104 release.
There are 27 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Wed Dec 6 15:59:33 UTC 2017.
Anything receiv
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Trond Myklebust
commit 15ca08d3299682dc49bad73251677b2c5017ef08 upstream.
Open file stateids can linger on the nfs4_file list of stateids even
after they have been closed. In order to avoid reu
On Mon, Dec 4, 2017 at 2:59 PM, Paul Moore wrote:
On 2017/12/02 3:52, syzbot wrote:
> ==
> BUG: KASAN: slab-out-of-bounds in strcmp+0x96/0xb0 lib/string.c:328
> Read of size 1 at addr 8801cd99d2c1 by task
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Trond Myklebust
commit d8a1a000555ecd1b824ac1ed6df8fe364df0 upstream.
If nfsd4_process_open2() is initialising a new stateid, and yet the
call to nfs4_get_vfs_file() fails for some reason,
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Adrian Hunter
commit ebe7dd45cf49e3b49cacbaace17f9f878f21fbea upstream.
The block driver must be resumed if the mmc bus fails to suspend the card.
Signed-off-by: Adrian Hunter
Reviewed-by: Li
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Heiner Kallweit
commit d9bcd462daf34aebb8de9ad7f76de0198bb5a0f0 upstream.
So far we completely rely on the caller to provide valid arguments.
To be on the safe side perform an own sanity check.
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Roman Kapl
commit 4f626a4ac8f57ddabf06d03870adab91e463217f upstream.
The function for byteswapping the data send to/from atombios was buggy for
num_bytes not divisible by four. The function mus
Up to f5caf621ee35 ("x86/asm: Fix inline asm call constraints for Clang")
we were able to use x86 headers to build to the 'bpf' clang target, as
done by the BPF code in tools/perf/.
With that commit, we ended up with following failure for 'perf test LLVM', this
is because "clang ... -target bpf ..
4.9-stable review patch. If anyone has any objections, please let me know.
--
From: Adam Ford
commit 56322e123235370f1449c7444e311cce857d12f5 upstream.
Fix commit 05c4ffc3a266 ("ARM: dts: LogicPD Torpedo: Add MT9P031 Support")
In the previous commit, I indicated that the only
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Josef Bacik
commit 8e138e0d92c6c9d3d481674fb14e3439b495be37 upstream.
We discovered a box that had double allocations, and suspected the space
cache may be to blame. While auditing the write o
4.9-stable review patch. If anyone has any objections, please let me know.
--
From: Naofumi Honda
commit 64ebe12494fd5d193f014ce38e1fd83cc57883c8 upstream.
From kernel 4.9, my two nfsv4 servers sometimes suffer from
"panic: unable to handle kernel page request"
in posix_u
The generic version now takes dma_pfn_offset into account, so there is no
more need for an architecture override.
Signed-off-by: Christoph Hellwig
---
arch/arm64/include/asm/dma-mapping.h | 9 -
1 file changed, 9 deletions(-)
diff --git a/arch/arm64/include/asm/dma-mapping.h
b/arch/arm
This makes sure the generic version can be used with architectures /
devices that have a DMA offset in the direct mapping.
Signed-off-by: Christoph Hellwig
---
include/linux/dma-mapping.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/dma-mapping.h b/include/li
Hi all,
this small series tries to get rid of the global and misnamed
PCI_DMA_BUS_IS_PHYS flag, and replace it with a setting in each
struct dma_map_ops instance.
Hi Heiko,
On Monday, 4 December 2017 15:46:32 EET Heiko Stuebner wrote:
> Am Montag, 4. Dezember 2017, 15:22:07 CET schrieb Laurent Pinchart:
> > On Wednesday, 29 November 2017 20:47:55 EET Brian Norris wrote:
> > > From: Nickey Yang
> > >
> > > We might include additional ports in derivative de