Removed all usages of INIT_MSG and dropped it from dbg.h.
Signed-off-by: Nishad Kamdar
---
Changes in v5:
- No change
---
drivers/staging/mt7621-mmc/dbg.h | 7 ---
drivers/staging/mt7621-mmc/sd.c | 16
2 files changed, 23 deletions(-)
diff --git
Replace all usages of ERR_MSG with dev_ without __func__,
__LINE__, or current->comm and current->pid. Remove the do {}
while (0) wrapper around the single-statement macro. Delete
commented-out ERR_MSG() usage. Drop ERR_MSG from dbg.h. Issue found by checkpatch.
Signed-off-by: Nishad Kamdar
---
Hi,
Please pull this gcc-plugins fix for v4.19-rc1. This is for better
behavior when the kernel is built with Clang, reported by Stefan Agner.
Thanks!
-Kees
The following changes since commit 7ccb95e8fe9131b8fa14b947c60dfb30044fa002:
gcc-plugins: Regularize Makefile.gcc-plugins (2018-07-24
This patch removes the dead code for N_MSG().
Signed-off-by: Nishad Kamdar
---
Changes in v5:
- Remove commented code for N_MSG()
---
drivers/staging/mt7621-mmc/dbg.h | 8
1 file changed, 8 deletions(-)
diff --git a/drivers/staging/mt7621-mmc/dbg.h b/drivers/staging/mt7621-mmc/dbg.h
The patchset fixes the four debug macros N_MSG, ERR_MSG, INIT_MSG, and
IRQ_MSG. Each patch fixes one particular macro and its usages.
For N_MSG, it removes the commented-out code in dbg.h.
For ERR_MSG and IRQ_MSG, it replaces printk with dev_ without __func__,
__LINE__, or current->comm and current->pid.
On Thu, Aug 23, 2018 at 06:56:21PM +0200, Greg Kroah-Hartman wrote:
> On Thu, Aug 23, 2018 at 09:30:21AM -0700, Guenter Roeck wrote:
> > On Thu, Aug 23, 2018 at 09:52:36AM +0200, Greg Kroah-Hartman wrote:
> > > This is the start of the stable review cycle for the 4.4.152 release.
> > > There are
On Wed, Aug 22, 2018 at 02:38:44PM +0300, Dan Carpenter wrote:
> On Wed, Aug 22, 2018 at 04:40:56PM +0530, Nishad Kamdar wrote:
> > On Wed, Aug 22, 2018 at 12:09:36PM +0300, Dan Carpenter wrote:
> > > On Wed, Aug 22, 2018 at 02:04:55PM +0530, Nishad Kamdar wrote:
> > > > This patch fixes the debug
On Tue, Aug 21, 2018 at 3:18 AM Sayali Lokhande wrote:
>
> From: Subhash Jadavani
>
> UFS host supplies the reference clock to UFS device and UFS device
> specification allows host to provide one of the 4 frequencies (19.2 MHz,
> 26 MHz, 38.4 MHz, 52 MHz) for reference clock. Host should set the
On Thu, Aug 23, 2018 at 04:39:29PM +, Parav Pandit wrote:
>
>
> > From: Jason Gunthorpe
> > Sent: Thursday, August 23, 2018 9:55 AM
> > To: Eric Biggers
> > Cc: Doug Ledford ; linux-r...@vger.kernel.org;
> > dasaratharaman.chandramo...@intel.com; Leon Romanovsky
> > ;
Since copy_optimized_instructions() fails to update the real RIP
address while copying several instructions to the working buffer,
it adjusts RIP-relative instructions with a wrong RIP address for
the 2nd and subsequent instructions.
This may break the kernel (like kernel freeze) because
probed instruction
On Wed, Aug 22, 2018 at 12:13:42PM +0300, Dan Carpenter wrote:
> On Wed, Aug 22, 2018 at 02:13:07PM +0530, Nishad Kamdar wrote:
> > diff --git a/drivers/staging/mt7621-mmc/sd.c
> > b/drivers/staging/mt7621-mmc/sd.c
> > index 04d23cc7cd4a..6b2c72fc61f2 100644
> > ---
Hi Doug,
> -Original Message-
> From: Doug Ledford
> Sent: Thursday, August 23, 2018 11:56 AM
> To: Parav Pandit ; Jason Gunthorpe ;
> Eric Biggers
> Cc: linux-r...@vger.kernel.org; dasaratharaman.chandramo...@intel.com;
> Leon Romanovsky ; linux-kernel@vger.kernel.org;
> Mark Bloch ;
On Wed, Aug 22, 2018 at 01:26:41PM +0200, Greg Kroah-Hartman wrote:
> On Wed, Aug 22, 2018 at 04:40:56PM +0530, Nishad Kamdar wrote:
> > On Wed, Aug 22, 2018 at 12:09:36PM +0300, Dan Carpenter wrote:
> > > On Wed, Aug 22, 2018 at 02:04:55PM +0530, Nishad Kamdar wrote:
> > > > This patch fixes the
On 08/23/2018 05:48 AM, Michal Hocko wrote:
> On Tue 21-08-18 18:10:42, Mike Kravetz wrote:
> [...]
>
> OK, after burning myself when trying to be clever here it seems like
> your proposed solution is indeed simpler.
>
>> +bool huge_pmd_sharing_possible(struct vm_area_struct *vma,
>> +
On Thu, Aug 23, 2018 at 09:30:21AM -0700, Guenter Roeck wrote:
> On Thu, Aug 23, 2018 at 09:52:36AM +0200, Greg Kroah-Hartman wrote:
> > This is the start of the stable review cycle for the 4.4.152 release.
> > There are 79 patches in this series, all will be posted as a response
> > to this one.
I am not able to reproduce when I booted my test system with "mem=8G
memmap=4G!8G". I ended up with a single pmem:
[ 57.750556] nd_pmem namespace0.0: unable to guarantee persistence of
writes
[ 57.881573] pmem0: detected capacity change from 0 to 4294967296
However in the reported kmsg, it
On Thu, 2018-08-23 at 16:39 +, Parav Pandit wrote:
> > -Original Message-
> > From: Jason Gunthorpe
> > Sent: Thursday, August 23, 2018 9:55 AM
> > To: Eric Biggers
> > Cc: Doug Ledford ; linux-r...@vger.kernel.org;
> > dasaratharaman.chandramo...@intel.com; Leon Romanovsky
> > ;
Hi,
On 08/21/2018 01:54 AM, Miguel de Dios wrote:
On 08/17/2018 11:27 AM, Steve Muckle wrote:
From: John Dias
When rt_mutex_setprio changes a task's scheduling class to RT,
we're seeing cases where the task's vruntime is not updated
correctly upon return to the fair class.
Specifically, the
On Wed, 22 Aug 2018, Serge E. Hallyn wrote:
> Quoting Christian Brauner (christ...@brauner.io):
> > bprm_caps_from_vfs_caps() never returned -EINVAL so remove the
> > rc == -EINVAL check.
> >
> > Signed-off-by: Christian Brauner
>
> Thanks.
>
> Reviewed-by: Serge Hallyn
Thanks, I'll queue
Trace file offsets are different for every enqueued write operation;
they are calculated dynamically in the trace streaming loop and don't
overlap, so write requests can be completed in parallel.
record__mmap_read_sync implements sort of a barrier between spilling
ready profiling data to disk.
With the new helpers, FS/GS base access is centralized.
Eventually, when the FSGSBASE instruction is enabled, it will
be faster.
The "inactive" GS base refers to the base backed up at kernel
entry, belonging to the inactive (user) task.
task_seg_base() is moved out to kernel/process_64.c, where
the helper functions are
Resending the patchset that was posted before [6].
Given feedback from [1], it was suggested to separate the two parts
and to (re-)submit this patchset first.
To facilitate FSGSBASE, Andy's FS/GS base read fix is first
ordered, then some helper functions and refactoring work
are included. Cleanup
On 08/23/2018 01:21 AM, Kirill A. Shutemov wrote:
> On Thu, Aug 23, 2018 at 09:30:35AM +0200, Michal Hocko wrote:
>> On Wed 22-08-18 09:48:16, Mike Kravetz wrote:
>>> On 08/22/2018 05:28 AM, Michal Hocko wrote:
On Tue 21-08-18 18:10:42, Mike Kravetz wrote:
[...]
> diff --git
From: Andy Lutomirski
ptrace can read FS/GS base using the register access API
(PTRACE_PEEKUSER, etc) or PTRACE_ARCH_PRCTL. Make both of these
mechanisms return the actual FS/GS base.
This will improve debuggability by providing the correct information
to ptracer (GDB and etc).
Signed-off-by:
The FS/GS base helper functions are used in the ptrace APIs
(PTRACE_ARCH_PRCTL, PTRACE_SETREG, PTRACE_GETREG, etc.).
The FS/GS update mechanism is now better organized.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Dave
On Thu, Aug 23, 2018 at 06:24:24PM +0200, Vitaly Kuznetsov wrote:
> nested_run_pending is set 20 lines above and check_vmentry_prereqs()/
> check_vmentry_postreqs() don't seem to be resetting it (the later, however,
> checks it).
>
Reviewed-by: Eduardo Valentin
> Signed-off-by: Vitaly
Instead of open coding the calls to load_seg_legacy(), add a
load_fsgs() helper to handle fs and gs. When FSGSBASE is enabled,
load_fsgs() will be updated.
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Reviewed-by: Andy Lutomirski
Reviewed-by: Thomas Gleixner
Cc: H. Peter Anvin
Cc:
64-bit doesn't use the entry for per-CPU data, but for CPU
(and node) numbers. The change clarifies the real usage
of this entry in the GDT.
Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Acked-by: Andy Lutomirski
Reviewed-by: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andi Kleen
Cc: Dave
The CPU initialization in the vDSO is cleaned up by the new helper
functions, which take care of combining the CPU and node number and
reading each back from the combined value.
Suggested-by: Andy Lutomirski
Suggested-by: Thomas Gleixner
Signed-off-by: Chang S. Bae
Cc: H.
The open-coded access, which might prevent use of the enhanced
FSGSBASE mechanism, is now replaced.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Reviewed-by: Andy Lutomirski
Reviewed-by: Thomas Gleixner
Cc: H. Peter Anvin
Cc: Ingo Molnar
Cc:
The CPU and node number are written, early enough, to the segment
limit of the per-CPU data entry and the TSC_AUX MSR. The information
is retrieved by vgetcpu in user space and will also be loaded on the
paranoid entry path when FSGSBASE is enabled.
The new setup function is named after the
The data buffer and accompanying AIO control block are allocated in the
perf_mmap object, and the mapped data buffer size is equal to
the kernel one.
The buffer is then used to preserve profiling data ready for dumping
and queue it for asynchronous writing into the perf trace through the implemented
USB device
Vendor 05ac (Apple)
Device 026c (Magic Keyboard with Numeric Keypad)
Bluetooth devices
Vendor 004c (Apple)
Device 0267 (Magic Keyboard)
Device 026c (Magic Keyboard with Numeric Keypad)
Support already exists for the Magic Keyboard over USB
> -Original Message-
> From: Jason Gunthorpe
> Sent: Thursday, August 23, 2018 9:55 AM
> To: Eric Biggers
> Cc: Doug Ledford ; linux-r...@vger.kernel.org;
> dasaratharaman.chandramo...@intel.com; Leon Romanovsky
> ; linux-kernel@vger.kernel.org; Mark Bloch
> ; Moni Shoua ; Parav
Currently in record mode the tool implements trace writing serially.
The algorithm loops over mapped per-cpu data buffers and stores ready
data chunks into a trace file using write() system call.
Under some circumstances the kernel may lack free space in a buffer
because the other buffer's half
Hello LKML,
As time goes by, more and more fixes for Intel/AMD/ARM CPU
vulnerabilities are added to the Linux kernel without a simple way to
disable them all in one fell swoop.
Disabling is a good option for strictly confined environments where no
3rd-party untrusted code is ever to be run,
On Thu, Aug 23, 2018 at 09:52:36AM +0200, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 4.4.152 release.
> There are 79 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me
On 23/08/2018 18:24, Vitaly Kuznetsov wrote:
> nested_run_pending is set 20 lines above and check_vmentry_prereqs()/
> check_vmentry_postreqs() don't seem to be resetting it (the later, however,
> checks it).
>
> Signed-off-by: Vitaly Kuznetsov
> ---
> arch/x86/kvm/vmx.c | 3 ---
> 1 file
While running the AIM7 microbenchmark, it was found that there was
a severe spinlock contention problem in the current XFS log space
reservation code. To alleviate the problem, the log space waiter
wait and wake functions are modified to use the wake_q for waking
up waiters without holding
The wake_q_add() and wake_up_q() functions help to do task wakeups
without holding a lock, which can reduce lock hold time. They should
be available to kernel modules as well.
A new wake_q_empty() inline function is also added.
Signed-off-by: Waiman Long
---
include/linux/sched/wake_q.h |
Running the AIM7 fserver workload on a 2-socket 24-core 48-thread
Broadwell system, it was found that there was severe spinlock contention
in the XFS code. In particular, native_queued_spin_lock_slowpath()
consumes 69.7% of cpu time. The xlog_grant_head_check() function call and
its sub-function
On 08/23/2018 02:50 AM, Johan Hovold wrote:
> On Wed, Aug 22, 2018 at 09:44:32PM -0500, Dave Gerlach wrote:
>> Currently the ti-cpufreq driver blindly registers a 'ti-cpufreq' to force
>> the driver to probe on any platforms where the driver is built in.
>> However, this should only happen on
On Wed, Aug 22, 2018 at 04:12:13PM +0200, Michal Hocko wrote:
> On Tue 21-08-18 14:35:57, Roman Gushchin wrote:
> > If CONFIG_VMAP_STACK is set, kernel stacks are allocated
> > using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
> > stack pages are charged against corresponding memory
nested_run_pending is set 20 lines above and check_vmentry_prereqs()/
check_vmentry_postreqs() don't seem to be resetting it (the latter, however,
checks it).
Signed-off-by: Vitaly Kuznetsov
---
arch/x86/kvm/vmx.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/arch/x86/kvm/vmx.c
Roman Kagan writes:
> On Wed, Aug 22, 2018 at 12:18:32PM +0200, Vitaly Kuznetsov wrote:
>> Using hypercall for sending IPIs is faster because this allows to specify
>> any number of vCPUs (even > 64 with sparse CPU set), the whole procedure
>> will take only one VMEXIT.
>>
>> Current Hyper-V
On 08/23/2018 10:31 AM, Arnaldo Carvalho de Melo wrote:
> Em Thu, Aug 23, 2018 at 01:21:45PM +0200, Martin Liška escreveu:
>> May I please ping this.
>
> I was waiting for someone to give some ack, perhaps Will Cohen can take
> a brief look and provide that? Will?
>
> Thanks,
>
> - Arnaldo
>
On 23/08/2018 17:29, Sean Christopherson wrote:
> On Thu, Aug 23, 2018 at 01:26:55PM +0200, Paolo Bonzini wrote:
>> On 22/08/2018 22:11, Brijesh Singh wrote:
>>>
>>> Yes, this is one of approach I have in mind. It will avoid splitting
>>> the larger pages; I am thinking that early in boot code we
Hi Arnaldo,
On 23.08.2018 17:30, Arnaldo Carvalho de Melo wrote:
> Em Thu, Aug 23, 2018 at 01:30:47PM +0300, Alexey Budankov escreveu:
>> +++ b/tools/perf/util/evlist.c
>> @@ -718,6 +718,8 @@ static void perf_evlist__munmap_nofree(struct
>> perf_evlist *evlist)
>> void
On 8/23/18 8:15 AM, Oleg Nesterov wrote:
On 08/22, Srikar Dronamraju wrote:
* Vlastimil Babka [2018-08-22 12:55:59]:
On 08/15/2018 08:49 PM, Yang Shi wrote:
We need check if mm or vma has uprobes in the following patch to check
if a vma could be unmapped with holding read mmap_sem.
On Wed, Aug 22, 2018 at 12:18:32PM +0200, Vitaly Kuznetsov wrote:
> Using hypercall for sending IPIs is faster because this allows to specify
> any number of vCPUs (even > 64 with sparse CPU set), the whole procedure
> will take only one VMEXIT.
>
> Current Hyper-V TLFS (v5.0b) claims that