Bjorn Helgaas writes:
> On Wed, Mar 19, 2014 at 12:37 AM, Rusty Russell wrote:
>> A side effect of the stricter permissions is removing the unnecessary
>> S_IFREG from drivers/pci/slot.c.
>>
>> Suggested-by: Joe Perches
>> Cc: Bjorn Helgaas
>> Cc: Greg Kroah-Hartman
>> Signed-off-by: Rusty Russell
Joe Perches writes:
> Couple more bikesheddy things:
>
> Is there ever a reason to use a non-__builtin_constant_p(perms)?
It's a bit conservative, and anyway, the test is useless since AFAICT
BUILD_BUG_ON() is a no-op if !__builtin_constant_p(). I removed it
and re-tested.
> Maybe that should be a BUILD_BUG_ON too
Couple more bikesheddy things:
Is there ever a reason to use a non-__builtin_constant_p(perms)?
Maybe that should be a BUILD_BUG_ON too
BUILD_BUG_ON(!__builtin_constant_p(perms))
My brain of little size gets confused by the
BUILD_BUG_ON_ZERO(foo) +
vs
BUILD_BUG_ON(foo);
as it just
On Wed, Mar 19, 2014 at 04:14:05PM -0400, Waiman Long wrote:
> This patch adds a XEN init function to activate the unfair queue
> spinlock in a XEN guest when the PARAVIRT_UNFAIR_LOCKS kernel config
> option is selected.
>
> Signed-off-by: Waiman Long
> ---
> arch/x86/xen/setup.c | 19
On Wed, Mar 19, 2014 at 04:14:00PM -0400, Waiman Long wrote:
> This patch makes the necessary changes at the x86 architecture
> specific layer to enable the use of queue spinlock for x86-64. As
> x86-32 machines are typically not multi-socket, the benefit of queue
> spinlock may not be apparent. So
A major problem with the queue spinlock patch is its performance at
low contention level (2-4 contending tasks) where it is slower than
the corresponding ticket spinlock code. The following table shows the
execution time (in ms) of a micro-benchmark where 5M iterations of
the lock/unlock cycles wer
This patch makes the necessary changes at the x86 architecture
specific layer to enable the use of queue spinlock for x86-64. As
x86-32 machines are typically not multi-socket, the benefit of queue
spinlock may not be apparent. So queue spinlock is not enabled.
Currently, there is some incompatibi
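A plausible shape for the Kconfig side of this gating (symbol names assumed from the series, indentation per Kconfig convention): the generic code exposes an opt-in symbol, and only 64-bit SMP x86 selects it.

```
# Hypothetical sketch: kernel/Kconfig.locks offers the queue spinlock
# only to architectures that opt in, and x86 opts in for 64-bit only.
config ARCH_USE_QUEUE_SPINLOCK
	bool

config QUEUE_SPINLOCK
	def_bool y if ARCH_USE_QUEUE_SPINLOCK
	depends on SMP
```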
v6->v7:
- Remove an atomic operation from the 2-task contending code
- Shorten the names of some macros
- Make the queue waiter attempt to steal the lock when unfair lock is
enabled.
- Remove lock holder kick from the PV code and fix a race condition
- Run the unfair lock & PV code on
This patch adds a KVM init function to activate the unfair queue
spinlock in a KVM guest when the PARAVIRT_UNFAIR_LOCKS kernel config
option is selected.
Signed-off-by: Waiman Long
---
arch/x86/kernel/kvm.c | 17 +
1 files changed, 17 insertions(+), 0 deletions(-)
diff --git a
This patch adds the necessary XEN specific code to allow XEN to support
the sleeping and CPU kicking operations needed by the queue spinlock PV
code.
Signed-off-by: Waiman Long
---
arch/x86/xen/spinlock.c | 90 ---
kernel/Kconfig.locks | 2 +-
This patch adds the necessary KVM specific code to allow KVM to support
the sleeping and CPU kicking operations needed by the queue spinlock PV
code.
Two KVM guests of 20 CPU cores (2 nodes) were created for performance
testing. With only one KVM guest powered on (no overcommit), the
disk workloa
This patch adds para-virtualization support to the queue spinlock in
the same way as was done in the PV ticket lock code. In essence, the
lock waiters will spin for a specified number of times (QSPIN_THRESHOLD
= 2^14) and then halt themselves. The queue head waiter, unlike the
other waiters, will spin
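The spin-then-halt pattern described above can be sketched in plain C11. `pv_wait()` below is a stand-in for the hypervisor halt/kick machinery, and QSPIN_THRESHOLD matches the 2^14 figure from the text; everything else is illustrative.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define QSPIN_THRESHOLD (1 << 14)      /* 2^14 spins, per the text */

static atomic_bool locked;

/* Stand-in for the PV halt hypercall: a real guest would block the
 * vCPU here until the lock holder kicks it. */
static void pv_wait(void) { }

static void pv_spin_lock(void)
{
    for (;;) {
        for (int i = 0; i < QSPIN_THRESHOLD; i++) {
            bool expected = false;
            if (atomic_compare_exchange_weak(&locked, &expected, true))
                return;                /* acquired while spinning */
        }
        pv_wait();                     /* threshold exceeded: halt */
    }
}

static void pv_spin_unlock(void)
{
    atomic_store(&locked, false);      /* a real PV unlock also kicks */
}
```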
This patch adds a XEN init function to activate the unfair queue
spinlock in a XEN guest when the PARAVIRT_UNFAIR_LOCKS kernel config
option is selected.
Signed-off-by: Waiman Long
---
arch/x86/xen/setup.c | 19 +++
1 files changed, 19 insertions(+), 0 deletions(-)
diff --git
This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.
Signed-off-by: Waiman Long
---
arch/x86/include/asm/spinlock.h      | 4 ++--
arch/x86/kernel/kvm.c                | 2 +-
arch/x86/kernel/paravirt-spinlocks.c | 4 ++--
arc
Locking is always an issue in a virtualized environment as the virtual
CPU that is waiting on a lock may get scheduled out and hence block
any progress in lock acquisition even when the lock has been freed.
One solution to this problem is to allow an unfair lock in a
para-virtualized environment. In
For architectures that support atomic operations on smaller 8- or
16-bit data types, it is possible to simplify the code and produce
slightly better optimized code at the expense of a smaller number of
supported CPUs.
The qspinlock code can support up to a maximum of 4M-1 CPUs. With
less than 16K CP
This patch introduces a new generic queue spinlock implementation that
can serve as an alternative to the default ticket spinlock. Compared
with the ticket spinlock, it should be almost as fair. It has about
the same speed in single-thread
and it can be much
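The fairness comes from MCS-style queueing: each waiter spins on its own node and the lock is handed off in arrival order. A userspace C11 sketch of the idea (the kernel version packs the queue into the lock word itself, which this deliberately omits):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace sketch of MCS queueing: waiters enqueue on per-waiter
 * nodes and each spins only on its own `locked` field, giving FIFO
 * handoff and no cache-line bouncing between waiters. */
struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool locked;
};

static _Atomic(struct mcs_node *) tail;

static void mcs_lock(struct mcs_node *me)
{
    atomic_store(&me->next, NULL);
    atomic_store(&me->locked, true);

    struct mcs_node *prev = atomic_exchange(&tail, me);
    if (prev) {                        /* queue non-empty: wait */
        atomic_store(&prev->next, me);
        while (atomic_load(&me->locked))
            ;                          /* spin on our own node */
    }
}

static void mcs_unlock(struct mcs_node *me)
{
    struct mcs_node *next = atomic_load(&me->next);

    if (!next) {
        struct mcs_node *expected = me;
        if (atomic_compare_exchange_strong(&tail, &expected, NULL))
            return;                    /* no successor: queue empty */
        while (!(next = atomic_load(&me->next)))
            ;                          /* successor is mid-enqueue */
    }
    atomic_store(&next->locked, false); /* FIFO handoff to successor */
}
```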
On 03/19/2014 06:07 AM, Paolo Bonzini wrote:
On 19/03/2014 04:15, Waiman Long wrote:
You should see the same values with the PV ticketlock. It is not clear
to me if this testing did include that variant of locks?
Yes, PV is fine. But up to this point of the series, we are concerned
about
> And I rewrote it substantially, mainly to take
> VIRTIO_RING_F_INDIRECT_DESC into account.
>
> As QEMU sets the vq size for PCI to 128, Venkatash's patch wouldn't
> have made a change. This version does (since QEMU also offers
> VIRTIO_RING_F_INDIRECT_DESC.)
That divide-by-2 produced the same qu
On Wed, 2014-03-19 at 17:07 +1030, Rusty Russell wrote:
> Summary of http://lkml.org/lkml/2014/3/14/363 :
>
> Ted: module_param(queue_depth, int, 444)
> Joe: 0444!
> Rusty: User perms >= group perms >= other perms?
> Joe: CLASS_ATTR, DEVICE_ATTR, SENSOR_ATTR and SENSOR_ATTR_2?
Adding:
On 19/03/2014 17:58, Waiman Long wrote:
Exactly. What you want is boot_cpu_has(X86_FEATURE_HYPERVISOR).
The unfair lock is to be enabled by a boot-time check, not just by the
presence of a configuration macro during the build process, in order to
avoid using the unfair lock on bare metal. Of cou
On Wed, Mar 19, 2014 at 12:37 AM, Rusty Russell wrote:
> Joe Perches writes:
>> On Sun, 2014-03-16 at 22:00 -0700, Joe Perches wrote:
>>> On Mon, 2014-03-17 at 14:25 +1030, Rusty Russell wrote:
>>>
>>> > Erk, our tests are insufficient. Testbuilding an allmodconfig with this
>>> > now:
>>>
>>> G
On Wed, Mar 19, 2014 at 05:07:50PM +1030, Rusty Russell wrote:
> Joe Perches writes:
> > On Sun, 2014-03-16 at 22:00 -0700, Joe Perches wrote:
> >> On Mon, 2014-03-17 at 14:25 +1030, Rusty Russell wrote:
> >>
> >> > Erk, our tests are insufficient. Testbuilding an allmodconfig with this
> >> > n
On Tue, Mar 18, 2014 at 11:11:43PM -0400, Waiman Long wrote:
> On 03/17/2014 03:10 PM, Konrad Rzeszutek Wilk wrote:
> >On Mon, Mar 17, 2014 at 01:44:34PM -0400, Waiman Long wrote:
> >>On 03/14/2014 04:30 AM, Peter Zijlstra wrote:
> >>>On Thu, Mar 13, 2014 at 04:05:19PM -0400, Waiman Long wrote:
> >
On 19/03/2014 04:15, Waiman Long wrote:
You should see the same values with the PV ticketlock. It is not clear
to me if this testing did include that variant of locks?
Yes, PV is fine. But up to this point of the series, we are concerned
about spinlock performance when running on an overc