On Wed, Jun 02, 2021 at 09:53:15AM +0300, Christoph Hellwig wrote:
> Hi all,
Hi!
You wouldn't have a nice git repo to pull so one can test it easily?
Thank you!
Cc-ing Boris/Juergen - please see the Xen notes below.
>
> this series is the second part of cleaning up lifetimes and allocation of
> the gendisk
On 4/29/21 12:16 AM, Jason Wang wrote:
On 2021/4/29 5:06 AM, Konrad Rzeszutek Wilk wrote:
On Wed, Apr 21, 2021 at 11:21:10AM +0800, Jason Wang wrote:
Hi All:
Sometimes, the driver doesn't trust the device. This usually
happens for the encrypted VM or VDUSE[1]. In both cases, technology
On Wed, Jun 02, 2021 at 05:41:30PM -0700, Andi Kleen wrote:
> swiotlb currently only uses the start address of a DMA to check if something
> is in the swiotlb or not. But with virtio and untrusted hosts the host
> could give some DMA mapping that crosses the swiotlb boundaries,
> potentially leakin
On Wed, Apr 21, 2021 at 11:21:10AM +0800, Jason Wang wrote:
> Hi All:
>
> Sometimes, the driver doesn't trust the device. This usually
> happens for the encrypted VM or VDUSE[1]. In both cases, technology
> like swiotlb is used to prevent the poking/mangling of memory from the
> device. But thi
On Wed, Feb 10, 2021 at 04:12:25PM +0100, Joerg Roedel wrote:
> Hi Konrad,
>
> On Wed, Feb 10, 2021 at 09:58:35AM -0500, Konrad Rzeszutek Wilk wrote:
> > What GRUB versions are we talking about (CC-ing Daniel Kiper, who owns
> > GRUB).
>
> I think this was about
On Wed, Feb 10, 2021 at 11:21:28AM +0100, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Hi,
>
> these patches add support for the 32-bit boot in the decompressor
> code. This is needed to boot an SEV-ES guest on some firmware and grub
> versions. The patches also add the necessary CPUID sanity ch
On Fri, Feb 05, 2021 at 06:58:52PM +0100, Christoph Hellwig wrote:
> On Wed, Feb 03, 2021 at 02:36:38PM -0500, Konrad Rzeszutek Wilk wrote:
> > > So what? If you guys want to provide a new capability you'll have to do
> > > work. And designing a new protocol base
On Wed, Feb 03, 2021 at 01:49:22PM +0100, Christoph Hellwig wrote:
> On Mon, Jan 18, 2021 at 12:44:58PM +0100, Martin Radev wrote:
> > Your comment makes sense but then that would require the cooperation
> > of these vendors and the cloud providers to agree on something meaningful.
> > I am also no
On Tue, Feb 02, 2021 at 04:34:09PM -0600, Tom Lendacky wrote:
> On 2/2/21 10:37 AM, Konrad Rzeszutek Wilk wrote:
> > On Mon, Jan 25, 2021 at 07:33:35PM +0100, Martin Radev wrote:
> >> On Mon, Jan 18, 2021 at 10:14:28AM -0500, Konrad Rzeszutek Wilk wrote:
> >>> On M
On Mon, Jan 25, 2021 at 07:33:35PM +0100, Martin Radev wrote:
> On Mon, Jan 18, 2021 at 10:14:28AM -0500, Konrad Rzeszutek Wilk wrote:
> > On Mon, Jan 18, 2021 at 12:44:58PM +0100, Martin Radev wrote:
> > > On Wed, Jan 13, 2021 at 12:30:17PM +0100, Christoph Hellwig wrote:
>
On Mon, Jan 18, 2021 at 12:44:58PM +0100, Martin Radev wrote:
> On Wed, Jan 13, 2021 at 12:30:17PM +0100, Christoph Hellwig wrote:
> > On Tue, Jan 12, 2021 at 04:07:29PM +0100, Martin Radev wrote:
> > > The size of the buffer being bounced is not checked if it happens
> > > to be larger than the si
..snip..
>> > > This raises two issues:
>> > > 1) swiotlb_tlb_unmap_single fails to check whether the index generated
>> > > from the dma_addr is in range of the io_tlb_orig_addr array.
>> > That is fairly simple to implement I would think. That is, it can check
>> > that the dma_addr is from the
On December 16, 2020 1:41:48 AM EST, Jason Wang wrote:
>
>
>- Original Message -
>>
>>
>> - Original Message -
>> > .snip.
>> > > > > This raises two issues:
>> > > > > 1) swiotlb_tlb_unmap_single fails to check whether the index
>> > > > > generated
>> > > > > from the dma_addr
point us to the intel thunder issue that you mentioned?
ThunderClap was it!
https://lwn.net/Articles/786558/
Cc-ing Lu Baolu ..
Hm, this was a year ago and it looks like there are some extra SWIOTLB
patches to be done ?
>
> On 12/15/20 9:47 AM, Ashish Kalra wrote:
> > On Mon, Dec 1
.snip.
> > > This raises two issues:
> > > 1) swiotlb_tlb_unmap_single fails to check whether the index generated
> > > from the dma_addr is in range of the io_tlb_orig_addr array.
> > That is fairly simple to implement I would think. That is, it can check
> > that the dma_addr is from the PA in the
On Fri, Dec 11, 2020 at 06:31:21PM +0100, Felicitas Hetzelt wrote:
> Hello,
Hi! Please see below my responses.
> we have been analyzing the Hypervisor-OS interface of Linux
> and discovered bugs in the swiotlb/virtio implementation that can be
> triggered from a malicious Hypervisor / virtual dev
On Thu, Aug 06, 2020 at 03:46:23AM -0700, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: 47ec5303 Merge git://git.kernel.org/pub/scm/linux/kernel/g..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=16fe1dea90
> kernel
On Thu, Jun 11, 2020 at 07:34:19AM -0400, Michael S. Tsirkin wrote:
> As testing shows no performance change, switch to that now.
What kind of testing? 100GiB? Low latency?
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https:
On Wed, Apr 29, 2020 at 06:20:48AM -0400, Michael S. Tsirkin wrote:
> On Wed, Apr 29, 2020 at 03:39:53PM +0530, Srivatsa Vaddagiri wrote:
> > That would still not work I think where swiotlb is used for pass-thr devices
> > (when private memory is fine) as well as virtio devices (when shared memory
On Thu, Dec 12, 2019 at 06:11:24PM +0100, David Hildenbrand wrote:
> This series is based on latest linux-next. The patches are located at:
> https://github.com/davidhildenbrand/linux.git virtio-mem-rfc-v4
Heya!
Would there be by any chance a virtio-spec git tree somewhere?
..snip..
> ---
On Fri, Aug 09, 2019 at 07:00:24PM +0300, Adalbert Lazăr wrote:
> This patch might be obsolete thanks to single-stepping.
sooo should it be skipped from this large patchset to ease
review?
>
> Signed-off-by: Adalbert Lazăr
> ---
> arch/x86/kvm/x86.c | 9 +++--
> 1 file changed, 7 insertion
On Thu, Jan 31, 2019 at 11:24:07AM -0800, Thomas Garnier wrote:
> There has been no major concern in the latest iterations. I am interested on
> what would be the best way to slowly integrate this patchset upstream.
One question that I somehow expected in this cover letter - what
about all tho
On Wed, Jan 30, 2019 at 05:40:02PM +0100, Joerg Roedel wrote:
> Hi,
>
> here is the next version of this patch-set. Previous
> versions can be found here:
>
> V1: https://lore.kernel.org/lkml/20190110134433.15672-1-j...@8bytes.org/
>
> V2: https://lore.kernel.org/lkml/20190115132257.
On Mon, Jan 28, 2019 at 10:20:05AM -0500, Michael S. Tsirkin wrote:
> On Wed, Jan 23, 2019 at 04:14:53PM -0500, Konrad Rzeszutek Wilk wrote:
> > On Wed, Jan 23, 2019 at 01:51:29PM -0500, Michael S. Tsirkin wrote:
> > > On Wed, Jan 23, 2019 at 05:30:44PM +0100, Joerg Roedel
On Wed, Jan 23, 2019 at 01:51:29PM -0500, Michael S. Tsirkin wrote:
> On Wed, Jan 23, 2019 at 05:30:44PM +0100, Joerg Roedel wrote:
> > Hi,
> >
> > here is the third version of this patch-set. Previous
> > versions can be found here:
> >
> > V1: https://lore.kernel.org/lkml/20190110134433.156
_size()
> virtio-blk: Consider virtio_max_dma_size() for maximum segment size
>
> drivers/block/virtio_blk.c | 10 ++
> drivers/virtio/virtio_ring.c | 10 ++
The kvm-devel mailing list should have been copied on those.
When you do can you please put 'Reviewed-by: K
On Fri, Jan 11, 2019 at 10:12:31AM +0100, Joerg Roedel wrote:
> On Thu, Jan 10, 2019 at 12:02:05PM -0500, Konrad Rzeszutek Wilk wrote:
> > Why not use swiotlb_nr_tbl ? That is how drivers/gpu/drm use to figure if
> > they
> > need to limit the size of pages.
>
> Tha
On Thu, Jan 10, 2019 at 02:44:31PM +0100, Joerg Roedel wrote:
> From: Joerg Roedel
>
> The SWIOTLB implementation has a maximum size it can
> allocate dma-handles for. This needs to be exported so that
> device drivers don't try to allocate larger chunks.
>
> This is especially important for blo
.giant snip..
> > + npinned = get_user_pages_fast(uaddr, npages, write, pages);
> > + if (npinned != npages)
> > + goto err;
> > +
>
> As I said I have doubts about the whole approach, but this
> implementation in particular isn't a good idea
> as it keeps the page around forever.
>
On Thu, Jul 19, 2018 at 11:37:59PM +0200, Ahmed Abd El Mawgood wrote:
> Hi,
>
> This is my first set of patches that works as I would expect, and the
> third revision I sent to mailing lists.
>
> Following up with my previous discussions about kernel rootkit mitigation
> via placing R/O protectio
On Mon, Apr 23, 2018 at 10:59:43PM +0300, Michael S. Tsirkin wrote:
> On Mon, Apr 23, 2018 at 03:31:20PM -0400, Konrad Rzeszutek Wilk wrote:
> > On Mon, Apr 23, 2018 at 01:34:52PM +0800, Jason Wang wrote:
> > > Hi all:
> > >
> > > This RFC implement packed ri
On Mon, Apr 23, 2018 at 01:34:52PM +0800, Jason Wang wrote:
> Hi all:
>
> This RFC implements packed ring layout. The code was tested with
> Tiwei's RFC V2 at https://lkml.org/lkml/2018/4/1/48. Some fixups and
> tweaks were needed on top of Tiwei's code to make it run. TCP stream
> and pktgen does
On Mon, Mar 26, 2018 at 11:38:45AM +0800, Jason Wang wrote:
> Hi all:
>
> This RFC implements packed ring layout. The code was tested with the pmd
> implemented by Jens at
> http://dpdk.org/ml/archives/dev/2018-January/089417.html. Minor change
> was needed for pmd codes to kick virtqueue since it assum
On Mon, Nov 13, 2017 at 06:05:59PM +0800, Quan Xu wrote:
> From: Yang Zhang
>
> Some latency-intensive workloads have seen obvious performance
> drops when running inside a VM. The main reason is that the overhead
> is amplified when running inside VM. The most cost I have seen is
> inside idle pat
On Tue, Aug 29, 2017 at 11:46:35AM +, Yang Zhang wrote:
> So far, pv_idle_ops.poll is the only ops for pv_idle. .poll is called in
> idle path which will polling for a while before we enter the real idle
> state.
>
> In virtualization, idle path includes several heavy operations
> includes tim
On Fri, Oct 28, 2016 at 04:11:26AM -0400, Pan Xinhui wrote:
> From: Juergen Gross
>
> Support the vcpu_is_preempted() functionality under Xen. This will
> enhance lock performance on overcommitted hosts (more runnable vcpus
> than physical cpus in the system) as doing busy waits for preempted
> v
On Fri, Oct 28, 2016 at 04:11:16AM -0400, Pan Xinhui wrote:
> change from v5:
> split x86/kvm patch into guest/host part.
> introduce kvm_write_guest_offset_cached.
> fix some typos.
> rebase patch onto 4.9.2
> change from v4:
> split x86 kvm vcpu preempted check into
On Thu, Mar 26, 2015 at 09:21:53PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 25, 2015 at 03:47:39PM -0400, Konrad Rzeszutek Wilk wrote:
> > Ah nice. That could be spun out as a separate patch to optimize the existing
> > ticket locks I presume.
>
> Yes I suppose we can do
On Mon, Mar 16, 2015 at 02:16:13PM +0100, Peter Zijlstra wrote:
> Hi Waiman,
>
> As promised; here is the paravirt stuff I did during the trip to BOS last
> week.
>
> All the !paravirt patches are more or less the same as before (the only real
> change is the copyright lines in the first patch).
On Wed, Mar 04, 2015 at 05:47:03PM -0800, Luis R. Rodriguez wrote:
> On Wed, Mar 4, 2015 at 6:36 AM, Andrey Ryabinin
> wrote:
> > On 03/03/2015 07:02 PM, Konrad Rzeszutek Wilk wrote:
> >> If it is like that - then just using what had to be implemented
> >> for the
On Tue, Mar 03, 2015 at 06:38:20PM +0300, Andrey Ryabinin wrote:
> On 03/03/2015 05:16 PM, Konrad Rzeszutek Wilk wrote:
> > On Tue, Mar 03, 2015 at 04:15:06PM +0300, Andrey Ryabinin wrote:
> >> On 03/03/2015 12:40 PM, Luis R. Rodriguez wrote:
> >>> Andrey,
> &
On Tue, Mar 03, 2015 at 04:15:06PM +0300, Andrey Ryabinin wrote:
> On 03/03/2015 12:40 PM, Luis R. Rodriguez wrote:
> > Andrey,
> >
> > I believe that on Xen we should disable kasan, would like confirmation
>
> I guess Xen guests won't work with kasan because Xen guests don't set up
> shadow
>
On Wed, Oct 29, 2014 at 04:19:10PM -0400, Waiman Long wrote:
> This patch adds the necessary KVM specific code to allow KVM to
> support the CPU halting and kicking operations needed by the queue
> spinlock PV code.
>
> Two KVM guests of 20 CPU cores (2 nodes) were created for performance
> testin
On Tue, Nov 25, 2014 at 07:33:58PM -0500, Waiman Long wrote:
> On 10/27/2014 02:02 PM, Konrad Rzeszutek Wilk wrote:
> >On Mon, Oct 27, 2014 at 01:38:20PM -0400, Waiman Long wrote:
> >>
> >>My concern is that spin_unlock() can be called in many places, including
> >
On Wed, Oct 29, 2014 at 04:19:09PM -0400, Waiman Long wrote:
> This patch adds para-virtualization support to the queue spinlock
> code base with minimal impact to the native case. There are some
> minor code changes in the generic qspinlock.c file which should be
> usable in other architectures. T
On Sun, Nov 02, 2014 at 09:32:20AM -0800, Josh Triplett wrote:
> This will allow making set_iopl_mask optional later.
>
> Signed-off-by: Josh Triplett
Reviewed-by: Konrad Rzeszutek Wilk
> ---
> arch/x86/include/asm/paravirt_types.h | 1 +
> arch/x86/kernel/paravirt.c
On Mon, Oct 27, 2014 at 01:38:20PM -0400, Waiman Long wrote:
> On 10/24/2014 04:54 AM, Peter Zijlstra wrote:
> >On Thu, Oct 16, 2014 at 02:10:38PM -0400, Waiman Long wrote:
> >
> >>Since enabling paravirt spinlock will disable unlock function inlining,
> >>a jump label can be added to the unlock fu
On Tue, Sep 30, 2014 at 11:01:29AM -0700, Andy Lutomirski wrote:
> On Tue, Sep 30, 2014 at 10:53 AM, Konrad Rzeszutek Wilk
> wrote:
> >> x86 will be worse than PPC, too: the special case needed to support
> >> QEMU 2.2 with IOMMU and virtio enabled with a Xen guest will
> x86 will be worse than PPC, too: the special case needed to support
> QEMU 2.2 with IOMMU and virtio enabled with a Xen guest will be fairly
> large and disgusting and will only exist to support something that IMO
> should never have existed in the first place.
I don't follow.
>
> PPC at least
On Tue, Sep 16, 2014 at 10:22:25PM -0700, Andy Lutomirski wrote:
> This fixes virtio on Xen guests as well as on any other platform
> that uses virtio_pci on which physical addresses don't match bus
> addresses.
I can do 'Reviewed-by: Konrad Rzeszutek Wilk '
but not sure
On Wed, Sep 03, 2014 at 06:53:33AM +1000, Benjamin Herrenschmidt wrote:
> On Mon, 2014-09-01 at 22:55 -0700, Andy Lutomirski wrote:
> >
> > On x86, at least, I doubt that we'll ever see a physically addressed
> > PCI virtio device for which ACPI advertises an IOMMU, since any sane
> > hypervisor w
On Thu, Aug 28, 2014 at 07:31:16AM +1000, Benjamin Herrenschmidt wrote:
> On Wed, 2014-08-27 at 20:40 +0930, Rusty Russell wrote:
>
> > Hi Andy,
> >
> > This has long been a source of contention. virtio assumes that
> > the hypervisor can decode guest-physical addresses.
> >
> >
On Wed, Aug 27, 2014 at 01:52:50PM +0200, Michael S. Tsirkin wrote:
> On Wed, Aug 27, 2014 at 08:40:51PM +0930, Rusty Russell wrote:
> > Andy Lutomirski writes:
> > > Currently, a lot of the virtio code assumes that bus (i.e. hypervisor)
> > > addresses are the same as physical address. This is f
On Wed, Aug 27, 2014 at 07:10:20PM +0100, Stefan Hajnoczi wrote:
> On Wed, Aug 27, 2014 at 6:27 PM, Konrad Rzeszutek Wilk
> wrote:
> > On Wed, Aug 27, 2014 at 07:46:46AM +0100, Stefan Hajnoczi wrote:
> >> On Tue, Aug 26, 2014 at 10:16 PM, Andy Lutomirski
> >&g
On Wed, Aug 27, 2014 at 10:35:10AM -0700, Andy Lutomirski wrote:
> On Wed, Aug 27, 2014 at 10:32 AM, Konrad Rzeszutek Wilk
> wrote:
> > On Tue, Aug 26, 2014 at 02:17:02PM -0700, Andy Lutomirski wrote:
> >> A virtqueue is a coherent DMA mapping. Use the DMA API for it.
>
On Wed, Aug 27, 2014 at 09:29:36AM +0200, Christian Borntraeger wrote:
> On 26/08/14 23:17, Andy Lutomirski wrote:
> > virtio_ring currently sends the device (usually a hypervisor)
> > physical addresses of its I/O buffers. This is okay when DMA
> > addresses and physical addresses are the same th
On Tue, Aug 26, 2014 at 02:17:02PM -0700, Andy Lutomirski wrote:
> A virtqueue is a coherent DMA mapping. Use the DMA API for it.
> This fixes virtio_pci on Xen.
>
> Signed-off-by: Andy Lutomirski
> ---
> drivers/virtio/virtio_pci.c | 25 ++---
> 1 file changed, 18 insertion
On Wed, Aug 27, 2014 at 07:46:46AM +0100, Stefan Hajnoczi wrote:
> On Tue, Aug 26, 2014 at 10:16 PM, Andy Lutomirski wrote:
> > There are two outstanding issues. virtio_net warns if DMA debugging
> > is on because it does DMA from the stack. (The warning is correct.)
> > This also is likely to d
On Mon, Aug 25, 2014 at 10:18:46AM -0700, Andy Lutomirski wrote:
> Currently, a lot of the virtio code assumes that bus (i.e. hypervisor)
> addresses are the same as physical address. This is false on Xen, so
> virtio is completely broken. I wouldn't be surprised if it also
> becomes a problem th
On Mon, Jul 07, 2014 at 05:27:34PM +0200, Peter Zijlstra wrote:
> On Fri, Jun 20, 2014 at 09:46:08AM -0400, Konrad Rzeszutek Wilk wrote:
> > I dug in the code and I have some comments about it, but before
> > I post them I was wondering if you have any plans to run any perfor
On Mon, Jun 23, 2014 at 06:12:00PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 17, 2014 at 04:03:29PM -0400, Konrad Rzeszutek Wilk wrote:
> > > > + new = tail | (val & _Q_LOCKED_MASK);
> > > > +
> > > > +
On Mon, Jun 23, 2014 at 05:56:50PM +0200, Peter Zijlstra wrote:
> On Mon, Jun 16, 2014 at 04:49:18PM -0400, Konrad Rzeszutek Wilk wrote:
> > > Index: linux-2.6/kernel/locking/mcs_spinlock.h
> > > ===
> >
On Mon, Jun 23, 2014 at 06:26:22PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 17, 2014 at 04:05:31PM -0400, Konrad Rzeszutek Wilk wrote:
> > > + * The basic principle of a queue-based spinlock can best be understood
> > > + * by studying a classic queue-based spinlock impl
On Sun, Jun 15, 2014 at 02:47:07PM +0200, Peter Zijlstra wrote:
> Add minimal paravirt support.
>
> The code aims for minimal impact on the native case.
Woot!
>
> On the lock side we add one jump label (asm_goto) and 4 paravirt
> callee saved calls that default to NOPs. The only effects are the
On Sun, Jun 15, 2014 at 02:47:02PM +0200, Peter Zijlstra wrote:
> From: Peter Zijlstra
>
> When we allow for a max NR_CPUS < 2^14 we can optimize the pending
> wait-acquire and the xchg_tail() operations.
>
> By growing the pending bit to a byte, we reduce the tail to 16bit.
> This means we can
On Sun, Jun 15, 2014 at 02:47:04PM +0200, Peter Zijlstra wrote:
> From: Waiman Long
>
> Currently, atomic_cmpxchg() is used to get the lock. However, this is
> not really necessary if there is more than one task in the queue and
> the queue head don't need to reset the queue code word. For that c
Acked-by: Konrad Rzeszutek Wilk
> ---
> arch/x86/include/asm/spinlock.h      | 4 ++--
> arch/x86/kernel/kvm.c                | 2 +-
> arch/x86/kernel/paravirt-spinlocks.c | 4 ++--
> arch/x86/xen/spinlock.c              | 2 +-
> 4 files changed, 6 insertions(+), 6 d
On Sun, Jun 15, 2014 at 02:47:05PM +0200, Peter Zijlstra wrote:
> When we detect a hypervisor (!paravirt, see later patches), revert to
Please spell out the name of the patches.
> a simple test-and-set lock to avoid the horrors of queue preemption.
Heheh.
>
> Signed-off-by: Peter Zijlstra
> --
> >>However, I *do* agree with you that it's simpler to just squash this patch
> >>into 01/11.
> >Uh, did I say that? Oh I said why don't make it right the first time!
> >
> >I meant in terms of seperating the slowpath (aka the bytelock on the pending
> >bit) from the queue (MCS code). Or renaming
On Wed, Jun 18, 2014 at 01:37:45PM +0200, Paolo Bonzini wrote:
> Il 17/06/2014 22:55, Konrad Rzeszutek Wilk ha scritto:
> >On Sun, Jun 15, 2014 at 02:47:01PM +0200, Peter Zijlstra wrote:
> >>From: Waiman Long
> >>
> >>This patch extracts the logic for t
On Wed, Jun 18, 2014 at 01:29:48PM +0200, Paolo Bonzini wrote:
> Il 17/06/2014 22:36, Konrad Rzeszutek Wilk ha scritto:
> >+/* One more attempt - but if we fail mark it as pending. */
> >+if (val == _Q_LOCKED_VAL) {
> >+new = Q_LOCKE
On Jun 17, 2014 6:25 PM, Waiman Long wrote:
>
> On 06/17/2014 05:10 PM, Konrad Rzeszutek Wilk wrote:
> > On Tue, Jun 17, 2014 at 05:07:29PM -0400, Konrad Rzeszutek Wilk wrote:
> >> On Tue, Jun 17, 2014 at 04:51:57PM -0400, Waiman Long wrote:
> >>> On 06/17/201
On Tue, Jun 17, 2014 at 05:07:29PM -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Jun 17, 2014 at 04:51:57PM -0400, Waiman Long wrote:
> > On 06/17/2014 04:36 PM, Konrad Rzeszutek Wilk wrote:
> > >On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
> > >>
On Tue, Jun 17, 2014 at 04:51:57PM -0400, Waiman Long wrote:
> On 06/17/2014 04:36 PM, Konrad Rzeszutek Wilk wrote:
> >On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
> >>Because the qspinlock needs to touch a second cacheline; add a pending
> >>bit
On Sun, Jun 15, 2014 at 03:16:54PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 12, 2014 at 04:48:41PM -0400, Waiman Long wrote:
> > I don't have a good understanding of the kernel alternatives mechanism.
>
> I didn't either; I do now, cost me a whole day reading up on
> alternative/paravirt code pa
On Sun, Jun 15, 2014 at 02:47:01PM +0200, Peter Zijlstra wrote:
> From: Waiman Long
>
> This patch extracts the logic for the exchange of new and previous tail
> code words into a new xchg_tail() function which can be optimized in a
> later patch.
And also adds a third try on acquiring the lock.
On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
> Because the qspinlock needs to touch a second cacheline; add a pending
> bit and allow a single in-word spinner before we punt to the second
> cacheline.
Could you add this in the description please:
And by second cacheline we mean
> + * The basic principle of a queue-based spinlock can best be understood
> + * by studying a classic queue-based spinlock implementation called the
> + * MCS lock. The paper below provides a good description for this kind
> + * of lock.
> + *
> + * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
>
> > + new = tail | (val & _Q_LOCKED_MASK);
> > +
> > + old = atomic_cmpxchg(&lock->val, val, new);
> > + if (old == val)
> > + break;
> > +
> > + val = old;
> > + }
> > +
> > + /*
> > +  * we won the trylock; forget about queue
On Sun, Jun 15, 2014 at 02:46:57PM +0200, Peter Zijlstra wrote:
> Since Waiman seems incapable of doing simple things; here's my take on the
> paravirt crap.
>
> The first few patches are taken from Waiman's latest series, but the virt
> support is completely new. Its primary aim is to not mess up
On Sun, Jun 15, 2014 at 02:46:58PM +0200, Peter Zijlstra wrote:
> From: Waiman Long
>
> This patch introduces a new generic queue spinlock implementation that
> can serve as an alternative to the default ticket spinlock. Compared
> with the ticket spinlock, this queue spinlock should be almost as
> Raghavendra KT had done some performance testing on this patch with
> the following results:
>
> Overall we are seeing good improvement for pv-unfair version.
>
> System: 32 cpu sandybridge with HT on (4 node with 32 GB each)
> Guest : 8GB with 16 vcpu/VM.
> Average was taken over 8-10 data poi
On Wed, May 07, 2014 at 11:01:28AM -0400, Waiman Long wrote:
> v9->v10:
> - Make some minor changes to qspinlock.c to accommodate review feedback.
> - Change author to PeterZ for 2 of the patches.
> - Include Raghavendra KT's test results in patch 18.
Any chance you can post these on a git t
On Wed, Apr 23, 2014 at 01:43:58PM -0400, Waiman Long wrote:
> On 04/23/2014 10:56 AM, Konrad Rzeszutek Wilk wrote:
> >On Wed, Apr 23, 2014 at 10:23:43AM -0400, Waiman Long wrote:
> >>On 04/18/2014 05:40 PM, Waiman Long wrote:
> >>>On 04/18/2014 03:05 PM, Peter Zijlst
On Wed, Apr 23, 2014 at 10:23:43AM -0400, Waiman Long wrote:
> On 04/18/2014 05:40 PM, Waiman Long wrote:
> >On 04/18/2014 03:05 PM, Peter Zijlstra wrote:
> >>On Fri, Apr 18, 2014 at 01:52:50PM -0400, Waiman Long wrote:
> >>>I am confused by your notation.
> >>Nah, I think I was confused :-) Make t
On Fri, Apr 18, 2014 at 12:23:29PM -0400, Waiman Long wrote:
> On 04/18/2014 03:42 AM, Ingo Molnar wrote:
> >* Waiman Long wrote:
> >
> >>Because the qspinlock needs to touch a second cacheline; add a pending
> >>bit and allow a single in-word spinner before we punt to the second
> >>cacheline.
>
On Thu, Apr 17, 2014 at 09:48:36PM -0400, Waiman Long wrote:
> On 04/17/2014 01:23 PM, Konrad Rzeszutek Wilk wrote:
> >On Thu, Apr 17, 2014 at 11:03:52AM -0400, Waiman Long wrote:
> >>v8->v9:
> >> - Integrate PeterZ's version of the queue spinlock pa
On Thu, Apr 17, 2014 at 11:03:52AM -0400, Waiman Long wrote:
> v8->v9:
> - Integrate PeterZ's version of the queue spinlock patch with some
> modification:
> http://lkml.kernel.org/r/20140310154236.038181...@infradead.org
> - Break the more complex patches into smaller ones to ease revi
On Mon, Apr 07, 2014 at 04:12:58PM +0200, Peter Zijlstra wrote:
> On Fri, Apr 04, 2014 at 12:57:27PM -0400, Konrad Rzeszutek Wilk wrote:
> > On Fri, Apr 04, 2014 at 03:00:12PM +0200, Peter Zijlstra wrote:
> > >
> > > So I'm just not ever going to pick up this patch
On Fri, Apr 04, 2014 at 01:58:15PM -0400, Konrad Rzeszutek Wilk wrote:
> On Fri, Apr 04, 2014 at 01:13:17PM -0400, Waiman Long wrote:
> > On 04/04/2014 12:55 PM, Konrad Rzeszutek Wilk wrote:
> > >On Thu, Apr 03, 2014 at 10:57:18PM -0400, Waiman Long wrote:
> > >>
On Fri, Apr 04, 2014 at 01:13:17PM -0400, Waiman Long wrote:
> On 04/04/2014 12:55 PM, Konrad Rzeszutek Wilk wrote:
> >On Thu, Apr 03, 2014 at 10:57:18PM -0400, Waiman Long wrote:
> >>On 04/03/2014 01:23 PM, Konrad Rzeszutek Wilk wrote:
> >>>On Wed, Apr 02, 2014 at
On Fri, Apr 04, 2014 at 03:00:12PM +0200, Peter Zijlstra wrote:
>
> So I'm just not ever going to pick up this patch; I spent a week trying
> to reverse engineer this; I posted a 7 patch series creating the
> equivalent, but in a gradual and readable fashion:
>
> http://lkml.kernel.org/r/201403
On Thu, Apr 03, 2014 at 10:57:18PM -0400, Waiman Long wrote:
> On 04/03/2014 01:23 PM, Konrad Rzeszutek Wilk wrote:
> >On Wed, Apr 02, 2014 at 10:10:17PM -0400, Waiman Long wrote:
> >>On 04/02/2014 04:35 PM, Waiman Long wrote:
> >>>On 04/02/2014 10:32 AM, Konrad Rzes
On Wed, Apr 02, 2014 at 10:32:01AM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Apr 02, 2014 at 09:27:29AM -0400, Waiman Long wrote:
> > N.B. Sorry for the duplicate. This patch series were resent as the
> > original one was rejected by the vger.kernel.org list server
>
On Wed, Apr 02, 2014 at 10:10:17PM -0400, Waiman Long wrote:
> On 04/02/2014 04:35 PM, Waiman Long wrote:
> >On 04/02/2014 10:32 AM, Konrad Rzeszutek Wilk wrote:
> >>On Wed, Apr 02, 2014 at 09:27:29AM -0400, Waiman Long wrote:
> >>>N.B. Sorry for the duplicate. This p
> diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
> index a70fdeb..451e392 100644
> --- a/kernel/Kconfig.locks
> +++ b/kernel/Kconfig.locks
> @@ -229,4 +229,4 @@ config ARCH_USE_QUEUE_SPINLOCK
>
> config QUEUE_SPINLOCK
> def_bool y if ARCH_USE_QUEUE_SPINLOCK
> - depends on SMP
On Wed, Apr 02, 2014 at 09:27:29AM -0400, Waiman Long wrote:
> N.B. Sorry for the duplicate. This patch series were resent as the
> original one was rejected by the vger.kernel.org list server
> due to long header. There is no change in content.
>
> v7->v8:
> - Remove one unneeded atom
On Mar 20, 2014 11:40 PM, Waiman Long wrote:
>
> On 03/19/2014 04:28 PM, Konrad Rzeszutek Wilk wrote:
> > On Wed, Mar 19, 2014 at 04:14:05PM -0400, Waiman Long wrote:
> >> This patch adds a XEN init function to activate the unfair queue
> >> sp
On Wed, Mar 19, 2014 at 04:14:05PM -0400, Waiman Long wrote:
> This patch adds a XEN init function to activate the unfair queue
> spinlock in a XEN guest when the PARAVIRT_UNFAIR_LOCKS kernel config
> option is selected.
>
> Signed-off-by: Waiman Long
> ---
> arch/x86/xen/setup.c | 19
On Wed, Mar 19, 2014 at 04:14:00PM -0400, Waiman Long wrote:
> This patch makes the necessary changes at the x86 architecture
> specific layer to enable the use of queue spinlock for x86-64. As
> x86-32 machines are typically not multi-socket, the benefit of queue
> spinlock may not be apparent. So