Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops

2020-04-30 Thread Srivatsa Vaddagiri
* Jan Kiszka [2020-04-30 14:59:50]: > >I believe ivshmem2_virtio requires hypervisor to support PCI device emulation > >(for life-cycle management of VMs), which our hypervisor may not support. A > >simple shared memory and doorbell or message-queue based transport will work > >for us.

Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops

2020-04-30 Thread Srivatsa Vaddagiri
* Will Deacon [2020-04-30 11:41:50]: > On Thu, Apr 30, 2020 at 04:04:46PM +0530, Srivatsa Vaddagiri wrote: > > If CONFIG_VIRTIO_MMIO_OPS is defined, then I expect this to be > > unconditionally > > set to 'magic_qcom_ops' that uses hypervisor-supported interface fo

Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

2020-04-30 Thread Srivatsa Vaddagiri
* Will Deacon [2020-04-30 11:39:19]: > Hi Vatsa, > > On Thu, Apr 30, 2020 at 03:59:39PM +0530, Srivatsa Vaddagiri wrote: > > > What's stopping you from implementing the trapping support in the > > > hypervisor? Unlike the other patches you sent

Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

2020-04-30 Thread Srivatsa Vaddagiri
* Michael S. Tsirkin [2020-04-30 06:07:56]: > On Thu, Apr 30, 2020 at 03:32:55PM +0530, Srivatsa Vaddagiri wrote: > > The Type-1 hypervisor we are dealing with does not allow for MMIO > > transport. > > How about PCI then? Correct me if I am wrong, but basically virtio_

Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops

2020-04-30 Thread Srivatsa Vaddagiri
* Will Deacon [2020-04-30 11:14:32]: > > +#ifdef CONFIG_VIRTIO_MMIO_OPS > > > > +static struct virtio_mmio_ops *mmio_ops; > > + > > +#define virtio_readb(a) mmio_ops->mmio_readl((a)) > > +#define virtio_readw(a) mmio_ops->mmio_readl((a)) > > +#define virtio_readl(a)
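For readers without the patch at hand, the hunk quoted above boils down to an ops table that the virtio-mmio driver calls through instead of issuing readl()/writel() directly. A minimal sketch follows; only mmio_readl and the mmio_ops pointer appear in the quote, so the remaining callback names and the registration helper are illustrative assumptions, not the posted code.

    /* Sketch of the proposed indirection (illustrative, not the actual patch):
     * a transport backend supplies MMIO accessors, and virtio-mmio routes its
     * register reads/writes through them. */
    struct virtio_mmio_ops {
            u8   (*mmio_readb)(void __iomem *addr);
            u16  (*mmio_readw)(void __iomem *addr);
            u32  (*mmio_readl)(void __iomem *addr);
            void (*mmio_writeb)(u8 val, void __iomem *addr);
            void (*mmio_writew)(u16 val, void __iomem *addr);
            void (*mmio_writel)(u32 val, void __iomem *addr);
    };

    /* Hypothetical hook for a hypervisor-specific backend to install its ops. */
    int register_virtio_mmio_ops(struct virtio_mmio_ops *ops);

Note that, as quoted, both virtio_readb() and virtio_readw() expand to mmio_readl().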

Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

2020-04-30 Thread Srivatsa Vaddagiri
* Will Deacon [2020-04-30 11:08:22]: > > This patch is meant to seek comments. If it's considered to be in the right > > direction, I will work on making it more complete and send the next version! > > What's stopping you from implementing the trapping support in the > hypervisor? Unlike the other

[RFC/PATCH 1/1] virtio: Introduce MMIO ops

2020-04-30 Thread Srivatsa Vaddagiri
-by: Srivatsa Vaddagiri --- drivers/virtio/virtio_mmio.c | 131 ++- include/linux/virtio.h | 14 + 2 files changed, 94 insertions(+), 51 deletions(-) diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c index 97d5725..69bfa35 100644

[RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

2020-04-30 Thread Srivatsa Vaddagiri
methods introduced allows for seamless IO of config space. This patch is meant to seek comments. If it's considered to be in the right direction, I will work on making it more complete and send the next version! 1. https://lkml.org/lkml/2020/4/28/427 Srivatsa Vaddagiri (1): virtio: Introduce MMIO ops

Re: [PATCH 5/5] virtio: Add bounce DMA ops

2020-04-29 Thread Srivatsa Vaddagiri
* Michael S. Tsirkin [2020-04-29 06:20:48]: > On Wed, Apr 29, 2020 at 03:39:53PM +0530, Srivatsa Vaddagiri wrote: > > That would still not work I think where swiotlb is used for pass-through devices > > (when private memory is fine) as well as virtio devices (when shared memory >

Re: [PATCH 5/5] virtio: Add bounce DMA ops

2020-04-29 Thread Srivatsa Vaddagiri
* Michael S. Tsirkin [2020-04-29 05:52:05]: > > > So it seems that with modern Linux, all one needs > > > to do on x86 is mark the device as untrusted. > > > It's already possible to do this with ACPI and with OF - would that be > > > sufficient for achieving what this patchset is trying to do?

Re: [PATCH 5/5] virtio: Add bounce DMA ops

2020-04-29 Thread Srivatsa Vaddagiri
* Michael S. Tsirkin [2020-04-29 02:50:41]: > So it seems that with modern Linux, all one needs > to do on x86 is mark the device as untrusted. > It's already possible to do this with ACPI and with OF - would that be > sufficient for achieving what this patchset is trying to do? In my case, its

Re: [PATCH 5/5] virtio: Add bounce DMA ops

2020-04-28 Thread Srivatsa Vaddagiri
* Stefano Stabellini [2020-04-28 16:04:34]: > > > Is swiotlb commonly used for multiple devices that may be on different > > > trust > > > boundaries (and not behind a hardware iommu)? > > The trust boundary is not a good way of describing the scenario and I > think it leads to

Re: [PATCH 5/5] virtio: Add bounce DMA ops

2020-04-28 Thread Srivatsa Vaddagiri
* Michael S. Tsirkin [2020-04-28 16:41:04]: > > Won't we still need some changes to virtio to make use of its own pool (to > > bounce buffers)? Something similar to its own DMA ops proposed in this > > patch? > > If you are doing this for all devices, you need to either find a way > to do this

Re: [PATCH 5/5] virtio: Add bounce DMA ops

2020-04-28 Thread Srivatsa Vaddagiri
* Michael S. Tsirkin [2020-04-28 12:17:57]: > Okay, but how is all this virtio specific? For example, why not allow > separate swiotlbs for any type of device? > For example, this might make sense if a given device is from a > different, less trusted vendor. Is swiotlb commonly used for

[PATCH 3/5] swiotlb: Add alloc and free APIs

2020-04-28 Thread Srivatsa Vaddagiri
Move the memory allocation and free portion of the swiotlb driver into independent routines. They will be useful for drivers that need the swiotlb driver to just allocate/free memory chunks and not additionally bounce memory. Signed-off-by: Srivatsa Vaddagiri --- include/linux/swiotlb.h | 17
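A minimal sketch of the split being described, with hypothetical function names and signatures (the patch's actual interface may differ): allocation and free become usable on their own, without going through the map/unmap (bounce) path.

    /* Hypothetical interface: drivers that only want memory carved out of a
     * swiotlb region call these and never touch the bounce machinery. */
    void *swiotlb_alloc(struct swiotlb_pool *pool, size_t size);
    void swiotlb_free(struct swiotlb_pool *pool, void *vaddr, size_t size);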

[PATCH 4/5] swiotlb: Add API to register new pool

2020-04-28 Thread Srivatsa Vaddagiri
This patch adds an interface for the swiotlb driver to recognize a new memory pool. Upon successful initialization of the pool, swiotlb returns a handle, which needs to be passed as an argument for any future operations on the pool (map/unmap/alloc/free). Signed-off-by: Srivatsa Vaddagiri
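The flow described (hand swiotlb a memory region, get a handle back, pass that handle to every later operation) would look roughly like the sketch below. The function and variable names are illustrative assumptions, not the patch's actual API.

    /* Illustrative only: register a shared-memory region as a new swiotlb
     * pool and keep the returned handle for later map/unmap/alloc/free. */
    static struct swiotlb_pool *virtio_pool;

    static int setup_shared_pool(phys_addr_t paddr, void *vaddr, size_t size)
    {
            virtio_pool = swiotlb_register_pool("virtio-shared", paddr, vaddr, size);
            if (IS_ERR(virtio_pool))
                    return PTR_ERR(virtio_pool);
            return 0;
    }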

[PATCH 5/5] virtio: Add bounce DMA ops

2020-04-28 Thread Srivatsa Vaddagiri
will require swiotlb memory to be shared with backend VM). As a possible extension to this patch, we can provide an option for virtio to make use of default swiotlb memory pool itself, where no such conflicts may exist in a given deployment. Signed-off-by: Srivatsa Vaddagiri --- drivers/virtio/Makefile
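Tying the previous sketches together: the idea described in this series is that virtio's DMA ops bounce every buffer through a dedicated pool whose memory is shared with the backend VM, while pass-through devices keep using the default (private) swiotlb. A rough sketch under that assumption, reusing the hypothetical virtio_pool handle from above; swiotlb_pool_map() is likewise a made-up per-pool bounce helper, not the patch's API.

    /* Rough sketch only: map_page bounces the buffer into the dedicated
     * shared pool, so the backend VM only ever sees that memory. */
    static dma_addr_t virtio_dma_map_page(struct device *dev, struct page *page,
                                          unsigned long offset, size_t size,
                                          enum dma_data_direction dir,
                                          unsigned long attrs)
    {
            phys_addr_t phys = page_to_phys(page) + offset;

            return swiotlb_pool_map(virtio_pool, dev, phys, size, dir, attrs);
    }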

[PATCH 1/5] swiotlb: Introduce concept of swiotlb_pool

2020-04-28 Thread Srivatsa Vaddagiri
. Subsequent patches allow the swiotlb driver to work with more than one pool of memory. Signed-off-by: Srivatsa Vaddagiri --- drivers/xen/swiotlb-xen.c | 4 +- include/linux/swiotlb.h | 129 - kernel/dma/swiotlb.c | 359 +++--- 3

[PATCH 2/5] swiotlb: Allow for non-linear mapping between paddr and vaddr

2020-04-28 Thread Srivatsa Vaddagiri
. Signed-off-by: Srivatsa Vaddagiri --- include/linux/swiotlb.h | 2 ++ kernel/dma/swiotlb.c| 20 ++-- 2 files changed, 16 insertions(+), 6 deletions(-) diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index 8c7843f..c634b4d 100644 --- a/include/linux/swiotlb.h

[PATCH 0/5] virtio on Type-1 hypervisor

2020-04-28 Thread Srivatsa Vaddagiri
backend drivers as standalone programs (and not coupled with any VMM). Srivatsa Vaddagiri (5): swiotlb: Introduce concept of swiotlb_pool swiotlb: Allow for non-linear mapping between paddr and vaddr swiotlb: Add alloc and free APIs swiotlb: Add API to register new pool virtio: Add

[tip:sched/core] sched/fair: Fix SCHED_HRTICK bug leading to late preemption of tasks

2016-09-22 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: 8bf46a39be910937d4c9e8d999a7438a7ae1a75b Gitweb: http://git.kernel.org/tip/8bf46a39be910937d4c9e8d999a7438a7ae1a75b Author: Srivatsa Vaddagiri <va...@codeaurora.org> AuthorDate: Fri, 16 Sep 2016 18:28:51 -0700 Committer: Ingo Molnar <mi...@kernel.org> CommitDate:

[tip:sched/core] sched/fair: Fix SCHED_HRTICK bug leading to late preemption of tasks

2016-09-22 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: 8bf46a39be910937d4c9e8d999a7438a7ae1a75b Gitweb: http://git.kernel.org/tip/8bf46a39be910937d4c9e8d999a7438a7ae1a75b Author: Srivatsa Vaddagiri AuthorDate: Fri, 16 Sep 2016 18:28:51 -0700 Committer: Ingo Molnar CommitDate: Thu, 22 Sep 2016 15:20:18 +0200 sched/fair: Fix

[tip:x86/spinlocks] kvm: Paravirtual ticketlocks support for linux guests running on KVM hypervisor

2013-08-14 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: 92b75202e5e8790905f9441ccaea2456cc4621a5 Gitweb: http://git.kernel.org/tip/92b75202e5e8790905f9441ccaea2456cc4621a5 Author: Srivatsa Vaddagiri AuthorDate: Tue, 6 Aug 2013 14:55:41 +0530 Committer: Ingo Molnar CommitDate: Wed, 14 Aug 2013 13:12:35 +0200 kvm: Paravirtual

[tip:x86/spinlocks] kvm: Paravirtual ticketlocks support for linux guests running on KVM hypervisor

2013-08-14 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: 92b75202e5e8790905f9441ccaea2456cc4621a5 Gitweb: http://git.kernel.org/tip/92b75202e5e8790905f9441ccaea2456cc4621a5 Author: Srivatsa Vaddagiri va...@linux.vnet.ibm.com AuthorDate: Tue, 6 Aug 2013 14:55:41 +0530 Committer: Ingo Molnar mi...@kernel.org CommitDate: Wed, 14 Aug

[tip:x86/spinlocks] kvm: Paravirtual ticketlocks support for linux guests running on KVM hypervisor

2013-08-12 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: f9021f7fd9c8c8101c90b901053f99bfd0288021 Gitweb: http://git.kernel.org/tip/f9021f7fd9c8c8101c90b901053f99bfd0288021 Author: Srivatsa Vaddagiri AuthorDate: Tue, 6 Aug 2013 14:55:41 +0530 Committer: H. Peter Anvin CommitDate: Mon, 12 Aug 2013 09:03:57 -0700 kvm: Paravirtual

[tip:x86/spinlocks] kvm: Paravirtual ticketlocks support for linux guests running on KVM hypervisor

2013-08-12 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: f9021f7fd9c8c8101c90b901053f99bfd0288021 Gitweb: http://git.kernel.org/tip/f9021f7fd9c8c8101c90b901053f99bfd0288021 Author: Srivatsa Vaddagiri va...@linux.vnet.ibm.com AuthorDate: Tue, 6 Aug 2013 14:55:41 +0530 Committer: H. Peter Anvin h...@linux.intel.com CommitDate: Mon, 12

[tip:x86/spinlocks] kvm: Paravirtual ticketlocks support for linux guests running on KVM hypervisor

2013-08-10 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: 23f659a237e8f633f9605fdf9408a8d130ab72c9 Gitweb: http://git.kernel.org/tip/23f659a237e8f633f9605fdf9408a8d130ab72c9 Author: Srivatsa Vaddagiri AuthorDate: Fri, 9 Aug 2013 19:52:02 +0530 Committer: H. Peter Anvin CommitDate: Fri, 9 Aug 2013 07:54:24 -0700 kvm: Paravirtual

[tip:x86/spinlocks] kvm guest: Add configuration support to enable debug information for KVM Guests

2013-08-10 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: 1e20eb8557cdabf76473b09572be8aa8a2bb9bc0 Gitweb: http://git.kernel.org/tip/1e20eb8557cdabf76473b09572be8aa8a2bb9bc0 Author: Srivatsa Vaddagiri AuthorDate: Fri, 9 Aug 2013 19:52:01 +0530 Committer: H. Peter Anvin CommitDate: Fri, 9 Aug 2013 07:54:18 -0700 kvm guest: Add

[tip:x86/spinlocks] kvm: Paravirtual ticketlocks support for linux guests running on KVM hypervisor

2013-08-10 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: 23f659a237e8f633f9605fdf9408a8d130ab72c9 Gitweb: http://git.kernel.org/tip/23f659a237e8f633f9605fdf9408a8d130ab72c9 Author: Srivatsa Vaddagiri va...@linux.vnet.ibm.com AuthorDate: Fri, 9 Aug 2013 19:52:02 +0530 Committer: H. Peter Anvin h...@linux.intel.com CommitDate: Fri, 9

[tip:x86/spinlocks] kvm guest: Add configuration support to enable debug information for KVM Guests

2013-08-10 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: 1e20eb8557cdabf76473b09572be8aa8a2bb9bc0 Gitweb: http://git.kernel.org/tip/1e20eb8557cdabf76473b09572be8aa8a2bb9bc0 Author: Srivatsa Vaddagiri va...@linux.vnet.ibm.com AuthorDate: Fri, 9 Aug 2013 19:52:01 +0530 Committer: H. Peter Anvin h...@linux.intel.com CommitDate: Fri, 9

[tip:x86/spinlocks] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor

2013-08-08 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: b5eaeb3303fc3086f1d04deea48b5dfcfc4344c0 Gitweb: http://git.kernel.org/tip/b5eaeb3303fc3086f1d04deea48b5dfcfc4344c0 Author: Srivatsa Vaddagiri AuthorDate: Tue, 6 Aug 2013 17:15:21 +0530 Committer: H. Peter Anvin CommitDate: Thu, 8 Aug 2013 16:07:34 -0700 kvm : Paravirtual

[tip:x86/spinlocks] kvm guest : Add configuration support to enable debug information for KVM Guests

2013-08-08 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: 20a89c88f7d2458e12f66d7850cf17deec7daa1c Gitweb: http://git.kernel.org/tip/20a89c88f7d2458e12f66d7850cf17deec7daa1c Author: Srivatsa Vaddagiri AuthorDate: Tue, 6 Aug 2013 17:15:01 +0530 Committer: H. Peter Anvin CommitDate: Thu, 8 Aug 2013 16:07:30 -0700 kvm guest : Add

[tip:x86/spinlocks] kvm guest : Add configuration support to enable debug information for KVM Guests

2013-08-08 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: 20a89c88f7d2458e12f66d7850cf17deec7daa1c Gitweb: http://git.kernel.org/tip/20a89c88f7d2458e12f66d7850cf17deec7daa1c Author: Srivatsa Vaddagiri va...@linux.vnet.ibm.com AuthorDate: Tue, 6 Aug 2013 17:15:01 +0530 Committer: H. Peter Anvin h...@linux.intel.com CommitDate: Thu, 8

[tip:x86/spinlocks] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor

2013-08-08 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: b5eaeb3303fc3086f1d04deea48b5dfcfc4344c0 Gitweb: http://git.kernel.org/tip/b5eaeb3303fc3086f1d04deea48b5dfcfc4344c0 Author: Srivatsa Vaddagiri va...@linux.vnet.ibm.com AuthorDate: Tue, 6 Aug 2013 17:15:21 +0530 Committer: H. Peter Anvin h...@linux.intel.com CommitDate: Thu, 8

Re: [PATCH 1/2] cpuhotplug/nohz: Remove offline cpus from nohz-idle state

2013-01-07 Thread Srivatsa Vaddagiri
* Russell King - ARM Linux [2013-01-05 10:36:27]: > On Thu, Jan 03, 2013 at 06:58:38PM -0800, Srivatsa Vaddagiri wrote: > > I also think that the > > wait_for_completion() based wait in ARM's __cpu_die() can be replaced with a > > busy-loop based one, as the wait th

Re: [PATCH 1/2] cpuhotplug/nohz: Remove offline cpus from nohz-idle state

2013-01-07 Thread Srivatsa Vaddagiri
* Russell King - ARM Linux li...@arm.linux.org.uk [2013-01-05 10:36:27]: On Thu, Jan 03, 2013 at 06:58:38PM -0800, Srivatsa Vaddagiri wrote: I also think that the wait_for_completion() based wait in ARM's __cpu_die() can be replaced with a busy-loop based one, as the wait there in general

Re: [PATCH 2/2] Revert "nohz: Fix idle ticks in cpu summary line of /proc/stat" (commit 7386cdbf2f57ea8cff3c9fde93f206e58b9fe13f).

2013-01-04 Thread Srivatsa Vaddagiri
* Sergei Shtylyov [2013-01-04 16:13:42]: > >With offline cpus no longer being seen in nohz mode (ts->idle_active=0), we > >don't need the check for cpu_online() introduced in commit 7386cdbf. Offline > >Please also specify the summary of that commit in parens (or > however you like). I

Re: [PATCH 2/2] Revert nohz: Fix idle ticks in cpu summary line of /proc/stat (commit 7386cdbf2f57ea8cff3c9fde93f206e58b9fe13f).

2013-01-04 Thread Srivatsa Vaddagiri
* Sergei Shtylyov sshtyl...@mvista.com [2013-01-04 16:13:42]: With offline cpus no longer being seen in nohz mode (ts->idle_active=0), we don't need the check for cpu_online() introduced in commit 7386cdbf. Offline Please also specify the summary of that commit in parens (or however you

[PATCH 2/2] Revert "nohz: Fix idle ticks in cpu summary line of /proc/stat" (commit 7386cdbf2f57ea8cff3c9fde93f206e58b9fe13f).

2013-01-03 Thread Srivatsa Vaddagiri
istics). Cc: mho...@suse.cz Cc: srivatsa.b...@linux.vnet.ibm.com Signed-off-by: Srivatsa Vaddagiri --- fs/proc/stat.c | 14 -- 1 files changed, 4 insertions(+), 10 deletions(-) diff --git a/fs/proc/stat.c b/fs/proc/stat.c index e296572..64c3b31 100644 --- a/fs/proc/stat.c +++ b/f

[PATCH 1/2] cpuhotplug/nohz: Remove offline cpus from nohz-idle state

2013-01-03 Thread Srivatsa Vaddagiri
olnar Cc: "H. Peter Anvin" Cc: x...@kernel.org Cc: mho...@suse.cz Cc: srivatsa.b...@linux.vnet.ibm.com Signed-off-by: Srivatsa Vaddagiri --- arch/arm/kernel/process.c |9 - arch/arm/kernel/smp.c |2 +- arch/blackfin/kernel/process.c |8 arch/mi

[PATCH 0/2] cpuhotplug/nohz: Fix issue of "negative" idle time

2013-01-03 Thread Srivatsa Vaddagiri
On most architectures (arm, mips, s390, sh and x86) the idle thread of a cpu does not cleanly exit nohz state before dying upon hot-remove. As a result, an offline cpu is seen to be in nohz mode (ts->idle_active = 1) and its offline time can potentially be included in the total idle time reported via
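A minimal sketch of the shape of the fix being described, assuming the nohz exit is added just before the architecture-specific death path in the idle loop (the exact call sites differ per architecture):

    /* Sketch: have the dying CPU leave nohz-idle accounting
     * (ts->idle_active = 0) before it goes offline, so its offline time
     * is no longer folded into the reported idle time. */
    if (cpu_is_offline(smp_processor_id())) {
            tick_nohz_idle_exit();   /* exit nohz state cleanly */
            cpu_die();               /* arch-specific           */
    }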

[PATCH 0/2] cpuhotplug/nohz: Fix issue of negative idle time

2013-01-03 Thread Srivatsa Vaddagiri
On most architectures (arm, mips, s390, sh and x86) the idle thread of a cpu does not cleanly exit nohz state before dying upon hot-remove. As a result, an offline cpu is seen to be in nohz mode (ts->idle_active = 1) and its offline time can potentially be included in the total idle time reported via

[PATCH 1/2] cpuhotplug/nohz: Remove offline cpus from nohz-idle state

2013-01-03 Thread Srivatsa Vaddagiri
...@linux.vnet.ibm.com Signed-off-by: Srivatsa Vaddagiri va...@codeaurora.org --- arch/arm/kernel/process.c |9 - arch/arm/kernel/smp.c |2 +- arch/blackfin/kernel/process.c |8 arch/mips/kernel/process.c |6 +++--- arch/powerpc/kernel/idle.c |2

[PATCH 2/2] Revert nohz: Fix idle ticks in cpu summary line of /proc/stat (commit 7386cdbf2f57ea8cff3c9fde93f206e58b9fe13f).

2013-01-03 Thread Srivatsa Vaddagiri
: mho...@suse.cz Cc: srivatsa.b...@linux.vnet.ibm.com Signed-off-by: Srivatsa Vaddagiri va...@codeaurora.org --- fs/proc/stat.c | 14 -- 1 files changed, 4 insertions(+), 10 deletions(-) diff --git a/fs/proc/stat.c b/fs/proc/stat.c index e296572..64c3b31 100644 --- a/fs/proc/stat.c

[tip:sched/core] sched: Improve balance_cpu() to consider other cpus in its group as target of (pinned) task

2012-07-24 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: 88b8dac0a14c511ff41486b83a8c3d688936eec0 Gitweb: http://git.kernel.org/tip/88b8dac0a14c511ff41486b83a8c3d688936eec0 Author: Srivatsa Vaddagiri AuthorDate: Tue, 19 Jun 2012 17:43:15 +0530 Committer: Ingo Molnar CommitDate: Tue, 24 Jul 2012 13:58:06 +0200 sched: Improve

[tip:sched/core] sched: Improve balance_cpu() to consider other cpus in its group as target of (pinned) task

2012-07-24 Thread tip-bot for Srivatsa Vaddagiri
Commit-ID: 88b8dac0a14c511ff41486b83a8c3d688936eec0 Gitweb: http://git.kernel.org/tip/88b8dac0a14c511ff41486b83a8c3d688936eec0 Author: Srivatsa Vaddagiri va...@linux.vnet.ibm.com AuthorDate: Tue, 19 Jun 2012 17:43:15 +0530 Committer: Ingo Molnar mi...@kernel.org CommitDate: Tue, 24 Jul

Re: [PATCH] sched: revert load_balance_monitor()

2008-02-25 Thread Srivatsa Vaddagiri
On Mon, Feb 25, 2008 at 04:28:02PM +0100, Peter Zijlstra wrote: > Vatsa, would it make sense to take just that out, or just do a full > revert? Peter, 6b2d7700266b9402e12824e11e0099ae6a4a6a79 and 58e2d4ca581167c2a079f4ee02be2f0bc52e8729 are very much related. The latter changes how cpu

Re: [PATCH] sched: revert load_balance_monitor()

2008-02-25 Thread Srivatsa Vaddagiri
On Mon, Feb 25, 2008 at 04:28:02PM +0100, Peter Zijlstra wrote: Vatsa, would it make sense to take just that out, or just do a full revert? Peter, 6b2d7700266b9402e12824e11e0099ae6a4a6a79 and 58e2d4ca581167c2a079f4ee02be2f0bc52e8729 are very much related. The latter changes how cpu load

Re: 2.6.24-git4+ regression

2008-02-18 Thread Srivatsa Vaddagiri
On Mon, Feb 18, 2008 at 08:38:24AM +0100, Mike Galbraith wrote: > Here, it does not. It seems fine without CONFIG_FAIR_GROUP_SCHED. My hunch is it's because of the vruntime-driven preemption, which shoots up latencies (and the fact, perhaps, that Peter hasn't focused more on the SMP case yet!).

Re: 2.6.24-git4+ regression

2008-02-18 Thread Srivatsa Vaddagiri
On Mon, Feb 18, 2008 at 08:38:24AM +0100, Mike Galbraith wrote: Here, it does not. It seems fine without CONFIG_FAIR_GROUP_SCHED. My hunch is it's because of the vruntime-driven preemption, which shoots up latencies (and the fact, perhaps, that Peter hasn't focused more on the SMP case yet!).

Re: 2.6.24-git4+ regression

2008-02-14 Thread Srivatsa Vaddagiri
On Wed, Jan 30, 2008 at 02:56:09PM +0100, Lukas Hejtmanek wrote: > Hello, > > I noticed a short thread on LKML regarding "sched: add vslice" causing horrible > interactivity under load. > > I can see similar behavior. If I stress both CPU cores, even typing on the > keyboard suffers from huge latencies,

Re: 2.6.24-git4+ regression

2008-02-14 Thread Srivatsa Vaddagiri
On Wed, Jan 30, 2008 at 02:56:09PM +0100, Lukas Hejtmanek wrote: Hello, I noticed a short thread on LKML regarding sched: add vslice causing horrible interactivity under load. I can see similar behavior. If I stress both CPU cores, even typing on the keyboard suffers from huge latencies, I can

Re: Regression in latest sched-git

2008-02-13 Thread Srivatsa Vaddagiri
On Wed, Feb 13, 2008 at 10:04:44PM +0530, Dhaval Giani wrote: > I know I am missing something, but aren't we trying to reduce latencies > here? I guess Peter is referring to the latency in seeing fairness results. In other words, with the single-rq approach, you may require more time for the groups

Re: 2.6.25-rc1: volanoMark 45% regression

2008-02-13 Thread Srivatsa Vaddagiri
8e2d4ca581167c2a079f4ee02be2f0bc52e8729 > > Author: Srivatsa Vaddagiri <[EMAIL PROTECTED]> > > Date: Fri Jan 25 21:08:00 2008 +0100 > > > > sched: group scheduling, change how cpu load is calculated > > > > > > > > hackbench has about

Re: 2.6.25-rc1: volanoMark 45% regression

2008-02-13 Thread Srivatsa Vaddagiri
: Srivatsa Vaddagiri [EMAIL PROTECTED] Date: Fri Jan 25 21:08:00 2008 +0100 sched: group scheduling, change how cpu load is calculated hackbench has about 30% regression on 16-core tigerton, but has about 10% improvement on 8-core stoakley. In addition, tbench has about

Re: Regression in latest sched-git

2008-02-13 Thread Srivatsa Vaddagiri
On Wed, Feb 13, 2008 at 10:04:44PM +0530, Dhaval Giani wrote: I know I am missing something, but aren't we trying to reduce latencies here? I guess Peter is referring to the latency in seeing fairness results. In other words, with the single-rq approach, you may require more time for the groups to

Re: Regression in latest sched-git

2008-02-12 Thread Srivatsa Vaddagiri
On Tue, Feb 12, 2008 at 08:40:08PM +0100, Peter Zijlstra wrote: > Yes, latency isolation is the one thing I had to sacrifice in order to > get the normal latencies under control. Hi Peter, I don't have an easy solution in mind either to meet both fairness and latency goals in an acceptable

Re: Regression in latest sched-git

2008-02-12 Thread Srivatsa Vaddagiri
On Tue, Feb 12, 2008 at 08:40:08PM +0100, Peter Zijlstra wrote: Yes, latency isolation is the one thing I had to sacrifice in order to get the normal latencies under control. Hi Peter, I don't have an easy solution in mind either to meet both fairness and latency goals in an acceptable way.

Re: [RFC] Default child of a cgroup

2008-02-01 Thread Srivatsa Vaddagiri
On Thu, Jan 31, 2008 at 06:39:56PM -0800, Paul Menage wrote: > On Jan 30, 2008 6:40 PM, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote: > > > > Here are some questions that arise in this picture: > > > > 1. What is the relationship of the task-group in A/tasks w

Re: [RFC] Default child of a cgroup

2008-02-01 Thread Srivatsa Vaddagiri
On Thu, Jan 31, 2008 at 06:39:56PM -0800, Paul Menage wrote: On Jan 30, 2008 6:40 PM, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote: Here are some questions that arise in this picture: 1. What is the relationship of the task-group in A/tasks with the task-group in A/a1/tasks

[RFC] Default child of a cgroup

2008-01-30 Thread Srivatsa Vaddagiri
Hi, As we were implementing multiple-hierarchy support for the CPU controller, we hit some oddities in its implementation, partly related to the current cgroups implementation. Peter and I have been debating the exact solution and I thought of bringing that discussion to lkml. Consider the

[RFC] Default child of a cgroup

2008-01-30 Thread Srivatsa Vaddagiri
Hi, As we were implementing multiple-hierarchy support for the CPU controller, we hit some oddities in its implementation, partly related to the current cgroups implementation. Peter and I have been debating the exact solution and I thought of bringing that discussion to lkml. Consider the

Re: High wake up latencies with FAIR_USER_SCHED

2008-01-29 Thread Srivatsa Vaddagiri
On Tue, Jan 29, 2008 at 04:53:56PM +0100, Guillaume Chazarain wrote: > I just thought about something to restore low latencies with > FAIR_GROUP_SCHED, but it's possibly utter nonsense, so bear with me > ;-) The idea would be to reverse the trees upside down. The scheduler > would only see tasks

Re: scheduler scalability - cgroups, cpusets and load-balancing

2008-01-29 Thread Srivatsa Vaddagiri
On Tue, Jan 29, 2008 at 11:57:22AM +0100, Peter Zijlstra wrote: > On Tue, 2008-01-29 at 10:53 +0100, Peter Zijlstra wrote: > > > My thoughts were to make stronger use of disjoint cpu-sets. cgroups and > > cpusets are related, in that cpusets provide a property to a cgroup. > > However,

Re: scheduler scalability - cgroups, cpusets and load-balancing

2008-01-29 Thread Srivatsa Vaddagiri
On Tue, Jan 29, 2008 at 11:57:22AM +0100, Peter Zijlstra wrote: On Tue, 2008-01-29 at 10:53 +0100, Peter Zijlstra wrote: My thoughts were to make stronger use of disjoint cpu-sets. cgroups and cpusets are related, in that cpusets provide a property to a cgroup. However,

Re: High wake up latencies with FAIR_USER_SCHED

2008-01-29 Thread Srivatsa Vaddagiri
On Tue, Jan 29, 2008 at 04:53:56PM +0100, Guillaume Chazarain wrote: I just thought about something to restore low latencies with FAIR_GROUP_SCHED, but it's possibly utter nonsense, so bear with me ;-) The idea would be to reverse the trees upside down. The scheduler would only see tasks (on

Re: High wake up latencies with FAIR_USER_SCHED

2008-01-28 Thread Srivatsa Vaddagiri
On Mon, Jan 28, 2008 at 09:13:53PM +0100, Guillaume Chazarain wrote: > Unfortunately it seems to not be completely fixed, with this script: The maximum scheduling latency of a task with the group scheduler is: Lmax = latency to schedule group entity at level0 + latency to
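The formula is cut off above; restated (a paraphrase, not the exact text of the mail), the worst case simply adds up across the levels of the group hierarchy:

    Lmax ~= latency to schedule the group entity at level 0
          + latency to schedule the group entity at level 1
          + ...
          + latency to schedule the task within its own group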

Re: High wake up latencies with FAIR_USER_SCHED

2008-01-28 Thread Srivatsa Vaddagiri
On Mon, Jan 28, 2008 at 09:13:53PM +0100, Guillaume Chazarain wrote: Unfortunately it seems to not be completely fixed, with this script: The maximum scheduling latency of a task with the group scheduler is: Lmax = latency to schedule group entity at level0 + latency to

Re: High wake up latencies with FAIR_USER_SCHED

2008-01-27 Thread Srivatsa Vaddagiri
On Sun, Jan 27, 2008 at 09:01:15PM +0100, Guillaume Chazarain wrote: > I noticed some strangely high wake up latencies with FAIR_USER_SCHED > using this script: > We have two busy loops with UID=1. > And UID=2 maintains the running median of its wake up latency. > I get these latencies: > > #

Re: (ondemand) CPU governor regression between 2.6.23 and 2.6.24

2008-01-27 Thread Srivatsa Vaddagiri
On Sun, Jan 27, 2008 at 04:06:17PM +0100, Toralf Förster wrote: > > The third line (giving overall cpu usage stats) is what is interesting here. > > If you have more than one cpu, you can get cpu usage stats for each cpu > > in top by pressing 1. Can you provide this information with and w/o > >

Re: (ondemand) CPU governor regression between 2.6.23 and 2.6.24

2008-01-27 Thread Srivatsa Vaddagiri
On Sat, Jan 26, 2008 at 07:46:51PM +0100, Toralf Förster wrote: > > The problem is the same as described here: http://lkml.org/lkml/2007/10/21/85 > If I run dnetc even with the lowest priority, then the CPU stays at 600 MHz > regardless > of any other load (eg. rsyncing, svn update, compiling, ...) >

Re: (ondemand) CPU governor regression between 2.6.23 and 2.6.24

2008-01-27 Thread Srivatsa Vaddagiri
On Sun, Jan 27, 2008 at 04:06:17PM +0100, Toralf Förster wrote: The third line (giving overall cpu usage stats) is what is interesting here. If you have more than one cpu, you can get cpu usage stats for each cpu in top by pressing 1. Can you provide this information with and w/o

Re: High wake up latencies with FAIR_USER_SCHED

2008-01-27 Thread Srivatsa Vaddagiri
On Sun, Jan 27, 2008 at 09:01:15PM +0100, Guillaume Chazarain wrote: I noticed some strangely high wake up latencies with FAIR_USER_SCHED using this script: snip We have two busy loops with UID=1. And UID=2 maintains the running median of its wake up latency. I get these latencies: #

Re: [PATCH] sched: don't take a mutex from interrupt context

2008-01-22 Thread Srivatsa Vaddagiri
On Tue, Jan 22, 2008 at 05:47:34PM +0100, Peter Zijlstra wrote: > It should not, that would be another bug, but from a quick glance at the > code it doesn't do that. Hmm, I had it in the back of my mind that printk() could sleep. Looks like that has changed, and so the patch you sent should be fine.

Re: [PATCH] sched: don't take a mutex from interrupt context

2008-01-22 Thread Srivatsa Vaddagiri
On Tue, Jan 22, 2008 at 05:25:38PM +0100, Peter Zijlstra wrote: > @@ -1428,9 +1428,9 @@ static void print_cfs_stats(struct seq_f > #ifdef CONFIG_FAIR_GROUP_SCHED > print_cfs_rq(m, cpu, &cpu_rq(cpu)->cfs); > #endif > - lock_task_group_list(); > + rcu_read_lock(); >

Re: [PATCH] sched: don't take a mutex from interrupt context

2008-01-22 Thread Srivatsa Vaddagiri
On Tue, Jan 22, 2008 at 05:25:38PM +0100, Peter Zijlstra wrote: @@ -1428,9 +1428,9 @@ static void print_cfs_stats(struct seq_f #ifdef CONFIG_FAIR_GROUP_SCHED print_cfs_rq(m, cpu, &cpu_rq(cpu)->cfs); #endif - lock_task_group_list(); + rcu_read_lock();

Re: [PATCH] sched: don't take a mutex from interrupt context

2008-01-22 Thread Srivatsa Vaddagiri
On Tue, Jan 22, 2008 at 05:47:34PM +0100, Peter Zijlstra wrote: It should not, that would be another bug, but from a quick glance at the code it doesn't do that. Hmm, I had it in the back of my mind that printk() could sleep. Looks like that has changed, and so the patch you sent should be fine.

Re: Regression with idle cpu cycle handling in 2.6.24 (compared to 2.6.22)

2008-01-21 Thread Srivatsa Vaddagiri
On Sun, Jan 20, 2008 at 09:03:38AM +0530, Dhaval Giani wrote: > > btw: writing 1 into "cpu_share" totally locks up the computer! > > > > Can you please provide some more details. Can you go into another > console (try ctrl-alt-f1) and try to reproduce the issue there. Could > you take a photo of

Re: Regression with idle cpu cycle handling in 2.6.24 (compared to 2.6.22)

2008-01-21 Thread Srivatsa Vaddagiri
On Sun, Jan 20, 2008 at 09:03:38AM +0530, Dhaval Giani wrote: btw: writing 1 into cpu_share totally locks up the computer! Can you please provide some more details. Can you go into another console (try ctrl-alt-f1) and try to reproduce the issue there. Could you take a photo of the

Re: [PATCH 00/11] another rt group sched update

2008-01-07 Thread Srivatsa Vaddagiri
On Mon, Jan 07, 2008 at 11:51:20AM +0100, Peter Zijlstra wrote: > - figure out what to do for UID based group scheduling, the current >implementation leaves it impossible for !root users to execute >real time tasks by setting rt_runtime_us to 0, and it has no way >to change it. > >

Re: [PATCH 00/11] another rt group sched update

2008-01-07 Thread Srivatsa Vaddagiri
On Mon, Jan 07, 2008 at 11:51:20AM +0100, Peter Zijlstra wrote: - figure out what to do for UID based group scheduling, the current implementation leaves it impossible for !root users to execute real time tasks by setting rt_runtime_us to 0, and it has no way to change it.

Re: [PATCH] sched: cpu accounting controller (V2)

2007-11-30 Thread Srivatsa Vaddagiri
On Fri, Nov 30, 2007 at 01:35:13PM +0100, Ingo Molnar wrote: > * Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote: > > > Here's V2 of the cpu accounting controller patch, which makes > > accounting scale better on SMP systems by splitting the usage counter > > to be
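The per-cpu split referred to here follows a standard pattern; a sketch of the idea (not the patch itself): each group keeps one usage counter per cpu, the charge path updates only the local counter without taking a per-group lock, and readers sum across cpus.

    struct cpuacct {
            u64 __percpu *cpuusage;   /* e.g. allocated with alloc_percpu(u64) */
    };

    /* Hot path: bump only this cpu's counter, no shared lock. */
    static void cpuacct_charge(struct cpuacct *ca, int cpu, u64 cputime)
    {
            *per_cpu_ptr(ca->cpuusage, cpu) += cputime;
    }

    /* Read side: sum the per-cpu counters. */
    static u64 cpuacct_read_usage(struct cpuacct *ca)
    {
            u64 total = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    total += *per_cpu_ptr(ca->cpuusage, cpu);
            return total;
    }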

[PATCH] sched: cpu accounting controller (V2)

2007-11-30 Thread Srivatsa Vaddagiri
On Fri, Nov 30, 2007 at 01:48:33AM +0530, Srivatsa Vaddagiri wrote: > It is indeed an important todo. Right now we take a per-group global > lock on every accounting update (which can be very frequent) and hence > it is pretty bad. > > Ingo had expressed the need to reintroduce

[PATCH] sched: cpu accounting controller (V2)

2007-11-30 Thread Srivatsa Vaddagiri
On Fri, Nov 30, 2007 at 01:48:33AM +0530, Srivatsa Vaddagiri wrote: It is indeed an important todo. Right now we take a per-group global lock on every accounting update (which can be very frequent) and hence it is pretty bad. Ingo had expressed the need to reintroduce this patch asap

Re: [PATCH] sched: cpu accounting controller (V2)

2007-11-30 Thread Srivatsa Vaddagiri
On Fri, Nov 30, 2007 at 01:35:13PM +0100, Ingo Molnar wrote: * Srivatsa Vaddagiri [EMAIL PROTECTED] wrote: Here's V2 of the cpu accounting controller patch, which makes accounting scale better on SMP systems by splitting the usage counter to be per-cpu. thanks, applied. But you don't

Re: [PATCH] sched: cpu accounting controller

2007-11-29 Thread Srivatsa Vaddagiri
an be made static. This symbol is needed in the kernel/cgroup.c file, where it does this: static struct cgroup_subsys *subsys[] = { #include <linux/cgroup_subsys.h> }; and hence it can't be static. Thanks for the rest of your comments. I have fixed them in this version below: Signed-off-by: Srivatsa Vaddagiri <[EMA
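For readers unfamiliar with the construct being referenced: kernel/cgroup.c builds its subsystem array by defining a SUBSYS() macro and then including cgroup_subsys.h, so every controller's <name>_subsys object must have external linkage. Simplified:

    /* include/linux/cgroup_subsys.h lists each controller, e.g.: */
    SUBSYS(cpuacct)

    /* kernel/cgroup.c expands that list into an array of pointers: */
    #define SUBSYS(_x) &_x ## _subsys,
    static struct cgroup_subsys *subsys[] = {
    #include <linux/cgroup_subsys.h>
    };
    #undef SUBSYS

    /* ...which is why cpuacct_subsys, defined in the controller's own file
     * (kernel/cpu_acct.c in this patch), cannot be static. */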

Re: [PATCH] sched: cpu accounting controller

2007-11-29 Thread Srivatsa Vaddagiri
On Thu, Nov 29, 2007 at 08:20:58PM +0100, Ingo Molnar wrote: > ok, this looks certainly doable for v2.6.24. I've added it to the > scheduler fixes queue and will let it brew there for a few days and send > it to Linus after that if everything goes fine - unless anyone objects. Thanks. --

[PATCH] sched: cpu accounting controller

2007-11-29 Thread Srivatsa Vaddagiri
same accounting information. Todo: - Make the accounting scalable on SMP systems (perhaps for 2.6.25) Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]> --- include/linux/cgroup_subsys.h |6 ++ include/linux/cpu_acct.h | 14 + init/Kconfig

[PATCH] sched: cpu accounting controller

2007-11-29 Thread Srivatsa Vaddagiri
on SMP systems (perhaps for 2.6.25) Signed-off-by: Srivatsa Vaddagiri [EMAIL PROTECTED] --- include/linux/cgroup_subsys.h |6 ++ include/linux/cpu_acct.h | 14 + init/Kconfig |7 ++ kernel/Makefile |1 kernel/cpu_acct.c

Re: [PATCH] sched: cpu accounting controller

2007-11-29 Thread Srivatsa Vaddagiri
On Thu, Nov 29, 2007 at 08:20:58PM +0100, Ingo Molnar wrote: ok, this looks certainly doable for v2.6.24. I've added it to the scheduler fixes queue and will let it brew there for a few days and send it to Linus after that if everything goes fine - unless anyone objects. Thanks. --

Re: [PATCH] sched: cpu accounting controller

2007-11-29 Thread Srivatsa Vaddagiri
struct cgroup_subsys *subsys[] = { #include <linux/cgroup_subsys.h> }; and hence it can't be static. Thanks for the rest of your comments. I have fixed them in this version below: Signed-off-by: Srivatsa Vaddagiri [EMAIL PROTECTED] --- include/linux/cgroup_subsys.h |6 ++ include/linux/cpu_acct.h

Re: [Patch 0/5] sched: group scheduler related patches (V4)

2007-11-27 Thread Srivatsa Vaddagiri
On Tue, Nov 27, 2007 at 01:53:12PM +0100, Ingo Molnar wrote: > > Fine. I will resubmit this patchset then once we are into 2.6.25 > > cycle. > > no need (unless you have bugfixes) i'm carrying this around in the > scheduler git tree. (it will show up in sched-devel.git) Cool .. Thx! It will

Re: [Patch 0/5] sched: group scheduler related patches (V4)

2007-11-27 Thread Srivatsa Vaddagiri
On Tue, Nov 27, 2007 at 12:09:10PM +0100, Ingo Molnar wrote: > thanks, it looks good - but the fact that we are at v4 of the patchset > underlines the point that this is more of a v2.6.25 patchset than a > v2.6.24 one. Fine. I will resubmit this patchset then once we are into 2.6.25 cycle. >

Re: [Patch 0/5] sched: group scheduler related patches (V4)

2007-11-27 Thread Srivatsa Vaddagiri
On Tue, Nov 27, 2007 at 12:09:10PM +0100, Ingo Molnar wrote: thanks, it looks good - but the fact that we are at v4 of the patchset underlines the point that this is more of a v2.6.25 patchset than a v2.6.24 one. Fine. I will resubmit this patchset then once we are into 2.6.25 cycle. Group

Re: [Patch 0/5] sched: group scheduler related patches (V4)

2007-11-27 Thread Srivatsa Vaddagiri
On Tue, Nov 27, 2007 at 01:53:12PM +0100, Ingo Molnar wrote: Fine. I will resubmit this patchset then once we are into 2.6.25 cycle. no need (unless you have bugfixes) i'm carrying this around in the scheduler git tree. (it will show up in sched-devel.git) Cool .. Thx! It will get me

[Patch 5/5] sched: Improve fairness of cpu bandwidth allocation for task groups

2007-11-26 Thread Srivatsa Vaddagiri
are introduced (under SCHED_DEBUG) to control the rate at which it runs. Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]> --- include/linux/sched.h |4 kernel/sched.c| 259 -- kernel/sched_fair.c | 88 ++--

[Patch 4/5] sched: introduce a mutex and corresponding API to serialize access to doms_cur[] array

2007-11-26 Thread Srivatsa Vaddagiri
rebalancing shares of task groups across cpus. Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]> --- kernel/sched.c | 19 +++ 1 files changed, 19 insertions(+) Index: current/kernel/sched.c === --- curren

[Patch 3/5 v2] sched: change how cpu load is calculated

2007-11-26 Thread Srivatsa Vaddagiri
to it. This version of patch (v2 of Patch 3/5) has a minor impact on code size (but should have no runtime/functional impact) for !CONFIG_FAIR_GROUP_SCHED case, but the overall code, IMHO, is neater compared to v1 of Patch 3/5 (because of lesser #ifdefs). I prefer v2 of Patch 3/5. Signed-off-by: Srivatsa

[Patch 3/5 v1] sched: change how cpu load is calculated

2007-11-26 Thread Srivatsa Vaddagiri
to it. This version of patch (v1 of Patch 3/5) has zero impact for !CONFIG_FAIR_GROUP_SCHED case. Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]> --- kernel/sched.c | 38 ++ kernel/sched_fair.c | 31 +++ kernel/sche

[Patch 2/5] sched: minor fixes for group scheduler

2007-11-26 Thread Srivatsa Vaddagiri
group list) Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]> --- kernel/sched.c | 34 ++ kernel/sched_fair.c |4 +++- 2 files changed, 29 insertions(+), 9 deletions(-) Index: current/kernel/s
