* Jan Kiszka [2020-04-30 14:59:50]:
> >I believe ivshmem2_virtio requires hypervisor to support PCI device emulation
> >(for life-cycle management of VMs), which our hypervisor may not support. A
> >simple shared memory and doorbell or message-queue based transport will work
> >for us.
>
>
* Will Deacon [2020-04-30 11:41:50]:
> On Thu, Apr 30, 2020 at 04:04:46PM +0530, Srivatsa Vaddagiri wrote:
> > If CONFIG_VIRTIO_MMIO_OPS is defined, then I expect this to be
> > unconditionally
> > set to 'magic_qcom_ops' that uses hypervisor-supported interface fo
* Will Deacon [2020-04-30 11:39:19]:
> Hi Vatsa,
>
> On Thu, Apr 30, 2020 at 03:59:39PM +0530, Srivatsa Vaddagiri wrote:
> > > What's stopping you from implementing the trapping support in the
> > > hypervisor? Unlike the other patches you sent
* Michael S. Tsirkin [2020-04-30 06:07:56]:
> On Thu, Apr 30, 2020 at 03:32:55PM +0530, Srivatsa Vaddagiri wrote:
> > The Type-1 hypervisor we are dealing with does not allow for MMIO
> > transport.
>
> How about PCI then?
Correct me if I am wrong, but basically virtio_
* Will Deacon [2020-04-30 11:14:32]:
> > +#ifdef CONFIG_VIRTIO_MMIO_OPS
> >
> > +static struct virtio_mmio_ops *mmio_ops;
> > +
> > +#define virtio_readb(a)	mmio_ops->mmio_readl((a))
> > +#define virtio_readw(a)	mmio_ops->mmio_readl((a))
> > +#define virtio_readl(a)
* Will Deacon [2020-04-30 11:08:22]:
> > This patch is meant to seek comments. If it's considered to be in the right
> > direction, I will work on making it more complete and send the next version!
>
> What's stopping you from implementing the trapping support in the
> hypervisor? Unlike the other
Signed-off-by: Srivatsa Vaddagiri
---
drivers/virtio/virtio_mmio.c | 131 ++-
include/linux/virtio.h | 14 +
2 files changed, 94 insertions(+), 51 deletions(-)
diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 97d5725..69bfa35 100644
methods introduced allow for seamless IO of config space.
This patch is meant to seek comments. If it's considered to be in the right
direction, I will work on making it more complete and send the next version!
1. https://lkml.org/lkml/2020/4/28/427
Srivatsa Vaddagiri (1):
virtio: Introduce MMIO ops
* Michael S. Tsirkin [2020-04-29 06:20:48]:
> On Wed, Apr 29, 2020 at 03:39:53PM +0530, Srivatsa Vaddagiri wrote:
> > That would still not work I think where swiotlb is used for pass-thr devices
> > (when private memory is fine) as well as virtio devices (when shared memory
>
* Michael S. Tsirkin [2020-04-29 05:52:05]:
> > > So it seems that with modern Linux, all one needs
> > > to do on x86 is mark the device as untrusted.
> > > It's already possible to do this with ACPI and with OF - would that be
> > > sufficient for achieving what this patchset is trying to do?
* Michael S. Tsirkin [2020-04-29 02:50:41]:
> So it seems that with modern Linux, all one needs
> to do on x86 is mark the device as untrusted.
> It's already possible to do this with ACPI and with OF - would that be
> sufficient for achieving what this patchset is trying to do?
In my case, its
* Stefano Stabellini [2020-04-28 16:04:34]:
> > > Is swiotlb commonly used for multiple devices that may be on different
> > > trust
> > > boundaries (and not behind a hardware iommu)?
>
> The trust boundary is not a good way of describing the scenario and I
> think it leads to
* Michael S. Tsirkin [2020-04-28 16:41:04]:
> > Won't we still need some changes to virtio to make use of its own pool (to
> > bounce buffers)? Something similar to its own DMA ops proposed in this
> > patch?
>
> If you are doing this for all devices, you need to either find a way
> to do this
* Michael S. Tsirkin [2020-04-28 12:17:57]:
> Okay, but how is all this virtio specific? For example, why not allow
> separate swiotlbs for any type of device?
> For example, this might make sense if a given device is from a
> different, less trusted vendor.
Is swiotlb commonly used for
Move the memory allocation and free portion of swiotlb driver
into independent routines. They will be useful for drivers that
need swiotlb driver to just allocate/free memory chunks and not
additionally bounce memory.
Signed-off-by: Srivatsa Vaddagiri
---
include/linux/swiotlb.h | 17
This patch adds an interface for the swiotlb driver to recognize
a new memory pool. Upon successful initialization of the pool,
swiotlb returns a handle, which needs to be passed as an argument
for any future operations on the pool (map/unmap/alloc/free).
Signed-off-by: Srivatsa Vaddagiri
will require swiotlb memory to be
shared with backend VM). As a possible extension to this patch,
we can provide an option for virtio to make use of default
swiotlb memory pool itself, where no such conflicts may exist in
a given deployment.
Signed-off-by: Srivatsa Vaddagiri
---
drivers/virtio/Makefile
.
Subsequent patches allow the swiotlb driver to work with more
than one pool of memory.
Signed-off-by: Srivatsa Vaddagiri
---
drivers/xen/swiotlb-xen.c | 4 +-
include/linux/swiotlb.h | 129 -
kernel/dma/swiotlb.c | 359 +++---
3
.
Signed-off-by: Srivatsa Vaddagiri
---
include/linux/swiotlb.h | 2 ++
kernel/dma/swiotlb.c | 20 ++--
2 files changed, 16 insertions(+), 6 deletions(-)
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 8c7843f..c634b4d 100644
--- a/include/linux/swiotlb.h
backend drivers as standalone programs (and not coupled
with any VMM).
Srivatsa Vaddagiri (5):
swiotlb: Introduce concept of swiotlb_pool
swiotlb: Allow for non-linear mapping between paddr and vaddr
swiotlb: Add alloc and free APIs
swiotlb: Add API to register new pool
virtio: Add
Commit-ID: 8bf46a39be910937d4c9e8d999a7438a7ae1a75b
Gitweb: http://git.kernel.org/tip/8bf46a39be910937d4c9e8d999a7438a7ae1a75b
Author: Srivatsa Vaddagiri <va...@codeaurora.org>
AuthorDate: Fri, 16 Sep 2016 18:28:51 -0700
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Thu, 22 Sep 2016 15:20:18 +0200
sched/fair: Fix
sched/fair: Fix
Commit-ID: 92b75202e5e8790905f9441ccaea2456cc4621a5
Gitweb: http://git.kernel.org/tip/92b75202e5e8790905f9441ccaea2456cc4621a5
Author: Srivatsa Vaddagiri
AuthorDate: Tue, 6 Aug 2013 14:55:41 +0530
Committer: Ingo Molnar
CommitDate: Wed, 14 Aug 2013 13:12:35 +0200
kvm: Paravirtual
Commit-ID: f9021f7fd9c8c8101c90b901053f99bfd0288021
Gitweb: http://git.kernel.org/tip/f9021f7fd9c8c8101c90b901053f99bfd0288021
Author: Srivatsa Vaddagiri
AuthorDate: Tue, 6 Aug 2013 14:55:41 +0530
Committer: H. Peter Anvin
CommitDate: Mon, 12 Aug 2013 09:03:57 -0700
kvm: Paravirtual
Commit-ID: 23f659a237e8f633f9605fdf9408a8d130ab72c9
Gitweb: http://git.kernel.org/tip/23f659a237e8f633f9605fdf9408a8d130ab72c9
Author: Srivatsa Vaddagiri
AuthorDate: Fri, 9 Aug 2013 19:52:02 +0530
Committer: H. Peter Anvin
CommitDate: Fri, 9 Aug 2013 07:54:24 -0700
kvm: Paravirtual
Commit-ID: 1e20eb8557cdabf76473b09572be8aa8a2bb9bc0
Gitweb: http://git.kernel.org/tip/1e20eb8557cdabf76473b09572be8aa8a2bb9bc0
Author: Srivatsa Vaddagiri
AuthorDate: Fri, 9 Aug 2013 19:52:01 +0530
Committer: H. Peter Anvin
CommitDate: Fri, 9 Aug 2013 07:54:18 -0700
kvm guest: Add
Commit-ID: b5eaeb3303fc3086f1d04deea48b5dfcfc4344c0
Gitweb: http://git.kernel.org/tip/b5eaeb3303fc3086f1d04deea48b5dfcfc4344c0
Author: Srivatsa Vaddagiri
AuthorDate: Tue, 6 Aug 2013 17:15:21 +0530
Committer: H. Peter Anvin
CommitDate: Thu, 8 Aug 2013 16:07:34 -0700
kvm : Paravirtual
Commit-ID: 20a89c88f7d2458e12f66d7850cf17deec7daa1c
Gitweb: http://git.kernel.org/tip/20a89c88f7d2458e12f66d7850cf17deec7daa1c
Author: Srivatsa Vaddagiri
AuthorDate: Tue, 6 Aug 2013 17:15:01 +0530
Committer: H. Peter Anvin
CommitDate: Thu, 8 Aug 2013 16:07:30 -0700
kvm guest : Add
* Russell King - ARM Linux [2013-01-05 10:36:27]:
> On Thu, Jan 03, 2013 at 06:58:38PM -0800, Srivatsa Vaddagiri wrote:
> > I also think that the
> > wait_for_completion() based wait in ARM's __cpu_die() can be replaced with a
> > busy-loop based one, as the wait th
* Sergei Shtylyov [2013-01-04 16:13:42]:
> >With offline cpus no longer being seen in nohz mode (ts->idle_active=0), we
> >don't need the check for cpu_online() introduced in commit 7386cdbf. Offline
>
>Please also specify the summary of that commit in parens (or
> however you like).
I
istics).
Cc: mho...@suse.cz
Cc: srivatsa.b...@linux.vnet.ibm.com
Signed-off-by: Srivatsa Vaddagiri
---
fs/proc/stat.c | 14 --
1 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/fs/proc/stat.c b/fs/proc/stat.c
index e296572..64c3b31 100644
--- a/fs/proc/stat.c
+++ b/f
olnar
Cc: "H. Peter Anvin"
Cc: x...@kernel.org
Cc: mho...@suse.cz
Cc: srivatsa.b...@linux.vnet.ibm.com
Signed-off-by: Srivatsa Vaddagiri
---
arch/arm/kernel/process.c |9 -
arch/arm/kernel/smp.c |2 +-
arch/blackfin/kernel/process.c |8
arch/mips/kernel/process.c |6 +++---
arch/powerpc/kernel/idle.c |2
On most architectures (arm, mips, s390, sh and x86) idle thread of a cpu does
not cleanly exit nohz state before dying upon hot-remove. As a result,
offline cpu is seen to be in nohz mode (ts->idle_active = 1) and its offline
time can potentially be included in total idle time reported via
Commit-ID: 88b8dac0a14c511ff41486b83a8c3d688936eec0
Gitweb: http://git.kernel.org/tip/88b8dac0a14c511ff41486b83a8c3d688936eec0
Author: Srivatsa Vaddagiri
AuthorDate: Tue, 19 Jun 2012 17:43:15 +0530
Committer: Ingo Molnar
CommitDate: Tue, 24 Jul 2012 13:58:06 +0200
sched: Improve
On Mon, Feb 25, 2008 at 04:28:02PM +0100, Peter Zijlstra wrote:
> Vatsa, would it make sense to take just that out, or just do a full
> revert?
Peter,
6b2d7700266b9402e12824e11e0099ae6a4a6a79 and
58e2d4ca581167c2a079f4ee02be2f0bc52e8729 are related very much. The
later changes how cpu
On Mon, Feb 18, 2008 at 08:38:24AM +0100, Mike Galbraith wrote:
> Here, it does not. It seems fine without CONFIG_FAIR_GROUP_SCHED.
My hunch is it's because of the vruntime driven preemption which shoots
up latencies (and the fact perhaps that Peter hasn't focused more on the SMP case
yet!).
On Wed, Jan 30, 2008 at 02:56:09PM +0100, Lukas Hejtmanek wrote:
> Hello,
>
> I noticed short thread in LKM regarding "sched: add vslice" causes horrible
> interactivity under load.
>
> I can see similar behavior. If I stress both CPU cores, even typing on
> keyboard suffers from huge latencies,
On Wed, Feb 13, 2008 at 10:04:44PM +0530, Dhaval Giani wrote:
> I know I am missing something, but aren't we trying to reduce latencies
> here?
I guess Peter is referring to the latency in seeing fairness results. In
other words, with single rq approach, you may require more time for the groups
8e2d4ca581167c2a079f4ee02be2f0bc52e8729
> > Author: Srivatsa Vaddagiri <[EMAIL PROTECTED]>
> > Date: Fri Jan 25 21:08:00 2008 +0100
> >
> > sched: group scheduling, change how cpu load is calculated
> >
> > hackbench has about 30% regression on 16-core tigerton, but has about 10%
> > improvement on 8-core stoakley.
> >
> > In addition, tbench has about
On Tue, Feb 12, 2008 at 08:40:08PM +0100, Peter Zijlstra wrote:
> Yes, latency isolation is the one thing I had to sacrifice in order to
> get the normal latencies under control.
Hi Peter,
I don't have an easy solution in mind either to meet both fairness
and latency goals in an acceptable way.
On Thu, Jan 31, 2008 at 06:39:56PM -0800, Paul Menage wrote:
> On Jan 30, 2008 6:40 PM, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
> >
> > Here are some questions that arise in this picture:
> >
> > 1. What is the relationship of the task-group in A/tasks w
Hi,
As we were implementing multiple-hierarchy support for CPU
controller, we hit some oddities in its implementation, partly related
to current cgroups implementation. Peter and I have been debating on the
exact solution and I thought of bringing that discussion to lkml.
Consider the
On Tue, Jan 29, 2008 at 04:53:56PM +0100, Guillaume Chazarain wrote:
> I just thought about something to restore low latencies with
> FAIR_GROUP_SCHED, but it's possibly utter nonsense, so bear with me
> ;-) The idea would be to reverse the trees upside down. The scheduler
> would only see tasks
On Tue, Jan 29, 2008 at 11:57:22AM +0100, Peter Zijlstra wrote:
> On Tue, 2008-01-29 at 10:53 +0100, Peter Zijlstra wrote:
>
> > My thoughts were to make stronger use of disjoint cpu-sets. cgroups and
> > cpusets are related, in that cpusets provide a property to a cgroup.
> > However,
On Mon, Jan 28, 2008 at 09:13:53PM +0100, Guillaume Chazarain wrote:
> Unfortunately it seems to not be completely fixed, with this script:
The maximum scheduling latency of a task with group scheduler is:
Lmax = latency to schedule group entity at level0 +
latency to
On Sun, Jan 27, 2008 at 09:01:15PM +0100, Guillaume Chazarain wrote:
> I noticed some strangely high wake up latencies with FAIR_USER_SCHED
> using this script:
> We have two busy loops with UID=1.
> And UID=2 maintains the running median of its wake up latency.
> I get these latencies:
>
> #
On Sun, Jan 27, 2008 at 04:06:17PM +0100, Toralf Förster wrote:
> > The third line (giving overall cpu usage stats) is what is interesting here.
> > If you have more than one cpu, you can get cpu usage stats for each cpu
> > in top by pressing 1. Can you provide this information with and w/o
> >
On Sat, Jan 26, 2008 at 07:46:51PM +0100, Toralf Förster wrote:
>
> The problem is the same as described here : http://lkml.org/lkml/2007/10/21/85
> If I run dnetc even with lowest prority than the CPU stays at 600 MHz
> regardless
> of any other load (eg. rsyncing, svn update, compiling, ...)
>
On Tue, Jan 22, 2008 at 05:47:34PM +0100, Peter Zijlstra wrote:
> It should not, that would be another bug, but from a quick glance at the
> code it doesn't do that.
Hmm I had it in my back of mind that printk() could sleep. Looks like
that has changed and so the patch you sent should be fine.
On Tue, Jan 22, 2008 at 05:25:38PM +0100, Peter Zijlstra wrote:
> @@ -1428,9 +1428,9 @@ static void print_cfs_stats(struct seq_f
> #ifdef CONFIG_FAIR_GROUP_SCHED
> print_cfs_rq(m, cpu, &cpu_rq(cpu)->cfs);
> #endif
> - lock_task_group_list();
> + rcu_read_lock();
>
On Sun, Jan 20, 2008 at 09:03:38AM +0530, Dhaval Giani wrote:
> > btw: writing 1 into "cpu_share" totally locks up the computer!
> >
>
> Can you please provide some more details. Can you go into another
> console (try ctrl-alt-f1) and try to reproduce the issue there. Could
> you take a photo of
On Mon, Jan 07, 2008 at 11:51:20AM +0100, Peter Zijlstra wrote:
> - figure out what to do for UID based group scheduling, the current
>implementation leaves it impossible for !root users to execute
>real time tasks by setting rt_runtime_us to 0, and it has no way
>to change it.
>
>
On Fri, Nov 30, 2007 at 01:35:13PM +0100, Ingo Molnar wrote:
> * Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
>
> > Here's V2 of the cpu accounting controller patch, which makes
> > accounting scale better on SMP systems by splitting the usage counter
> > to be per-cpu.
On Fri, Nov 30, 2007 at 01:48:33AM +0530, Srivatsa Vaddagiri wrote:
> It is indeed an important todo. Right now we take a per-group global
> lock on every accounting update (which can be very frequent) and hence
> it is pretty bad.
>
> Ingo had expressed the need to reintroduce
an be made static.
This symbol is needed in kernel/cgroup.c file, where it does this:
static struct cgroup_subsys *subsys[] = {
#include <linux/cgroup_subsys.h>
};
and hence it can't be static. Thanks for the rest of your comments. I
have fixed them in this version below:
Signed-off-by: Srivatsa Vaddagiri <[EMA
On Thu, Nov 29, 2007 at 08:20:58PM +0100, Ingo Molnar wrote:
> ok, this looks certainly doable for v2.6.24. I've added it to the
> scheduler fixes queue and will let it brew there for a few days and send
> it to Linus after that if everything goes fine - unless anyone objects.
Thanks.
--
same
accounting information.
Todo:
- Make the accounting scalable on SMP systems (perhaps
for 2.6.25)
Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]>
---
include/linux/cgroup_subsys.h |6 ++
include/linux/cpu_acct.h | 14 +
init/Kconfig
On Tue, Nov 27, 2007 at 01:53:12PM +0100, Ingo Molnar wrote:
> > Fine. I will resubmit this patchset then once we are into 2.6.25
> > cycle.
>
> no need (unless you have bugfixes) i'm carrying this around in the
> scheduler git tree. (it will show up in sched-devel.git)
Cool .. Thx! It will
On Tue, Nov 27, 2007 at 12:09:10PM +0100, Ingo Molnar wrote:
> thanks, it looks good - but the fact that we are at v4 of the patchset
> underlines the point that this is more of a v2.6.25 patchset than a
> v2.6.24 one.
Fine. I will resubmit this patchset then once we are into 2.6.25 cycle.
>
are introduced (under SCHED_DEBUG) to control the rate at which
it runs.
Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]>
---
include/linux/sched.h |4
kernel/sched.c | 259 --
kernel/sched_fair.c | 88 ++--
rebalancing
shares of task groups across cpus.
Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]>
---
kernel/sched.c | 19 +++
1 files changed, 19 insertions(+)
Index: current/kernel/sched.c
===
--- curren
to it.
This version of patch (v2 of Patch 3/5) has a minor impact on code size
(but should have no runtime/functional impact) for !CONFIG_FAIR_GROUP_SCHED
case, but the overall code, IMHO, is neater compared to v1 of Patch 3/5
(because of lesser #ifdefs).
I prefer v2 of Patch 3/5.
Signed-off-by: Srivatsa
to it.
This version of patch (v1 of Patch 3/5) has zero impact for
!CONFIG_FAIR_GROUP_SCHED case.
Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]>
---
kernel/sched.c | 38 ++
kernel/sched_fair.c | 31 +++
kernel/sche
group list)
Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]>
---
kernel/sched.c | 34 ++
kernel/sched_fair.c |4 +++-
2 files changed, 29 insertions(+), 9 deletions(-)
Index: current/kernel/s