In article [EMAIL PROTECTED],
you write:
So how do you reverse a CHROOT?
Assuming your process doesn't drop its root privileges, before you do
the initial chroot() you could do:
old_root = open("/", O_RDONLY);
Then later do
fchdir(old_root);
chroot(".");
But the cleaner and more portable
On Tue, 31 Oct 2000, Rik van Riel wrote:
Ummm, last I looked Linux held the Specweb99 record;
by a wide margin...
... but since then IBM/Zeus appear to have taken the lead:
http://www.zeus.com/news/articles/001004-001/
http://www.spec.org/osg/web99/results/res2000q3/
But they were using a
sys_swapon() sets SWP_USED in p->flags when it begins to set up a swap
area, and then calls vmalloc() to allocate p->swap_map[], which may
sleep. Most other users of the swap info structures either traverse the
swap list (to which the new swap area hasn't yet been added) or check
SWP_WRITEOK
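The window being described can be sketched as follows (illustrative kernel-style pseudocode, not the literal swapfile.c source; field and flag names as in the text above):

```
/* sys_swapon(), simplified */
p->flags = SWP_USED;                 /* area claimed, not yet usable */
p->swap_map = vmalloc(maxpages);     /* may sleep: other code can run here
                                        and observe a half-set-up entry */
/* ... finish initializing the swap area ... */
p->flags = SWP_USED | SWP_WRITEOK;   /* readers that test SWP_WRITEOK
                                        (not just SWP_USED) stay safe */
```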
This is the equivalent patch for Linux 2.2 (prepared against 2.2.19)
for the swapon/procfs race also described in my previous email.
sys_swapon() sets SWP_USED in p->flags when it begins to set up a swap
area, and then calls vmalloc() to allocate p->swap_map[], which may
sleep. Most other users
In article [EMAIL PROTECTED],
you write:
Safe, perhaps, but also completely useless: there is no way the user
can set up a functional environment inside the chroot. In other
words, it's all pain, no gain.
It could potentially be useful for a network daemon (e.g. a simplified
anonymous FTP
Paul Menage wrote:
This could be regarded as the wrong way to solve such a problem, but
this kind of bug seems to be occurring often enough on BugTraq that it
might be useful if you don't have the resources to do a full security
audit on your program (or if the source to some of your
You need to be root to do mknod. You need to do mknod to create /dev/zero.
You need /dev/zero to get anywhere near the normal behaviour of the system.
Sure, but we're not necessarily looking for a system that behaves
normally in all aspects. The example given was that of a paranoid
network
Some of the comments in and before handle_pte_fault() are obsolete or
misleading:
- page_table_lock has been pushed up into handle_mm_fault() and down
into the various do_xxx_page() handlers.
- mmap_sem protects the adding of vma structures to the vmlist, not
pages to the page tables.
Paul
On 4/23/07, Vaidyanathan Srinivasan [EMAIL PROTECTED] wrote:
config CONTAINERS
- bool "Container support"
- help
- This option will let you create and manage process containers,
- which can be used to aggregate multiple processes, e.g. for
- the purposes of
On 4/23/07, Vaidyanathan Srinivasan [EMAIL PROTECTED] wrote:
Hi Paul,
In [patch 3/7] Containers (V8): Add generic multi-subsystem API to
containers, you have forcefully enabled interrupt in
container_init_subsys() with spin_unlock_irq() which breaks on PPC64.
+static void
On 3/6/07, Pavel Emelianov [EMAIL PROTECTED] wrote:
The idea is:
Task may be the entity that allocates the resources and the
entity that is a resource allocated.
When task is the first entity it may move across containers
(that is implemented in your patches). When task is a resource
it
On 3/9/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
1. What is the fundamental unit over which resource-management is
applied? Individual tasks or individual containers?
/me thinks latter.
Yes
In which case, it makes sense to stick
resource control information in the
On 3/11/07, Paul Jackson [EMAIL PROTECTED] wrote:
My current understanding of Paul Menage's container patch is that it is
a useful improvement for some of the metered classes - those that could
make good use of a file system like hierarchy for their interface.
It probably doesn't benefit all
On 3/12/07, Herbert Poetzl [EMAIL PROTECTED] wrote:
why? you simply enter that specific space and
use the existing mechanisms (netlink, proc, whatever)
to retrieve the information with _existing_ tools,
That's assuming that you're using network namespace virtualization,
with each group of
On 3/12/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
- (subjective!) If there is an existing grouping mechanism already (say
tsk->nsproxy[->pid_ns]) over which res control needs to be applied,
then the new grouping mechanism can be considered redundant (it can
On 3/15/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Thu, Mar 15, 2007 at 04:24:37AM -0700, Paul Menage wrote:
If there really was a grouping that was always guaranteed to match the
way you wanted to group tasks for e.g. resource control, then yes, it
would be great to use it. But I
On 2/12/07, Paul Jackson [EMAIL PROTECTED] wrote:
You'll have a rough time selling me on the idea that some kernel thread
should be waking up every few seconds, grabbing system-wide locks, on a
big honkin NUMA box, for the few times per hour, or less, that a cpuset is
abandoned.
I think it
On 2/12/07, Cedric Le Goater [EMAIL PROTECTED] wrote:
+#include <asm/proto.h>
I did have a problem with this include. On s390 it didn't exist so I've
just been running without it (with no problems). A quick 'find'
suggests it only exists on x86_64, so I'd expect failures on all other
On 2/12/07, Sam Vilain [EMAIL PROTECTED] wrote:
I know I'm a bit out of touch, but AIUI the NSProxy *is* the container.
We decided a long time ago that a container was basically just a set of
namespaces, which includes all of the subsystems you mention.
You may have done that, but the
On 2/12/07, Serge E. Hallyn [EMAIL PROTECTED] wrote:
Well it's an unfortunate conflict, but I don't see where we have any
standing to make Paul change his terminology :)
I have no huge problem with changing my terminology in the interest of
wider adoption. Container seems like an appropriate
On 2/12/07, Sam Vilain [EMAIL PROTECTED] wrote:
Ask yourself this - what do you need the container structure for so
badly, that virtualising the individual resources does not provide for?
Primarily, that otherwise every module that wants to affect/monitor
behaviour of a group of associated
On 2/12/07, Sam Vilain [EMAIL PROTECTED] wrote:
Not every module, you just make them on sensible, planned groupings.
The danger is that the container group becomes a fallback grouping for
things when people can't be bothered thinking about it properly, and
everything including the kitchen sink
On 2/12/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Mon, Feb 12, 2007 at 12:15:24AM -0800, [EMAIL PROTECTED] wrote:
+/*
+ * Call css_get() to hold a reference on the container; following a
+ * return of 0, this container subsystem state object is guaranteed
+ * not to be destroyed
On 2/12/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Mon, Feb 12, 2007 at 12:15:27AM -0800, [EMAIL PROTECTED] wrote:
This patch implements the BeanCounter resource control abstraction
over generic process containers.
Forgive my confusion, but do we really need two-levels of resource
On 2/12/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Mon, Feb 12, 2007 at 12:15:22AM -0800, [EMAIL PROTECTED] wrote:
+void container_fork(struct task_struct *child)
+{
+ task_lock(current);
Can't this be just rcu_read_lock()?
In this particular patch (which is an almost verbatim
On 2/12/07, Paul Menage [EMAIL PROTECTED] wrote:
reaches zero. RCU is still fine for reading the container_group
pointers, but it's no good for updating them, since by the time you
update it it may no longer be your container_group structure, and may
instead be about to be deleted as soon
On 2/13/07, Pavel Emelianov [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
This patch implements the BeanCounter resource control abstraction
over generic process containers. It contains the beancounter core
code, plus the numfiles resource counter. It doesn't currently contain
any of the
On 2/13/07, Pavel Emelianov [EMAIL PROTECTED] wrote:
I have an implementation that moves an arbitrary task :)
Is that the one that calls stop_machine() in order to move a task
around? That seemed a little heavyweight ...
Maybe we can do context (container-on-task) handling lockless?
What did
On 2/13/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Well, we already bump up reference count in fork() w/o grabbing those
mutexes don't we? Also if rmdir() sees container->count to be zero, then
it means no task is attached to the container. How will then a function
like bc_file_charge()
On 2/15/07, Serge E. Hallyn [EMAIL PROTECTED] wrote:
config CONTAINERS
- bool "Container support"
- help
- This option will let you create and manage process containers,
- which can be used to aggregate multiple processes, e.g. for
- the purposes of resource
On 2/19/07, Andrew Morton [EMAIL PROTECTED] wrote:
Alas, I fear this might have quite bad worst-case behaviour. One small
container which is under constant memory pressure will churn the
system-wide LRUs like mad, and will consume rather a lot of system time.
So it's a point at which container
On 2/19/07, Andrew Morton [EMAIL PROTECTED] wrote:
This output is hard to parse and to extend. I'd suggest either two
separate files, or multi-line output:
usage: %lu kB
limit: %lu kB
Two separate files would be the container usage model that I
envisaged, inherited from the way cpusets does
On 2/19/07, Kirill Korotaev [EMAIL PROTECTED] wrote:
I think it's OK for a container to consume lots of system time during
reclaim, as long as we can account that time to the container involved
(i.e. if it's done during direct reclaim rather than by something like
kswapd).
hmm, is it ok to
On 2/19/07, Balbir Singh [EMAIL PROTECTED] wrote:
More worrisome is the potential for use-after-free. What prevents the
pointer at mm->container from referring to freed memory after we've dropped
the lock?
The container cannot be freed unless all tasks holding references to it are
gone,
On 3/13/07, Dave Hansen [EMAIL PROTECTED] wrote:
How do we determine what is shared, and goes into the shared zones?
Once we've allocated a page, it's too late because we already picked.
Do we just assume all page cache is shared? Base it on filesystem,
mount, ...? Mount seems the most logical
On 4/5/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
The approach I am on currently doesn't deal with dynamically loaded
modules ..Partly because it allows subsystem ids to be compile-time
decided
Yes, that part is definitely a good idea, since it removes one of the
potential performance
On 4/6/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
This patch removes all cpuset-specific knowledge from the container
system, replacing it with a generic API that can be used by multiple
subsystems. Cpusets is adapted to be a container subsystem.
+
+ /* Set of subsystem states, one for
On 4/6/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Fri, Apr 06, 2007 at 04:32:24PM -0700, [EMAIL PROTECTED] wrote:
+static int attach_task(struct container *cont, struct task_struct *tsk)
{
[snip]
+ task_lock(tsk);
You need to check here if task state is PF_EXITING and fail
On 4/10/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Is the first argument into all the callbacks, struct container_subsys *ss,
necessary?
I added it to support library-like abstractions - where one subsystem
can have its container callbacks and file accesses all handled by a
library which
On 4/10/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
[ Sorry abt piece meal reviews, I am sending comments as and when I spot
something ]
That's no problem.
On Fri, Apr 06, 2007 at 04:32:24PM -0700, [EMAIL PROTECTED] wrote:
-void container_exit(struct task_struct *tsk)
+void
On 2/20/07, Eric W. Biederman [EMAIL PROTECTED] wrote:
Paul Menage [EMAIL PROTECTED] writes:
On 2/12/07, Sam Vilain [EMAIL PROTECTED] wrote:
I know I'm a bit out of touch, but AIUI the NSProxy *is* the container.
We decided a long time ago that a container was basically just a set
On 2/20/07, Sam Vilain [EMAIL PROTECTED] wrote:
The term "segregated group of processes" is too vague. Segregated for
what? What is the kernel supposed to do with this information?
The generic part of the kernel just keeps track of the fact that
they're segregated (and their children, etc).
On 2/20/07, Eric W. Biederman [EMAIL PROTECTED] wrote:
Sam said the NSProxy *is* the container. You appear to be planning
to have some namespaces, possibly not aggregated within the nsproxy
(pid namespace?) but are you planning to have some higher-level
container object that aggregates the
On 2/20/07, Sam Vilain [EMAIL PROTECTED] wrote:
I don't necessarily agree with the 'hierarchy' bit. It doesn't have to
be so segregated. But I think we already covered that in this thread.
OK, but it's much easier to use a hierarchical system as a flat system
(just don't create children)
On 2/20/07, Sam Vilain [EMAIL PROTECTED] wrote:
Paul Menage wrote:
No. A reverse mapping is not needed and is not interesting.
... to you.
You're missing the point of Eric's next sentence. If you can achieve
everything you need to achieve and get all the information you are after
without
Hi Vatsa,
Sorry for the delayed reply - the last week has been very busy ...
On 3/1/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Paul,
Based on some of the feedback to container patches, I have
respun them to avoid the container structure abstraction and instead use
nsproxy
Hi Pavel,
On 3/6/07, Pavel Emelianov [EMAIL PROTECTED] wrote:
diff -upr linux-2.6.20.orig/include/linux/sched.h
linux-2.6.20-0/include/linux/sched.h
--- linux-2.6.20.orig/include/linux/sched.h	2007-03-06 13:33:28.0 +0300
+++ linux-2.6.20-0/include/linux/sched.h	2007-03-06
On 3/6/07, Pavel Emelianov [EMAIL PROTECTED] wrote:
2. Extended containers may register themselves too late.
Kernel threads/helpers start forking, opening files
and touching pages much earlier. This patchset
workarounds this in not-so-cute manner and I'm waiting
for Paul's comments
On 3/7/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
- when you do sys_unshare() or a clone that creates new namespaces,
then the task (or its child) will get a new nsproxy that has the rcfs
subsystem state associated with the old nsproxy, and one or more
namespace pointers cloned to
On 3/7/07, Serge E. Hallyn [EMAIL PROTECTED] wrote:
Quoting Srivatsa Vaddagiri ([EMAIL PROTECTED]):
On Tue, Mar 06, 2007 at 06:32:07PM -0800, Paul Menage wrote:
I'm not really sure that I see the value of having this be part of
nsproxy rather than the previous independent container
On 3/7/07, Serge E. Hallyn [EMAIL PROTECTED] wrote:
All that being said, if it were going to save space without overly
complicating things I'm actually not opposed to using nsproxy, but it
If space-saving is the main issue, then the latest version of my
containers patches uses just a single
On 3/7/07, Eric W. Biederman [EMAIL PROTECTED] wrote:
Effectively, container_group is to container as nsproxy is to namespace.
The statement above nicely summarizes the confusion in terminology.
In the namespace world when we say "container" we mean roughly at the level
of nsproxy and
On 3/7/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Mon, Feb 12, 2007 at 12:15:23AM -0800, [EMAIL PROTECTED] wrote:
/*
@@ -913,12 +537,14 @@ static int update_nodemask(struct cpuset
int migrate;
int fudge;
int retval;
+ struct container *cont;
This seems to be
On 3/7/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Mon, Feb 12, 2007 at 12:15:23AM -0800, [EMAIL PROTECTED] wrote:
- mutex_lock(&callback_mutex);
- list_add(&cs->sibling, &cs->parent->children);
+ cont->cpuset = cs;
+ cs->container = cont;
number_of_cpusets++;
-
On 3/7/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
It makes sense in the first cpuset patch
(cpusets_using_containers.patch), but should be removed in the second
cpuset patch (multiuser_container.patch). In the 2nd patch, we use this
comparison:
if (task_cs(p) != cs)
On 3/7/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
If that is the case, I think we can push container_lock entirely inside
cpuset.c and not have others exposed to this double-lock complexity.
This is possible because cpuset.c (built on top of containers) still has
cpuset->parent and walking
On 3/7/07, Sam Vilain [EMAIL PROTECTED] wrote:
Paul Menage wrote:
In the namespace world when we say "container" we mean roughly at the level
of nsproxy and container_group.
So you're saying that a task can only be in a single system-wide container.
Nope, we didn't make the mistake
On 3/7/07, Sam Vilain [EMAIL PROTECTED] wrote:
But namespace has well-established historical semantics too - a way
of changing the mappings of local * to global objects. This
accurately describes things like resource controllers, cpusets, resource
monitoring, etc.
Sorry, I think this statement
On 3/7/07, Eric W. Biederman [EMAIL PROTECTED] wrote:
Pretty much. For most of the other cases I think we are safe referring
to them as resource controls or resource limits. I know that roughly covers
what cpusets and beancounters and ckrm currently do.
Plus resource monitoring (which may
On 3/7/07, Sam Vilain [EMAIL PROTECTED] wrote:
Sorry, I didn't realise I was talking with somebody qualified enough to
speak on behalf of the Generally Established Principles of Computer Science.
I made sure to check
http://en.wikipedia.org/wiki/Namespace
On 3/7/07, Eric W. Biederman [EMAIL PROTECTED] wrote:
Please next time this kind of patch is posted add a description of
what is happening and why. I have yet to see people explain why
this is a good idea. Why the current semantics were chosen.
OK. I thought that the descriptions in my last
not the one pushing to move them into ns_proxy.
These patches are all Srivatsa's work. Despite that fact that they say
Signed-off-by: Paul Menage, I'd never seen them before they were
posted to LKML, and I'm not sure that they're the right approach.
(Although some form of unification might be good
On 3/8/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Wed, Mar 07, 2007 at 12:50:03PM -0800, Paul Menage wrote:
The callback mutex (which is what container_lock() actually locks) is
also used to synchronize fork/exit against subsystem additions, in the
event that some subsystem has
On 4/3/07, Serge E. Hallyn [EMAIL PROTECTED] wrote:
But frankly I don't know where we stand right now wrt the containers
patches. Do most people want to go with Vatsa's latest version moving
containers into nsproxy? Has any other development been going on?
Paul, have you made any updates?
On 4/3/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Tue, Apr 03, 2007 at 08:45:37AM -0700, Paul Menage wrote:
Whilst I've got no objection in general to using nsproxy rather than
the container_group object that I introduced in my latest patches,
So are you saying let's (re-)use tsk
On 4/3/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Tue, Apr 03, 2007 at 09:52:35AM -0700, Paul Menage wrote:
I'm not saying let's use nsproxy - I'm not yet convinced that the
lifetime/mutation/correlation rate of a pointer in an nsproxy is
likely to be the same as for a container
On 4/3/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Tue, Apr 03, 2007 at 10:10:35AM -0700, Paul Menage wrote:
Agreed. So I'm not saying it's fundamentally a bad idea - just that
merging container_group and nsproxy is a fairly simple space
optimization that could easily be done later
On 4/3/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Hmm no .. I currently have nsproxy having just M additional pointers, where
M is the maximum number of resource controllers and a single dentry
pointer.
So how do you implement something like the /proc/PID/container info
file in my
On 4/3/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
(Or more generally, tell which container a task is
in for a given hierarchy?)
Why is the hierarchy bit important here? Usually controllers need to
know "tell me what cpuset this task belongs to", which is answered
by
On 4/3/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
User space queries like "what is the cpuset to which this task belongs",
where the answer needs to be something of the form "/dev/cpuset/C1"?
The patches address that requirement atm by having a dentry pointer in
struct cpuset itself.
Have you
On 4/3/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Tue, Apr 03, 2007 at 09:04:59PM -0700, Paul Menage wrote:
Have you posted the cpuset implementation over your system yet?
Yep, here:
http://lists.linux-foundation.org/pipermail/containers/2007-March/001497.html
For some reason
On 4/4/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Wed, Apr 04, 2007 at 12:00:07AM -0700, Paul Menage wrote:
OK, looking at that, I see a few problems related to the use of
nsproxy and lack of a container object:
Before we (and everyone else!) gets lost in this thread, let me say
On 4/4/07, Eric W. Biederman [EMAIL PROTECTED] wrote:
In addition there appear to be some weird assumptions (an array with
one member per task_struct) in the group. The pid limit allows
us millions of task_structs if the user wants it. A several megabyte
array sounds like a completely
On 4/4/07, Paul Menage [EMAIL PROTECTED] wrote:
The current code creates such arrays when it needs an atomic snapshot
of the set of tasks in the container (e.g. for reporting them to
userspace or updating the mempolicies of all the tasks in the case of
cpusets). It may be possible to do
On 4/4/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
- how do you handle additional reference counts on subsystems? E.g.
beancounters wants to be able to associate each file with the
container that owns it. You need to be able to lock out subsystems
from taking new reference counts on an
On 3/26/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Sun, Mar 25, 2007 at 12:50:25PM -0700, Paul Jackson wrote:
Is there perhaps another race here?
Yes, we have!
Modified patch below. Compile/boot tested on a x86_64 box.
Currently cpuset_exit() changes the exiting task's ->cpuset
On 4/4/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Wed, Apr 04, 2007 at 07:57:40PM -0700, Paul Menage wrote:
Firstly, this is not a unique problem introduced by using ->nsproxy.
Secondly we have discussed this to some extent before
(http://lkml.org/lkml/2007/2/13/122). Essentially if we
On 4/5/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Wed, Apr 04, 2007 at 10:55:01PM -0700, Paul Menage wrote:
@@ -1257,8 +1260,8 @@ static int attach_task(struct cpuset *cs
put_task_struct(tsk);
synchronize_rcu();
- if (atomic_dec_and_test(&oldcs->count
On 4/5/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Hmm yes .. I am surprised we aren't doing a synchronize_rcu in
cpuset_rmdir() before dropping the dentry. Did you want to send a patch
for that?
Currently cpuset_exit() isn't in a rcu section so it wouldn't help.
But this ceases to be a
On 4/5/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
You mean dentry->d_fsdata pointing to nsproxy should take a ref count on
nsproxy? afaics it is not needed as long as you first drop the dentry
before freeing associated nsproxy.
You get the nsproxy object from dup_namespaces(), which will
On 4/5/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
If the container directory were to have no refcount on the nsproxy, so
the initial refcount was 0,
No it should be 1.
mkdir H1/foo
rcfs_create()
ns = dup_namespaces(parent);
On 9/15/07, Paul Jackson [EMAIL PROTECTED] wrote:
From: Paul Jackson [EMAIL PROTECTED]
Paul Menage - in pre-container cpusets, a few config files enabled
cpusets by default. Could you blend the following patch into your
container patch set, so that cpusets continue to be configured
On 9/15/07, Andrew Morton [EMAIL PROTECTED] wrote:
+ BUG_ON(!atomic_read(&dentry->d_count));
repeat:
if (atomic_read(&dentry->d_count) == 1)
might_sleep();
eek, much too aggressive.
How about the equivalent BUG_ON() in dget()? I figure that they ought
to both be
On 9/15/07, Andrew Morton [EMAIL PROTECTED] wrote:
Yeah. Bug, surely. But I guess it's always been there.
What are the implications of this for cpusets-via-containers?
I don't think it should be any different from the previous version - I
tried to avoid touching those bits of cpusets where
This is already fixed in -mm - see
task-containersv11-basic-task-container-framework-containers-fix-refcount-bug.patch
task-containersv11-add-container_clone-interface-containers-fix-refcount-bug.patch
Paul
On 9/15/07, Paul Jackson [EMAIL PROTECTED] wrote:
Paul Menage,
When I run a cpuset
dentry.
This patch removes container_get_dentry() and replaces it with direct
calls to lookup_one_len(); the initialization of containerfs dentry
ops is done now in container_create_file() at dentry creation time.
Signed-off-by: Paul Menage [EMAIL PROTECTED]
---
kernel/container.c | 26
This example subsystem exports debugging information as an aid to diagnosing
refcount leaks, etc, in the cgroup framework.
Signed-off-by: Paul Menage [EMAIL PROTECTED]
---
include/linux/cgroup_subsys.h |4 +
init/Kconfig | 10 ++
kernel/Makefile |1
Adds the necessary hooks to the fork() and exit() paths to ensure that
new children inherit their parent's cgroup assignments, and that exiting
processes release reference counts on their cgroups.
Signed-off-by: Paul Menage [EMAIL PROTECTED]
---
include/linux/cgroup.h |6 +
kernel/cgroup.c
From: Balbir Singh [EMAIL PROTECTED]
(container->cgroup renaming by Paul Menage [EMAIL PROTECTED])
This patch is inspired by the discussion at
http://lkml.org/lkml/2007/4/11/187 and implements per cgroup statistics
as suggested by Andrew Morton in http://lkml.org/lkml/2007/4/11/263. The
patch
From: Balbir Singh [EMAIL PROTECTED]
(container->cgroup renaming by Paul Menage [EMAIL PROTECTED])
Nick Piggin pointed out that swap cache and page cache addition routines
could be called from non GFP_KERNEL contexts. This patch makes the
charging routine aware of the gfp context. Charging might
From: Balbir Singh [EMAIL PROTECTED]
(container->cgroup renaming by Paul Menage [EMAIL PROTECTED])
Choose if we want cached pages to be accounted or not. By default both are
accounted for. A new set of tunables are added.
echo -n 1 > mem_control_type
switches the accounting to account for only
From: Balbir Singh [EMAIL PROTECTED]
(container->cgroup renaming by Paul Menage [EMAIL PROTECTED])
Add the page_cgroup to the per cgroup LRU. The reclaim algorithm has
been modified to make isolate_lru_pages() a pluggable component. The
scan_control data structure now accepts the cgroup
Signed-off-by: Paul Menage [EMAIL PROTECTED]
---
include/linux/cgroup_subsys.h |6
include/linux/cpu_acct.h | 14 ++
init/Kconfig |7 +
kernel/Makefile |1
kernel/cpu_acct.c| 186 +
kernel/sched.c
Add:
/proc/cgroups - general system info
/proc/*/cgroup - per-task cgroup membership info
Signed-off-by: Paul Menage [EMAIL PROTECTED]
---
fs/proc/base.c|7 +
include/linux/cgroup.h |2
kernel/cgroup.c| 132
3 files changed
(first use in
this function)
kernel/cgroup.c:574: error: (Each undeclared identifier is reported only once)
Signed-off-by: Andrew Morton [EMAIL PROTECTED]
Signed-off-by: Paul Menage [EMAIL PROTECTED]
---
kernel/cgroup.c |1 +
1 file changed, 1 insertion(+)
diff -puN
kernel/cgroup.c~task
From: Balbir Singh [EMAIL PROTECTED]
(container->cgroup renaming by Paul Menage [EMAIL PROTECTED])
Setup the memory cgroup and add basic hooks and controls to integrate
and work with the cgroup.
Signed-off-by: Balbir Singh [EMAIL PROTECTED]
Signed-off-by: Paul Menage [EMAIL PROTECTED]
From: Andrew Morton [EMAIL PROTECTED]
(container->cgroup renaming by Paul Menage [EMAIL PROTECTED])
mm/memcontrol.c: In function 'mem_cgroup_charge':
mm/memcontrol.c:337: error: implicit declaration of function 'congestion_wait'
Signed-off-by: Andrew Morton [EMAIL PROTECTED]
Signed-off-by: Paul
From: Balbir Singh [EMAIL PROTECTED]
(container->cgroup renaming by Paul Menage [EMAIL PROTECTED])
Signed-off-by: Balbir Singh [EMAIL PROTECTED]
Signed-off-by: Paul Menage [EMAIL PROTECTED]
---
Documentation/controllers/memory.txt | 259 +
1 files changed, 259
Handle reading /proc/self/cpuset when cpusets isn't mounted.
Signed-off-by: Paul Menage [EMAIL PROTECTED]
---
kernel/cgroup.c |9 +
1 file changed, 9 insertions(+)
diff -puN kernel/cgroup.c~task-cgroupsv11-basic-task-cgroup-framework-fix
kernel/cgroup.c
--- a/kernel/cgroup.c~task
assignments, this reduces
overall space usage and keeps the size of the task_struct down (three pointers
added to task_struct compared to a non-cgroups kernel, no matter how many
subsystems are registered).
Signed-off-by: Paul Menage [EMAIL PROTECTED]
---
Documentation/cgroups.txt | 14