I've almost got my netns patchset rebased against Linus's latest tree.
The sysfs changes were extensive, and while I finally have something working
with them, every time I stop and think about my sysfs code I spot more issues
that need to be resolved.
With any luck I should have something I can
On Mon, Apr 02, 2007 at 02:10:29PM +0200, Andi Kleen wrote:
On Monday 02 April 2007 13:38, Alexey Dobriyan wrote:
They will be used by cpuid driver and powernow-k8 cpufreq driver.
With these changes the powernow-k8 driver could run correctly on OpenVZ kernels
with virtual CPUs enabled.
Both powernow-k8 and cpuid attempt to schedule
to the target CPU, so they should already run there. But if it is some other
CPU, why do they suddenly get a real CPU when they ask your _on_cpu()
functions? Where is the difference between these levels of virtualness?
*_on_cpu functions do
On Mon, Apr 02, 2007 at 12:02:35PM -0600, Eric W. Biederman wrote:
If we lose directories, then we don't have a way to manage the
task-group it represents through the filesystem interface, so I consider
that bad. As we agree, this will not be an issue if initrd
mounts the ns hierarchy.
On Tue, Apr 03, 2007 at 03:42:50PM +0200, Andi Kleen wrote:
Both powernow-k8 and cpuid attempt to schedule
to the target CPU, so they should already run there. But if it is some other
CPU, why do they suddenly get a real CPU when they ask your _on_cpu()
functions? Where is the difference
Quoting Eric W. Biederman ([EMAIL PROTECTED]):
Srivatsa Vaddagiri [EMAIL PROTECTED] writes:
On Mon, Apr 02, 2007 at 09:09:39AM -0500, Serge E. Hallyn wrote:
Losing the directory isn't a big deal though. And both unsharing a
namespace (which causes a ns_container_clone) and mounting the
On 4/3/07, Serge E. Hallyn [EMAIL PROTECTED] wrote:
But frankly I don't know where we stand right now wrt the containers
patches. Do most people want to go with Vatsa's latest version moving
containers into nsproxy? Has any other development been going on?
Paul, have you made any updates?
On Tue, Apr 03, 2007 at 10:32:20AM -0500, Serge E. Hallyn wrote:
But frankly I don't know where we stand right now wrt the containers
patches. Do most people want to go with Vatsa's latest version moving
containers into nsproxy?
Has any other development been going on?
I have another
On Tue, Apr 03, 2007 at 08:45:37AM -0700, Paul Menage wrote:
Whilst I've got no objection in general to using nsproxy rather than
the container_group object that I introduced in my latest patches, I
think that Vatsa's approach of losing the general container object is
flawed, since it loses
On 4/3/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Tue, Apr 03, 2007 at 08:45:37AM -0700, Paul Menage wrote:
Whilst I've got no objection in general to using nsproxy rather than
the container_group object that I introduced in my latest patches,
So are you saying let's (re-)use
Quoting Paul Menage ([EMAIL PROTECTED]):
On 4/3/07, Serge E. Hallyn [EMAIL PROTECTED] wrote:
But frankly I don't know where we stand right now wrt the containers
patches. Do most people want to go with Vatsa's latest version moving
containers into nsproxy? Has any other development been
On 4/3/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Tue, Apr 03, 2007 at 09:52:35AM -0700, Paul Menage wrote:
I'm not saying let's use nsproxy - I'm not yet convinced that the
lifetime/mutation/correlation rate of a pointer in an nsproxy is
likely to be the same as for a container
On Tue, Apr 03, 2007 at 10:10:35AM -0700, Paul Menage wrote:
Agreed. So I'm not saying it's fundamentally a bad idea - just that
merging container_group and nsproxy is a fairly simple space
optimization that could easily be done later.
IMHO, if we agree that space optimization is important,
On 4/3/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Tue, Apr 03, 2007 at 10:10:35AM -0700, Paul Menage wrote:
Agreed. So I'm not saying it's fundamentally a bad idea - just that
merging container_group and nsproxy is a fairly simple space
optimization that could easily be done later.
On 4/3/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Hmm no .. I currently have nsproxy carrying just M additional pointers, where
M is the maximum number of resource controllers, plus a single dentry
pointer.
So how do you implement something like the /proc/PID/container info
file in my
On 4/3/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
(Or more generally, tell which container a task is
in for a given hierarchy?)
Why is the hierarchy bit important here? Usually controllers need to
know "tell me what cpuset this task belongs to", which is answered
by
On Mon, 2 Apr 2007 19:03:16 +0400
Alexey Dobriyan [EMAIL PROTECTED] wrote:
+int lookup_module_symbol_attrs(unsigned long addr, unsigned long *size,
+			       unsigned long *offset, char *modname, char *name)
+{
+	struct module *mod;
+
+	mutex_lock(&module_mutex);
+
+
On Tue, Apr 03, 2007 at 10:49:49AM -0700, Paul Menage wrote:
Why is the hierarchy bit important here? Usually controllers need to
know "tell me what cpuset this task belongs to", which is answered
by tsk->nsproxy->ctlr_data[CPUSET_ID].
I was thinking of queries from userspace.
User space
vatsa wrote:
User space queries like what is the cpuset to which this task belongs,
where the answer needs to be something of the form /dev/cpuset/C1?
I think that answer should be of the form /C1, and not include the
cpuset file system mount point ... though for the purposes of the
present
On Tue, Apr 03, 2007 at 09:04:59PM -0700, Paul Menage wrote:
Have you posted the cpuset implementation over your system yet?
Yep, here:
http://lists.linux-foundation.org/pipermail/containers/2007-March/001497.html
For some reason, the above mail didn't make it onto lkml (maybe it
exceeded the