- changed from u64 to s64
why?
+/*
+ * For batching mem_cgroup_charge_statistics() (see below).
+ */
+static inline void mem_cgroup_stat_add(struct mem_cgroup_stat *stat,
+		enum mem_cgroup_stat_index idx, int val)
+{
+	int cpu = smp_processor_id();
+
On Mon, 15 Oct 2007 15:37:01 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
- changed from u64 to s64
why?
+/*
+ * For batching mem_cgroup_charge_statistics() (see below).
+ */
+static inline void mem_cgroup_stat_add(struct mem_cgroup_stat *stat,
+		enum
Daniel Lezcano wrote:
Eric W. Biederman wrote:
Daniel Lezcano [EMAIL PROTECTED] writes:
The following patches activate the multicast sockets for
the namespaces. The result is traffic going through different
namespaces. So if there are several applications
listening to the same
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Fri, 12 Oct 2007 17:00:06 +0400
Introduce the struct inet_frag_queue in include/net/inet_frag.h
file and place there all the common fields from three structs:
* struct ipq in ipv4/ip_fragment.c
* struct nf_ct_frag6_queue in
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Fri, 12 Oct 2007 17:06:13 +0400
There are some objects that are common in all the places
which are used to keep track of frag queues, they are:
* hash table
* LRU list
* rw lock
* rnd number for hash function
* the number of queues
*
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Fri, 12 Oct 2007 17:10:05 +0400
Some sysctl variables are used to tune the frag queues
management, and it will be useful to work with them in
a common way in the future, so move them into one
structure; moreover, they are the same for all the frag
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Fri, 12 Oct 2007 17:12:43 +0400
Since all the xxx_frag_kill functions now work
with the generic inet_frag_queue data type, this can
be moved into a common place.
The xxx_unlink() code is moved as well.
Signed-off-by: Pavel Emelyanov
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Fri, 12 Oct 2007 17:16:25 +0400
This code works with the generic data types as well, so
move this into inet_fragment.c
This move makes it possible to hide the secret_timer
management and the secret_rebuild routine completely in
the
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Fri, 12 Oct 2007 17:21:06 +0400
To make it possible we need to know the exact frag queue
size for inet_frags->mem management and two callbacks:
* to destroy the skb (optional, used in conntracks only)
* to free the queue itself (mandatory, but
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Fri, 12 Oct 2007 17:24:27 +0400
The evictors collect some statistics for ipv4 and ipv6,
so make it return the number of evicted queues and account
them all at once in the caller.
The XXX_ADD_STATS_BH() macros are just for this case,
but maybe
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Fri, 12 Oct 2007 17:27:40 +0400
After the evictor code is consolidated there is no need
to pass the extra pointer to the xxx_put() functions.
The only place where it made sense was the evictor code itself.
Maybe this change must got with
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Fri, 12 Oct 2007 17:29:01 +0400
These ones use the generic data types too, so move
them into one place.
Signed-off-by: Pavel Emelyanov [EMAIL PROTECTED]
Also applied, thanks!
___
Devel mailing list
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Fri, 12 Oct 2007 16:55:10 +0400
Patrick recently pointed out that there are three places that
perform IP fragments management. In ipv4, ipv6 and in ip6
conntracks. Looks like these places can be a bit consolidated.
The proposal is to create a
When the loopback device fails to initialize inside a new
namespace, panic() is called. Do not do it when the namespace
in question is not the init_net.
Plus clean up the error path a bit.
Signed-off-by: Pavel Emelyanov [EMAIL PROTECTED]
---
drivers/net/loopback.c | 11 +--
The difference between the two functions is the id passed to
the rt6_select, so just pass it as an extra argument from
two outer helpers.
This is minus 60 lines of code and 360 bytes of .text
Signed-off-by: Pavel Emelyanov [EMAIL PROTECTED]
---
net/ipv6/route.c | 77
The pneigh_lookup() is used to look up proxy entries and to
create them in case the lookup failed.
However, the creation code does not perform the re-lookup
after GFP_KERNEL allocation. This is done because the code
is expected to be protected with the RTNL lock, so add the
assertion (mainly to
Denis V. Lunev wrote:
Daniel Lezcano wrote:
Eric W. Biederman wrote:
Daniel Lezcano [EMAIL PROTECTED] writes:
The following patches activate the multicast sockets for
the namespaces. The result is traffic going through different
namespaces. So if there are several applications
Hugh Dickins wrote:
--- 2.6.23-rc8-mm2/mm/swapfile.c 2007-09-27 12:03:36.0 +0100
+++ linux/mm/swapfile.c 2007-10-07 14:33:05.0 +0100
@@ -507,11 +507,23 @@ unsigned int count_swap_pages(int type,
* just let do_wp_page work it out if a write is requested later -
Daniel Lezcano [EMAIL PROTECTED] writes:
Denis V. Lunev wrote:
Daniel Lezcano wrote:
Eric W. Biederman wrote:
Daniel Lezcano [EMAIL PROTECTED] writes:
The following patches activate the multicast sockets for
the namespaces. The result is traffic going through different
namespaces. So
On Mon, 15 Oct 2007, Paul Jackson wrote:
--- 2.6.23-mm1.orig/kernel/cpuset.c 2007-10-14 22:24:56.268309633 -0700
+++ 2.6.23-mm1/kernel/cpuset.c 2007-10-14 22:34:52.645364388 -0700
@@ -677,6 +677,64 @@ done:
}
/*
+ * update_cgroup_cpus_allowed(cont, cpus)
+ *
+ * Keep looping
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Mon, 15 Oct 2007 17:38:57 +0400
The pneigh_lookup() is used to look up proxy entries and to
create them in case the lookup failed.
However, the creation code does not perform the re-lookup
after GFP_KERNEL allocation. This is done because the code
is
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Mon, 15 Oct 2007 14:23:13 +0400
When the loopback device fails to initialize inside a new
namespace, panic() is called. Do not do it when the namespace
in question is not the init_net.
Plus clean up the error path a bit.
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Mon, 15 Oct 2007 15:55:21 +0400
The difference between the two functions is the id passed to
the rt6_select, so just pass it as an extra argument from
two outer helpers.
This is minus 60 lines of code and 360 bytes of .text
Signed-off-by: Pavel
Paul Jackson wrote:
Paul M, David R, others -- how does this look?
Looks plausible, although as David comments I don't think it handles a
concurrent CPU hotplug/unplug. Also I don't like the idea of doing a
cgroup_lock() across sched_setaffinity() - cgroup_lock() can be held for
relatively
+/*
+ * For batching mem_cgroup_charge_statistics() (see below).
+ */
+static inline void mem_cgroup_stat_add(struct mem_cgroup_stat *stat,
+		enum mem_cgroup_stat_index idx, int val)
+{
+	int cpu = smp_processor_id();
+	stat->cpustat[cpu].count[idx] += val;
Paul M wrote:
Here's an alternative for consideration, below.
I don't see the alternative -- I just see my patch, with the added
blurbage:
#12 - /usr/local/google/home/menage/kernel9/linux/kernel/cpuset.c
# action=edit type=text
Should I be increasing my caffeine intake?
--
Index: devel-2.6.23-rc8-mm2/mm/memcontrol.c
===================================================================
--- devel-2.6.23-rc8-mm2.orig/mm/memcontrol.c
+++ devel-2.6.23-rc8-mm2/mm/memcontrol.c
@@ -469,6 +469,7 @@ void mem_cgroup_uncharge(struct page_cgr
page = pc->page;
Paul Jackson wrote:
Paul M wrote:
Here's an alternative for consideration, below.
I don't see the alternative -- I just see my patch, with the added
blurbage:
#12 - /usr/local/google/home/menage/kernel9/linux/kernel/cpuset.c
# action=edit type=text
Should I be increasing my
Yet by not doing any locking here to prevent a cpu from being
hot-unplugged, you can race and allow the hot-unplug event to happen
before calling set_cpus_allowed(). That makes this entire function a
no-op with set_cpus_allowed() returning -EINVAL for every call, which
isn't caught, and
currently against an older kernel
ah .. which older kernel?
I tried it against the broken out 2.6.23-rc8-mm2 patch set,
inserting it before the task-containersv11-* patches, but
that blew up on me - three rejected hunks.
Any chance of getting this against a current cgroup (aka
container)
On Tue, 16 Oct 2007 07:38:23 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
+/*
+ * For batching mem_cgroup_charge_statistics() (see below).
+ */
+static inline void mem_cgroup_stat_add(struct mem_cgroup_stat *stat,
+		enum mem_cgroup_stat_index idx,
Hi,
I have joined the devel list recently. I want to contribute to OpenVZ. I have
read a few papers on OpenVZ, and one of them, Performance Evaluation of
Virtualization Technologies for Server Consolidation (HP Labs), says
that, For OpenVZ, there is no existing tool to directly measure the
Quoting [EMAIL PROTECTED] ([EMAIL PROTECTED]):
When a process in a container signals its container-init, we want
to ensure that the signal does not terminate the container-init.
i.e if the container-init has no handler for the signal, and the
signal is fatal, we want to ignore the signal.
KAMEZAWA Hiroyuki wrote:
On Tue, 16 Oct 2007 07:38:23 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
+/*
+ * For batching mem_cgroup_charge_statistics() (see below).
+ */
+static inline void mem_cgroup_stat_add(struct mem_cgroup_stat *stat,
+		enum
Will do - I just wanted to get this quickly out to show the idea
that I was working on.
Ok - good.
In the final analysis, I'll take whatever works ;).
I'll lobby for keeping the code simple (a subjective metric) and poke
what holes I can in things, and propose what alternatives I can