Trond Myklebust wrote:
On Mon, 2007-09-17 at 11:57 +0400, Pavel Emelyanov wrote:
The __mandatory_lock(inode) macro makes the same check, but
makes the code more readable.
Could we please avoid using underscores in macros. Also, why are we
breaking the usual convention of capitalising macro
Dave Hansen wrote:
On Mon, 2007-09-17 at 16:35 +0400, Pavel Emelyanov wrote:
+
+ rcu_read_lock();
+ cnt = task_kmem_container(current);
+ if (res_counter_charge(&cnt->res, s->size))
+ goto err_locked;
+
+ css_get(&cnt->css);
+ rcu_read_unlock();
+
Dave Hansen wrote:
On Mon, 2007-09-17 at 16:35 +0400, Pavel Emelyanov wrote:
The struct page gets an extra pointer (just like it has with
the RSS controller) and this pointer points to the array of
the kmem_container pointers - one for each object stored on
that page itself.
Can't these at
J. Bruce Fields wrote:
On Mon, Sep 17, 2007 at 10:37:56AM +0400, Pavel Emelyanov wrote:
J. Bruce Fields wrote:
Is there a small chance that a lock may be applied after this check:
+ mandatory = (inode->i_flock && MANDATORY_LOCK(inode));
+
but early enough that someone can still block on the
Christoph Lameter wrote:
On Mon, 17 Sep 2007, Pavel Emelyanov wrote:
If we turn accounting on for some cache and this cache
is merged with some other, the other will be notified
as well. We can solve this by disabling cache merging,
but maybe we can do it some other way.
You could
Christoph Lameter wrote:
On Mon, 17 Sep 2007, Pavel Emelyanov wrote:
struct kmem_cache kmalloc_caches[PAGE_SHIFT] __cacheline_aligned;
EXPORT_SYMBOL(kmalloc_caches);
+static inline int is_kmalloc_cache(struct kmem_cache *s)
+{
+	int km_idx;
+
+	km_idx = s - kmalloc_caches;
+
Trond Myklebust wrote:
On Mon, 2007-09-17 at 18:16 +0400, Pavel Emelyanov wrote:
Trond Myklebust wrote:
On Mon, 2007-09-17 at 12:13 +0400, Pavel Emelyanov wrote:
When the process is blocked on a mandatory lock and someone changes
the inode's permissions, so that the lock is no longer
Christoph Lameter wrote:
On Mon, 17 Sep 2007, Pavel Emelyanov wrote:
As I have already said, kmalloc caches cannot be accounted easily,
so turning the accounting on for them will fail with -EINVAL.
Turning the accounting off is possible only if the cache has
no objects. This is done so
Christoph Lameter wrote:
On Tue, 18 Sep 2007, Balbir Singh wrote:
I've wondered the same thing and asked the question. Pavel wrote
back to me saying
The pages that are full of objects are not linked in any list
in kmem_cache so we just cannot find them.
That is true for any types of
Christoph Lameter wrote:
On Mon, 17 Sep 2007, Pavel Emelyanov wrote:
@@ -1036,7 +1121,10 @@ static struct page *allocate_slab(struct
page = alloc_pages_node(node, flags, s->order);
if (!page)
-return NULL;
+goto out;
+
+if
On Mon, 10 Sep 2007 22:40:49 +0530
Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
+ tg->cfs_rq = kzalloc(sizeof(cfs_rq) * num_possible_cpus(), GFP_KERNEL);
+ if (!tg->cfs_rq)
+ goto err;
+ tg->se = kzalloc(sizeof(se) * num_possible_cpus(), GFP_KERNEL);
+ if (!tg->se)
+
I proposed introducing a list_for_each_entry_continue_reverse macro
to be used in setup_net() when unrolling the failed ->init callback.
Here is the macro and some more cleanup in setup_net() itself
to remove one variable from the stack :) The same applies to
cleanup_net() - the existing
On Tue, Sep 18, 2007 at 05:19:45PM +0900, KAMEZAWA Hiroyuki wrote:
Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
+ tg->cfs_rq = kzalloc(sizeof(cfs_rq) * num_possible_cpus(), GFP_KERNEL);
+ if (!tg->cfs_rq)
+ goto err;
+ tg->se = kzalloc(sizeof(se) * num_possible_cpus(),
This would be a good opportunity to remove the single-allocated queue struct
in netdevice (at the bottom) that we had to put in to accommodate the static
loopback. Now we can set it back to a zero element list, and have
alloc_netdev_mq() just allocate the number of queues requested, not
num_queues
This is the next step in fs/locks.c cleanup before turning
it into using the struct pid *.
This time I found that there are some places that do a
similar thing - they try to apply a lock on a file and,
on error, go to sleep until the blocker exits.
All these places can be easily consolidated,
On Tue, Sep 18, 2007 at 10:33:26AM +0400, Pavel Emelyanov wrote:
Trond Myklebust wrote:
IOW: the process that is waiting in locks_mandatory_area() will be
released as soon as the advisory lock is dropped. If that theory is
broken in practice, then that is the bug that we need to fix. We
On Tue, Sep 18, 2007 at 12:14:55PM -0400, Trond Myklebust wrote:
Note also that strictly speaking, we're not even compliant with the
System V behaviour on read() and write(). See:
http://www.unix.org.ua/orelly/networking_2ndEd/nfs/ch11_01.htm
and
On Tue, 2007-09-18 at 12:52 -0400, J. Bruce Fields wrote:
On Tue, Sep 18, 2007 at 12:14:55PM -0400, Trond Myklebust wrote:
Note also that strictly speaking, we're not even compliant with the
System V behaviour on read() and write(). See:
On Mon, 17 Sep 2007 14:03:27 -0700 Paul Menage wrote:
From: Balbir Singh [EMAIL PROTECTED]
(container-cgroup renaming by Paul Menage [EMAIL PROTECTED])
Signed-off-by: Balbir Singh [EMAIL PROTECTED]
Signed-off-by: Paul Menage [EMAIL PROTECTED]
---
Documentation/controllers/memory.txt
From: Daniel Lezcano [EMAIL PROTECTED]
If netif_carrier_off is called before register_netdev,
it will use and generate an event for a non-initialized network
device, and that leads to an Oops.
I moved the netif_carrier_off call from the setup function to after each
register_netdev call.
When I tried the veth driver, I ran into a kernel oops.
qemu login: Oops: [#1]
Modules linked in:
CPU:0
EIP:0060:[c0265c9e]Not tainted VLI
EFLAGS: 0202 (2.6.23-rc6-g754f885d-dirty #33)
EIP is at __linkwatch_run_queue+0x6a/0x175
eax: c7fc9550 ebx: 6b6b6b6b ecx: c3360c80
On Tue, Sep 18, 2007 at 12:54:56PM -0400, Trond Myklebust wrote:
On Tue, 2007-09-18 at 12:52 -0400, J. Bruce Fields wrote:
So currently there's nothing to prevent this:
- write passes locks_mandatory_area() checks
- get mandatory lock
- read old data
Oleg Nesterov [EMAIL PROTECTED] wrote:
| On 09/13, [EMAIL PROTECTED] wrote:
|
| Oleg Nesterov [EMAIL PROTECTED] wrote:
| |
| | Notes:
| |
| |- Blocked signals are never ignored, so init can still receive
| | a pending blocked signal after sigprocmask(SIG_UNBLOCK).
|
On Tue, 18 Sep 2007, Pavel Emelyanov wrote:
I meant that we cannot find the pages that are full of objects to notify
others that these ones are no longer tracked. I know that we can do
it by tracking these pages with some performance penalty, but is it
worth having the ability to turn
From: Pavel Emelyanov [EMAIL PROTECTED]
Date: Tue, 18 Sep 2007 12:06:53 +0400
I proposed introducing a list_for_each_entry_continue_reverse macro
to be used in setup_net() when unrolling the failed ->init callback.
Here is the macro and some more cleanup in the setup_net() itself
to remove