On Sat, Jun 09, 2012 at 12:25:57AM -0700, Andrew Morton wrote:
And... it seems that I misread what's going on. The individual
filesystems are doing the rcu freeing of their inodes, so it is
appropriate that they also call rcu_barrier() prior to running
kmem_cache_destroy(). Which is what
On 09/06/2012 02:28, Andrew Morton wrote:
On Fri, 8 Jun 2012 16:46:47 -0700 Linus Torvalds torva...@linux-foundation.org
wrote:
Of course, if you just mean having a VFS wrapper that does
static void vfs_inode_kmem_cache_destroy(struct kmem_cache *cachep)
{
On Sat, 09 Jun 2012 09:06:28 +0200 Marco Stornelli marco.storne...@gmail.com
wrote:
On 09/06/2012 02:28, Andrew Morton wrote:
On Fri, 8 Jun 2012 16:46:47 -0700 Linus
Torvalds torva...@linux-foundation.org wrote:
Of course, if you just mean having a VFS wrapper that does
From: Kirill A. Shutemov kirill.shute...@linux.intel.com
Sorry for resend. Original mail had too long cc list.
There's no reason to call rcu_barrier() on every deactivate_locked_super().
We only need to make sure that all delayed rcu free inodes are flushed
before we destroy related cache.
On Sat, 9 Jun 2012 00:41:03 +0300
Kirill A. Shutemov kirill.shute...@linux.intel.com wrote:
There's no reason to call rcu_barrier() on every deactivate_locked_super().
We only need to make sure that all delayed rcu free inodes are flushed
before we destroy related cache.
Removing
On Fri, Jun 08, 2012 at 03:02:53PM -0700, Andrew Morton wrote:
On Sat, 9 Jun 2012 00:41:03 +0300
Kirill A. Shutemov kirill.shute...@linux.intel.com wrote:
There's no reason to call rcu_barrier() on every deactivate_locked_super().
We only need to make sure that all delayed rcu free inodes
On Sat, Jun 09, 2012 at 01:14:46AM +0300, Kirill A. Shutemov wrote:
The implementation would be less unpleasant if we could do the
rcu_barrier() in kmem_cache_destroy(). I can't see a way of doing that
without adding a dedicated slab flag, which would require editing all
the filesystems
On Sat, 9 Jun 2012 01:14:46 +0300
Kirill A. Shutemov kirill.shute...@linux.intel.com wrote:
On Fri, Jun 08, 2012 at 03:02:53PM -0700, Andrew Morton wrote:
On Sat, 9 Jun 2012 00:41:03 +0300
Kirill A. Shutemov kirill.shute...@linux.intel.com wrote:
There's no reason to call
On Fri, Jun 08, 2012 at 03:25:50PM -0700, Andrew Morton wrote:
A neater implementation might be to add a kmem_cache* argument to
unregister_filesystem(). If that is non-NULL, unregister_filesystem()
does the rcu_barrier() and destroys the cache. That way we get to
delete (rather than add) a
On Fri, Jun 8, 2012 at 3:23 PM, Al Viro v...@zeniv.linux.org.uk wrote:
Note that module unload is *not* a hot path - not on any even remotely sane
use.
Actually, I think we've had distributions that basically did a "load
pretty much everything, and let God sort it out" approach to modules.
I
On Fri, Jun 08, 2012 at 03:25:50PM -0700, Andrew Morton wrote:
On Sat, 9 Jun 2012 01:14:46 +0300
Kirill A. Shutemov kirill.shute...@linux.intel.com wrote:
On Fri, Jun 08, 2012 at 03:02:53PM -0700, Andrew Morton wrote:
On Sat, 9 Jun 2012 00:41:03 +0300
Kirill A. Shutemov
On Sat, 9 Jun 2012 02:31:27 +0300
Kirill A. Shutemov kirill.shute...@linux.intel.com wrote:
On Fri, Jun 08, 2012 at 03:31:20PM -0700, Andrew Morton wrote:
On Fri, 8 Jun 2012 23:27:34 +0100
Al Viro v...@zeniv.linux.org.uk wrote:
On Fri, Jun 08, 2012 at 03:25:50PM -0700, Andrew Morton
On Fri, Jun 8, 2012 at 4:37 PM, Andrew Morton a...@linux-foundation.org wrote:
So how about open-coding the rcu_barrier() in btrfs and gfs2 for the
non-inode caches (which is the appropriate place), and hand the inode
cache over to the vfs for treatment (which is the appropriate place).
The
On Fri, 8 Jun 2012 16:46:47 -0700 Linus Torvalds
torva...@linux-foundation.org wrote:
Of course, if you just mean having a VFS wrapper that does
static void vfs_inode_kmem_cache_destroy(struct kmem_cache *cachep)
{
	rcu_barrier();
	kmem_cache_destroy(cachep);
}
14 matches