dropping
the refcount to zero and freeing occurring in a different context...
> + /*
> + * We have already exited the read-side of the RCU critical section
> + * before calling do_shrink_slab(); the shrinker_info may be
> + * r
> + shrinker_put(shrinker);
> + wait_for_completion(&shrinker->done);
> + }
Needs a comment explaining why we need to wait here...
> +
> down_write(&shrinker_rwsem);
> if (shrinker->flags & SHRINKER_REGISTERED) {
> - list_del(&shrinker->list);
> + /
> unsigned long ret, freed = 0;
> - int i;
> + int offset, index = 0;
>
> if (!mem_cgroup_online(memcg))
> return 0;
> @@ -419,56 +470,63 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask,
> int nid,
> if (unlikely(!inf
lowing cases.
> This commit uses the refcount+RCU method [5] proposed by Dave Chinner
> to re-implement the lockless global slab shrink. The memcg slab shrink is
> handled in the subsequent patch.
> ---
> include/linux/shrinker.h | 17 ++
>
obviously correct" than what we have now.
> So not adding that super simple
> helper is not exactly the best choice in my opinion.
Each to their own - I much prefer the existing style/API over having
to go look up a helper function every time I want to check some
random shrinker has been set up correctly.
-Dave.
--
Dave Chinner
da...@fromorbit.com
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
On Wed, Jul 26, 2023 at 05:14:09PM +0800, Qi Zheng wrote:
> On 2023/7/26 16:08, Dave Chinner wrote:
> > On Mon, Jul 24, 2023 at 05:43:51PM +0800, Qi Zheng wrote:
> > > @@ -122,6 +126,13 @@ void shrinker_free_non_registered(struct shrinker *shrinker);
> >
> We used to implement the lockless slab shrink with SRCU [2], but then
> kernel test robot reported -88.8% regression in
> stress-ng.ramfs.ops_per_sec test case [3], so we reverted it [4].
>
> This commit uses the refcount+RCU method [5] proposed by Dave Chinner
> to re-implement the lockless global slab shrink.
shrinker);
	up_write(&shrinker_rwsem);

	if (debugfs_entry)
		shrinker_debugfs_remove(debugfs_entry, debugfs_id);

	kfree(shrinker->nr_deferred);
	kfree(shrinker);
}
EXPORT_SYMBOL_GPL(shrinker_free);
--
Dave Chinner
da...@fromorbit.com
On Fri, Jun 23, 2023 at 09:10:57PM +0800, Qi Zheng wrote:
> On 2023/6/23 14:29, Dave Chinner wrote:
> > On Thu, Jun 22, 2023 at 05:12:02PM +0200, Vlastimil Babka wrote:
> > > On 6/22/23 10:53, Qi Zheng wrote:
> > Yes, I suggested the IDR route because radi
t now.
> IIUC this is why Dave in [4] suggests unifying shrink_slab() with
> shrink_slab_memcg(), as the latter doesn't iterate the list but uses IDR.
Yes, I suggested the IDR route because radix tree lookups under RCU
with reference counted objects are a known safe pattern that we can
eas
lts in...
The other advantage of this is that it will break all the existing
out of tree code and third party modules using the old API and will
no longer work with a kernel using lockless slab shrinkers. They
need to break (both at the source and binary levels) to stop bad
things from happening due t
/*
 * This is the correct multi-line comment format. Please
 * update the patch to maintain the existing comment format.
 */
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
checks into a
helper so that the filesystem code just doesn't have to care about
the details of checking for DAX+MAP_SYNC support.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
ask hard questions about this topic.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Mon, Jan 14, 2019 at 01:35:57PM -0800, Dan Williams wrote:
> On Mon, Jan 14, 2019 at 1:25 PM Dave Chinner wrote:
> >
> > On Mon, Jan 14, 2019 at 02:15:40AM -0500, Pankaj Gupta wrote:
> > >
> > > > > Until you have images (and hence host page cache) shared between multiple guests...
> It is solely the host's decision to take action on the host page cache pages.
>
> In case of virtio-pmem, the guest does not modify the host file directly, i.e. it does not
> perform hole punch & truncation operations directly on the host file.
... this will no longer be true, and the nuclear landmine...
s. If the guests can then, in any way, control eviction of the
pages from the host cache, then we have a guest-to-guest information
leak channel.
i.e. it's something we need to be aware of and really careful about
enabling infrastructure that /will/ be abused if guests can find a
way to influence...
On Sun, Jan 13, 2019 at 03:38:21PM -0800, Matthew Wilcox wrote:
> On Mon, Jan 14, 2019 at 10:29:02AM +1100, Dave Chinner wrote:
> > Until you have images (and hence host page cache) shared between
> > multiple guests. People will want to do this, because it means they
> > o
> We are also planning to support the qcow2 sparse image format on the
> host side with virtio-pmem.
So you're going to be remapping a huge number of disjoint regions
into a linear pmem mapping? ISTR discussions about similar things
for virtio+
I might be wrong, but if I'm not we're going to have to be very
careful about how guest VMs can access and manipulate host side
resources like the page cache.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
struct some_other_struct {
	struct vq vq[MAX_NUM_VQ];
};
This keeps locality to objects within a queue, but separates each
queue onto its own cacheline.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Fri, Nov 07, 2008 at 11:31:44AM +0100, Peter Zijlstra wrote:
On Fri, 2008-11-07 at 11:41 +1100, Dave Chinner wrote:
On Thu, Nov 06, 2008 at 06:11:27PM +0100, Peter Zijlstra wrote:
On Thu, 2008-11-06 at 11:57 -0500, Rik van Riel wrote:
Peter Zijlstra wrote:
The only real