Re: [PATCH 2/3] mm: vmscan: get rid of DEFAULT_SEEKS and document shrink_slab logic

2014-02-06 Thread Vladimir Davydov
On 02/06/2014 12:52 AM, Andrew Morton wrote:
> On Wed, 5 Feb 2014 11:16:49 +0400 Vladimir Davydov  
> wrote:
>
>>> So why did I originally make DEFAULT_SEEKS=2?  Because I figured that to
>>> recreate (say) an inode would require a seek to the inode data then a
>>> seek back.  Is it legitimate to include the
>>> seek-back-to-what-you-were-doing-before seek in the cost of an inode
>>> reclaim?  I guess so...
>> Hmm, that explains this 2. Since we typically don't need to "seek back"
>> when recreating a cache page, as they are usually read in bunches by
>> readahead, the number of seeks to bring back a user page is 1, while the
>> number of seeks to recreate an average inode is 2, right?
> Sounds right to me.
>
>> Then to scan inodes and user pages so that they would generate
>> approximately the same number of seeks, we should calculate the number
>> of objects to scan as follows:
>>
>> nr_objects_to_scan = nr_pages_scanned / lru_pages *
>>                      nr_freeable_objects / shrinker->seeks
>>
>> where shrinker->seeks = DEFAULT_SEEKS = 2 for inodes.
> hm, I wonder if we should take the size of the object into account. 
> Should we be maximizing (memory-reclaimed / seeks-to-reestablish-it)?

I'm not sure I understand you correctly. You mean that if two slab
caches have object sizes of 1k and 2k and both need 2 seeks to recreate
an object, we should scan the 1k cache more aggressively than the 2k
one (or the other way round)? Hmm... I don't know. It depends on what
we want to achieve. But that won't balance the seeks, which is our goal
for now, IIUC.
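
(Just to make sure we are talking about the same thing: my reading of
"maximizing (memory-reclaimed / seeks-to-reestablish-it)" is that the
scan pressure would become proportional to object_size / shrinker->seeks
rather than to 1 / shrinker->seeks, i.e. something like

  delta ~ nr_pages_scanned / lru_pages *
          freeable * object_size / PAGE_SIZE / shrinker->seeks

where object_size would be a new per-shrinker value that does not exist
today. Please correct me if that is not what you meant.)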

>> But currently we
>> have four times that. I can explain why we should multiply this by 2 -
>> we do not count pages moving from active to inactive lrus in
>> nr_pages_scanned, and 2*nr_pages_scanned can be a good approximation for
>> that - but I have no idea why we multiply it by 4...
> I don't understand this code at all:
>
>   total_scan = nr;
>   delta = (4 * nr_pages_scanned) / shrinker->seeks;
>   delta *= freeable;
>   do_div(delta, lru_pages + 1);
>   total_scan += delta;
>
> If it actually makes any sense, it sorely sorely needs documentation.

To find its roots I had to checkout the linux history tree:

commit c3f4656118a78c1c294e0b4d338ac946265a822b
Author: Andrew Morton 
Date:   Mon Dec 29 23:48:44 2003 -0800

    [PATCH] shrink_slab acounts for seeks incorrectly

    wli points out that shrink_slab inverts the sense of shrinker->seeks:
    those caches which require more seeks to reestablish an object are
    shrunk harder.  That's wrong - they should be shrunk less.

    So fix that up, but scaling the result so that the patch is actually
    a no-op at this time, because all caches use DEFAULT_SEEKS (2).

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b8594827bbac..f2da3c9fb346 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -154,7 +154,7 @@ static int shrink_slab(long scanned, unsigned int gfp_mask)
 	list_for_each_entry(shrinker, &shrinker_list, list) {
 		unsigned long long delta;
 
-		delta = scanned * shrinker->seeks;
+		delta = 4 * (scanned / shrinker->seeks);
 		delta *= (*shrinker->shrinker)(0, gfp_mask);
 		do_div(delta, pages + 1);
 		shrinker->nr += delta;


So the idea was to fix the bug without introducing any functional
change. Since then we have been living with this "4", which seemingly
makes no sense. Nobody has complained, though.
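
Just to double-check the "no-op" claim with made-up numbers, here is a
trivial userspace sketch (nothing kernel-specific, all names are local
to the sketch) comparing the pre- and post-2003 formulas:

  #include <stdio.h>

  int main(void)
  {
          /* made-up figures: pages scanned, freeable objects, LRU size */
          unsigned long long scanned = 1000, freeable = 500, pages = 10000;

          for (int seeks = 1; seeks <= 4; seeks++) {
                  /* pre-2003: more seeks => shrunk harder (inverted) */
                  unsigned long long old = scanned * seeks * freeable / (pages + 1);
                  /* post-2003: more seeks => shrunk less, scaled by 4 */
                  unsigned long long new = 4 * (scanned / seeks) * freeable / (pages + 1);

                  printf("seeks=%d: old delta=%llu, new delta=%llu\n",
                         seeks, old, new);
          }
          return 0;
  }

For seeks=2 it prints the same delta for both formulas, so the scaling
did make the patch a no-op back then, while for seeks=4 the new formula
scans fewer objects instead of more - exactly the inversion wli asked
for.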

Thanks.

> David, you touched it last.  Any hints?



Re: [PATCH 2/3] mm: vmscan: get rid of DEFAULT_SEEKS and document shrink_slab logic

2014-02-05 Thread Andrew Morton
On Wed, 5 Feb 2014 11:16:49 +0400 Vladimir Davydov  
wrote:

> > So why did I originally make DEFAULT_SEEKS=2?  Because I figured that to
> > recreate (say) an inode would require a seek to the inode data then a
> > seek back.  Is it legitimate to include the
> > seek-back-to-what-you-were-doing-before seek in the cost of an inode
> > reclaim?  I guess so...
> 
> Hmm, that explains this 2. Since we typically don't need to "seek back"
> when recreating a cache page, as they are usually read in bunches by
> readahead, the number of seeks to bring back a user page is 1, while the
> number of seeks to recreate an average inode is 2, right?

Sounds right to me.

> Then to scan inodes and user pages so that they would generate
> approximately the same number of seeks, we should calculate the number
> of objects to scan as follows:
> 
> nr_objects_to_scan = nr_pages_scanned / lru_pages *
>                      nr_freeable_objects / shrinker->seeks
> 
> where shrinker->seeks = DEFAULT_SEEKS = 2 for inodes.

hm, I wonder if we should take the size of the object into account. 
Should we be maximizing (memory-reclaimed / seeks-to-reestablish-it)?

> But currently we
> have four times that. I can explain why we should multiply this by 2 -
> we do not count pages moving from active to inactive lrus in
> nr_pages_scanned, and 2*nr_pages_scanned can be a good approximation for
> that - but I have no idea why we multiply it by 4...

I don't understand this code at all:

total_scan = nr;
delta = (4 * nr_pages_scanned) / shrinker->seeks;
delta *= freeable;
do_div(delta, lru_pages + 1);
total_scan += delta;

If it actually makes any sense, it sorely sorely needs documentation.

David, you touched it last.  Any hints?


Re: [PATCH 2/3] mm: vmscan: get rid of DEFAULT_SEEKS and document shrink_slab logic

2014-02-04 Thread Vladimir Davydov
On 02/05/2014 01:58 AM, Andrew Morton wrote:
> On Fri, 17 Jan 2014 23:25:30 +0400 Vladimir Davydov  
> wrote:
>
>> Each shrinker must define the number of seeks it takes to recreate a
>> shrinkable cache object. It is used to balance slab reclaim vs page
>> reclaim: assuming it costs one seek to replace an LRU page, we age equal
>> percentages of the LRU and ageable caches. So far, everything sounds
>> clear, but the code implementing this behavior is rather confusing.
>>
>> First, there is the DEFAULT_SEEKS constant, which equals 2 for some
>> reason:
>>
>>   #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */
>>
>> Most shrinkers define `seeks' to be equal to DEFAULT_SEEKS, some use
>> DEFAULT_SEEKS*N, and there are a few that ignore it altogether. What is
>> peculiar is that the dcache and icache shrinkers have seeks=DEFAULT_SEEKS
>> although recreating an inode typically requires one seek. Does this mean
>> that we scan twice as many inodes as we should?
>>
>> Actually, no. The point is that vmscan handles DEFAULT_SEEKS as if it
>> were 1 (`delta' is the number of objects we are going to scan):
>>
>>   shrink_slab_node():
>>     delta = (4 * nr_pages_scanned) / shrinker->seeks;
>>     delta *= freeable;
>>     do_div(delta, lru_pages + 1);
>>
>> i.e.
>>
>>           2 * nr_pages_scanned    DEFAULT_SEEKS
>>   delta = -------------------- * --------------- * freeable;
>>                lru_pages         shrinker->seeks
>>
>> Here we double the number of pages scanned in order to take into account
>> moves of on-LRU pages from the inactive list to the active list, which
>> we do not count in nr_pages_scanned.
>>
>> That said, shrinker->seeks=DEFAULT_SEEKS*N is equivalent to N seeks, so
>> why on earth do we need it?
>>
>> IMO, the existence of the DEFAULT_SEEKS constant only causes confusion
>> for both users of the shrinker interface and those trying to understand
>> how slab shrinking works. The meaning of `seeks' is perfectly explained
>> by the comment next to it, and there is no need for any obscure
>> constants to use it.
>>
>> That's why I'm sending this patch, which completely removes DEFAULT_SEEKS
>> and makes all shrinkers use N instead of N*DEFAULT_SEEKS, documenting
>> the idea behind shrink_slab() along the way.
>>
>> Unfortunately, there are a few shrinkers that define seeks=1, which
>> cannot be carried over to the new interface intact, namely:
>>
>>   nfsd_reply_cache_shrinker
>>   ttm_pool_manager::mm_shrink
>>   ttm_pool_manager::mm_shrink
>>   dm_bufio_client::shrinker
>>
>> It seems to me their authors were simply misled by this mysterious
>> DEFAULT_SEEKS constant, because I've found no documentation explaining
>> why these particular caches should be scanned more aggressively than the
>> page cache and other slab caches. For them, this patch leaves seeks=1.
>> Thus, it DOES introduce a functional change: the shrinkers enumerated
>> above will be scanned half as intensively as they are now. I do not
>> think this will cause any problems, though.
>>
> um, yes.  DEFAULT_SEEKS is supposed to be "the number of seeks if you
> don't know any better".  Using DEFAULT_SEEKS*n is just daft.
>
> So why did I originally make DEFAULT_SEEKS=2?  Because I figured that to
> recreate (say) an inode would require a seek to the inode data then a
> seek back.  Is it legitimate to include the
> seek-back-to-what-you-were-doing-before seek in the cost of an inode
> reclaim?  I guess so...

Hmm, that explains this 2. Since we typically don't need to "seek back"
when recreating a cache page, as they are usually read in bunches by
readahead, the number of seeks to bring back a user page is 1, while the
number of seeks to recreate an average inode is 2, right?

Then to scan inodes and user pages so that they would generate
approximately the same number of seeks, we should calculate the number
of objects to scan as follows:

nr_objects_to_scan = nr_pages_scanned / lru_pages *
                     nr_freeable_objects / shrinker->seeks

where shrinker->seeks = DEFAULT_SEEKS = 2 for inodes. But currently we
have four times that. I can explain why we should multiply this by 2 -
we do not count pages moving from active to inactive lrus in
nr_pages_scanned, and 2*nr_pages_scanned can be a good approximation for
that - but I have no idea why we multiply it by 4...
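
(To put numbers on it: with nr_pages_scanned=100, lru_pages=10000,
nr_freeable_objects=1000 and seeks=DEFAULT_SEEKS=2, the formula above
would give 100/10000 * 1000/2 = 5 objects, while shrink_slab_node()
computes delta = (4*100/2) * 1000 / 10001 = 19, i.e. roughly four times
as much. A factor of 2 of that I can attribute to the uncounted
active->inactive moves, the remaining factor of 2 I cannot. The numbers
are of course made up, just to illustrate the discrepancy.)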

Thanks.

>
> If a filesystem were to require a seek to the superblock for every
> inode read (ok, bad example) then the cost of reestablishing that inode
> would be 3.
>
> All that being said, why did you go through and halve everything?  The
> cost of reestablishing an ext2 inode should be "2 seeks", but the patch
> makes it "1".
>



Re: [PATCH 2/3] mm: vmscan: get rid of DEFAULT_SEEKS and document shrink_slab logic

2014-02-04 Thread Andrew Morton
On Fri, 17 Jan 2014 23:25:30 +0400 Vladimir Davydov  
wrote:

> Each shrinker must define the number of seeks it takes to recreate a
> shrinkable cache object. It is used to balance slab reclaim vs page
> reclaim: assuming it costs one seek to replace an LRU page, we age equal
> percentages of the LRU and ageable caches. So far, everything sounds
> clear, but the code implementing this behavior is rather confusing.
> 
> First, there is the DEFAULT_SEEKS constant, which equals 2 for some
> reason:
> 
>   #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */
> 
> Most shrinkers define `seeks' to be equal to DEFAULT_SEEKS, some use
> DEFAULT_SEEKS*N, and there are a few that ignore it altogether. What is
> peculiar is that the dcache and icache shrinkers have seeks=DEFAULT_SEEKS
> although recreating an inode typically requires one seek. Does this mean
> that we scan twice as many inodes as we should?
> 
> Actually, no. The point is that vmscan handles DEFAULT_SEEKS as if it
> were 1 (`delta' is the number of objects we are going to scan):
> 
>   shrink_slab_node():
>     delta = (4 * nr_pages_scanned) / shrinker->seeks;
>     delta *= freeable;
>     do_div(delta, lru_pages + 1);
>
> i.e.
>
>           2 * nr_pages_scanned    DEFAULT_SEEKS
>   delta = -------------------- * --------------- * freeable;
>                lru_pages         shrinker->seeks
> 
> Here we double the number of pages scanned in order to take into account
> moves of on-LRU pages from the inactive list to the active list, which
> we do not count in nr_pages_scanned.
> 
> That said, shrinker->seeks=DEFAULT_SEEKS*N is equivalent to N seeks, so
> why on earth do we need it?
>
> IMO, the existence of the DEFAULT_SEEKS constant only causes confusion
> for both users of the shrinker interface and those trying to understand
> how slab shrinking works. The meaning of `seeks' is perfectly explained
> by the comment next to it, and there is no need for any obscure
> constants to use it.
>
> That's why I'm sending this patch, which completely removes DEFAULT_SEEKS
> and makes all shrinkers use N instead of N*DEFAULT_SEEKS, documenting
> the idea behind shrink_slab() along the way.
>
> Unfortunately, there are a few shrinkers that define seeks=1, which
> cannot be carried over to the new interface intact, namely:
> 
>   nfsd_reply_cache_shrinker
>   ttm_pool_manager::mm_shrink
>   ttm_pool_manager::mm_shrink
>   dm_bufio_client::shrinker
> 
> It seems to me their authors were simply misled by this mysterious
> DEFAULT_SEEKS constant, because I've found no documentation explaining
> why these particular caches should be scanned more aggressively than the
> page cache and other slab caches. For them, this patch leaves seeks=1.
> Thus, it DOES introduce a functional change: the shrinkers enumerated
> above will be scanned half as intensively as they are now. I do not
> think this will cause any problems, though.
> 

um, yes.  DEFAULT_SEEKS is supposed to be "the number of seeks if you
don't know any better".  Using DEFAULT_SEEKS*n is just daft.

So why did I originally make DEFAULT_SEEKS=2?  Because I figured that to
recreate (say) an inode would require a seek to the inode data then a
seek back.  Is it legitimate to include the
seek-back-to-what-you-were-doing-before seek in the cost of an inode
reclaim?  I guess so...

If a filesystem were to require a seek to the superblock for every
inode read (ok, bad example) then the cost of reestablishing that inode
would be 3.

All that being said, why did you go through and halve everything?  The
cost of reestablishing an ext2 inode should be "2 seeks", but the patch
makes it "1".



[PATCH 2/3] mm: vmscan: get rid of DEFAULT_SEEKS and document shrink_slab logic

2014-01-17 Thread Vladimir Davydov
Each shrinker must define the number of seeks it takes to recreate a
shrinkable cache object. It is used to balance slab reclaim vs page
reclaim: assuming it costs one seek to replace an LRU page, we age equal
percentages of the LRU and ageable caches. So far, everything sounds
clear, but the code implementing this behavior is rather confusing.

First, there is the DEFAULT_SEEKS constant, which equals 2 for some
reason:

  #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */

Most shrinkers define `seeks' to be equal to DEFAULT_SEEKS, some use
DEFAULT_SEEKS*N, and there are a few that ignore it altogether. What is
peculiar is that the dcache and icache shrinkers have seeks=DEFAULT_SEEKS
although recreating an inode typically requires one seek. Does this mean
that we scan twice as many inodes as we should?

Actually, no. The point is that vmscan handles DEFAULT_SEEKS as if it
were 1 (`delta' is the number of objects we are going to scan):

  shrink_slab_node():
    delta = (4 * nr_pages_scanned) / shrinker->seeks;
    delta *= freeable;
    do_div(delta, lru_pages + 1);

i.e.

          2 * nr_pages_scanned    DEFAULT_SEEKS
  delta = -------------------- * --------------- * freeable;
               lru_pages         shrinker->seeks

Here we double the number of pages scanned in order to take into account
moves of on-LRU pages from the inactive list to the active list, which
we do not count in nr_pages_scanned.
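
For example (made-up numbers, just to illustrate the formula above): if
we have scanned 100 out of 10000 LRU pages, a cache with 1000 freeable
objects and seeks=DEFAULT_SEEKS gets

  delta = 2 * 100 / 10000 * 1000 = 20

objects to scan, i.e. the cache is aged at twice the scan rate of the
LRU, the extra factor of 2 covering the uncounted active->inactive
moves.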

That said, shrinker->seeks=DEFAULT_SEEKS*N is equivalent to N seeks, so
why on earth do we need it?

IMO, the existence of the DEFAULT_SEEKS constant only causes confusion
for both users of the shrinker interface and those trying to understand
how slab shrinking works. The meaning of `seeks' is perfectly explained
by the comment next to it, and there is no need for any obscure
constants to use it.

That's why I'm sending this patch, which completely removes DEFAULT_SEEKS
and makes all shrinkers use N instead of N*DEFAULT_SEEKS, documenting
the idea behind shrink_slab() along the way.

Unfortunately, there are a few shrinkers that define seeks=1, which
cannot be carried over to the new interface intact, namely:

  nfsd_reply_cache_shrinker
  ttm_pool_manager::mm_shrink
  ttm_pool_manager::mm_shrink
  dm_bufio_client::shrinker

It seems to me their authors were simply misled by this mysterious
DEFAULT_SEEKS constant, because I've found no documentation explaining
why these particular caches should be scanned more aggressively than the
page cache and other slab caches. For them, this patch leaves seeks=1.
Thus, it DOES introduce a functional change: the shrinkers enumerated
above will be scanned half as intensively as they are now. I do not
think this will cause any problems, though.

Signed-off-by: Vladimir Davydov 
Cc: Andrew Morton 
Cc: Mel Gorman 
Cc: Michal Hocko 
Cc: Johannes Weiner 
Cc: Rik van Riel 
Cc: Dave Chinner 
Cc: Glauber Costa 
---
 arch/x86/kvm/mmu.c                                 |    2 +-
 drivers/gpu/drm/i915/i915_gem.c                    |    2 +-
 drivers/md/bcache/btree.c                          |    2 +-
 drivers/staging/android/ashmem.c                   |    2 +-
 drivers/staging/android/lowmemorykiller.c          |    2 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_pool.c     |    4 +--
 drivers/staging/lustre/lustre/obdclass/lu_object.c |    2 +-
 drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c    |    2 +-
 fs/ext4/extents_status.c                           |    2 +-
 fs/gfs2/glock.c                                    |    2 +-
 fs/gfs2/quota.c                                    |    2 +-
 fs/mbcache.c                                       |    2 +-
 fs/nfs/super.c                                     |    2 +-
 fs/quota/dquot.c                                   |    2 +-
 fs/super.c                                         |    2 +-
 fs/ubifs/super.c                                   |    2 +-
 fs/xfs/xfs_buf.c                                   |    2 +-
 fs/xfs/xfs_qm.c                                    |    2 +-
 include/linux/shrinker.h                           |    1 -
 mm/huge_memory.c                                   |    2 +-
 mm/vmscan.c                                        |   31 ++--
 net/sunrpc/auth.c                                  |    2 +-
 22 files changed, 36 insertions(+), 38 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 40772ef..b092ccc 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4445,7 +4445,7 @@ mmu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 static struct shrinker mmu_shrinker = {
.count_objects = mmu_shrink_count,
.scan_objects = mmu_shrink_scan,
-   .seeks = DEFAULT_SEEKS * 10,
+   .seeks = 10,
 };
 
 static void mmu_destroy_caches(void)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 76d3d1a..c779221 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -467