Re: [PATCH] zswap: Same-filled pages handling

2017-11-28 Thread Dan Streetman
On Mon, Nov 20, 2017 at 6:46 PM, Andrew Morton
 wrote:
>
> On Wed, 18 Oct 2017 10:48:32 + Srividya Desireddy 
>  wrote:
>
> > +/* Enable/disable handling same-value filled pages (enabled by default) */
> > +static bool zswap_same_filled_pages_enabled = true;
> > +module_param_named(same_filled_pages_enabled, zswap_same_filled_pages_enabled,
> > +  bool, 0644);
>
> Do we actually need this?  Being able to disable the new feature shows
> a certain lack of confidence ;) I guess we can remove it later as that
> confidence grows.

No, it's not absolutely needed to have the param to enable/disable the
feature, but my concern is around how many pages actually benefit from
this, since it adds some overhead to check every page - the usefulness
of the feature depends entirely on the system workload.  So having a
way to disable it is helpful, for use cases that know it won't benefit
them.

>
> Please send a patch to document this parameter in
> Documentation/vm/zswap.txt.  And if you have time, please check that
> the rest of that file is up-to-date?

Srividya, can you send a patch to document this feature, please?

I'll check the rest of the file is correct.

>
> Thanks.
>


Re: [PATCH] zswap: Same-filled pages handling

2017-11-20 Thread Andrew Morton
On Wed, 18 Oct 2017 10:48:32 + Srividya Desireddy  
wrote:

> +/* Enable/disable handling same-value filled pages (enabled by default) */
> +static bool zswap_same_filled_pages_enabled = true;
> +module_param_named(same_filled_pages_enabled, zswap_same_filled_pages_enabled,
> +  bool, 0644);

Do we actually need this?  Being able to disable the new feature shows
a certain lack of confidence ;) I guess we can remove it later as that
confidence grows.

Please send a patch to document this parameter in
Documentation/vm/zswap.txt.  And if you have time, please check that
the rest of that file is up-to-date?

Thanks.



Re: [PATCH] zswap: Same-filled pages handling

2017-11-17 Thread Dan Streetman
On Thu, Nov 2, 2017 at 11:08 AM, Srividya Desireddy
 wrote:
>
> On Wed, Oct 19, 2017 at 6:38 AM, Matthew Wilcox wrote:
>> On Thu, Oct 19, 2017 at 12:31:18AM +0300, Timofey Titovets wrote:
>>> > +static void zswap_fill_page(void *ptr, unsigned long value)
>>> > +{
>>> > +   unsigned int pos;
>>> > +   unsigned long *page;
>>> > +
>>> > +   page = (unsigned long *)ptr;
>>> > +   if (value == 0)
>>> > +   memset(page, 0, PAGE_SIZE);
>>> > +   else {
>>> > +   for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++)
>>> > +   page[pos] = value;
>>> > +   }
>>> > +}
>>>
>>> Same here, but with memcpy().
>>
>>No.  Use memset_l which is optimised for this specific job.
>
> I have tested this patch using the memset_l() function in zswap_fill_page() on
> an x86 64-bit system with 2GB RAM. The performance remains the same.
> But, the memset_l() function might be optimised in the future.
> @Seth Jennings/Dan Streetman:  Should I use the memset_l() function in this patch?

my testing also showed minimal, if any, difference when using
memset_l(), but it's simpler code and should never be slower than
looping.  I'll ack it if you want to send an additional patch making
this change (on top of the one I already acked).

>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majord...@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: em...@kvack.org


Re: [PATCH] zswap: Same-filled pages handling

2017-11-17 Thread Dan Streetman
On Wed, Oct 18, 2017 at 5:31 PM, Timofey Titovets  wrote:
>> +static int zswap_is_page_same_filled(void *ptr, unsigned long *value)
>> +{
>> +   unsigned int pos;
>> +   unsigned long *page;
>> +
>> +   page = (unsigned long *)ptr;
>> +   for (pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++) {
>> +   if (page[pos] != page[0])
>> +   return 0;
>> +   }
>> +   *value = page[0];
>> +   return 1;
>> +}
>> +
>
> In theory you can speed up that check with memcmp(),
> doing something like: first
> memcmp(ptr, ptr + PAGE_SIZE/sizeof(*page)/2, PAGE_SIZE/2);
> then compare 1/4 with 2/4,
> then 1/8 with 2/8,
> and after that check against the pattern only on the first 512 bytes.
>
> Just because memcmp() on modern CPUs is crazy fast.
> That can easily make your check less expensive.

I did check this, and it is actually significantly worse; keep in mind
that doing it that way makes for a smaller loop, but it actually does
more memory comparisons.
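To make the comparison concrete, here is a hedged userspace sketch of both checks: the patch's straightforward word scan, and the halving-memcmp variant proposed above. The function names and the fixed 4KB PAGE_SIZE are illustrative assumptions, not the kernel code.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096	/* illustrative; the kernel macro is per-arch */

/* Straightforward scan, as in the patch: compare every long to the first. */
static int is_same_filled_scan(const void *ptr, unsigned long *value)
{
	const unsigned long *page = ptr;

	for (size_t pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++)
		if (page[pos] != page[0])
			return 0;
	*value = page[0];
	return 1;
}

/*
 * Halving variant sketched in the thread: compare the first half against
 * the second half, recurse into the first half down to 512 bytes, then
 * scan the remaining 512 bytes for the repeating long.  Fewer loop
 * iterations, but each memcmp() level re-reads memory already touched.
 */
static int is_same_filled_memcmp(const void *ptr, unsigned long *value)
{
	const char *p = ptr;
	const unsigned long *page = ptr;

	for (size_t half = PAGE_SIZE / 2; half >= 512; half /= 2)
		if (memcmp(p, p + half, half) != 0)
			return 0;
	for (size_t pos = 1; pos < 512 / sizeof(*page); pos++)
		if (page[pos] != page[0])
			return 0;
	*value = page[0];
	return 1;
}
```

Both functions agree on every input; the difference is in bytes touched. The scan reads each long once (4096 bytes total), while the halving version reads the whole page at the first level and then re-reads the first half at each further level (roughly 4096 + 2048 + 1024 + 512 bytes), which matches the "more memory comparisons" observation.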

>
>> +static void zswap_fill_page(void *ptr, unsigned long value)
>> +{
>> +   unsigned int pos;
>> +   unsigned long *page;
>> +
>> +   page = (unsigned long *)ptr;
>> +   if (value == 0)
>> +   memset(page, 0, PAGE_SIZE);
>> +   else {
>> +   for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++)
>> +   page[pos] = value;
>> +   }
>> +}
>
> Same here, but with memcpy().
>
> P.S.
> I'm just too busy to make a quick performance test in user space,
> but my recent experience with these CPU instructions shows that it makes sense:
> KSM patch: https://patchwork.kernel.org/patch/9980803/
> User space tests: https://github.com/Nefelim4ag/memcmpe
> PAGE_SIZE: 65536, loop count: 1966080
> memcmp:  -28, time: 3216 ms, th: 40064.644611 MiB/s
> memcmpe: -28, offset: 62232, time: 3588 ms, th: 35902.462390 MiB/s
> memcmpe: -28, offset: 62232, time: 71 ms, th: 1792233.164286 MiB/s
>
> IIRC, with code like ours, you should see ~2.5GiB/s
>
> Thanks.
> --
> Have a nice day,
> Timofey.


Re: [PATCH] zswap: Same-filled pages handling

2017-11-17 Thread Dan Streetman
On Wed, Oct 18, 2017 at 6:48 AM, Srividya Desireddy
 wrote:
>
> From: Srividya Desireddy 
> Date: Wed, 18 Oct 2017 15:39:02 +0530
> Subject: [PATCH] zswap: Same-filled pages handling
>
> Zswap is a cache which compresses the pages that are being swapped out
> and stores them into a dynamically allocated RAM-based memory pool.
> Experiments have shown that around 10-20% of pages stored in zswap
> are same-filled pages (i.e. contents of the page are all same), but
> these pages are handled as normal pages by compressing and allocating
> memory in the pool.
>
> This patch adds a check in zswap_frontswap_store() to identify a same-filled
> page before compressing it. If the page is same-filled, set
> zswap_entry.length to zero, save the same-filled value, and skip both
> compression of the page and allocation of memory in the zpool.
> In zswap_frontswap_load(), check whether the zswap_entry.length
> corresponding to the page to be loaded is zero. If it is, fill the page
> with the saved same-filled value. This saves the decompression time
> during load.
>
> On an ARM Quad Core 32-bit device with 1.5GB RAM, by launching and
> relaunching different applications, out of ~64000 pages stored in
> zswap, ~11000 pages were same-value filled pages (including zero-filled
> pages) and ~9000 pages were zero-filled pages.
>
> An average of 17% of pages (including zero-filled pages) in zswap are
> same-value filled pages and 14% of pages are zero-filled pages.
> An average of 3% of pages are same-filled non-zero pages.
>
> The below table shows the execution time profiling with the patch.
>
>   BaselineWith patch  % Improvement
> -
> *Zswap Store Time   26.5ms   18ms  32%
>  (of same value pages)
> *Zswap Load Time25.5ms   13ms  49%
>  (of same value pages)
> -
>
> On an Ubuntu PC with 2GB RAM, while executing kernel build and other test
> scripts and running multimedia applications, out of 360000 pages
> stored in zswap, 78000 (~22%) of pages were found to be same-value filled
> pages (including zero-filled pages) and 64000 (~17%) are zero-filled
> pages. So an average of 5% of pages are same-filled non-zero pages.
>
> The below table shows the execution time profiling with the patch.
>
>   BaselineWith patch  % Improvement
> -
> *Zswap Store Time   91ms74ms   19%
>  (of same value pages)
> *Zswap Load Time50ms7.5ms  85%
>  (of same value pages)
> -
>
> *The execution times may vary with test device used.

First, I'm really sorry for such a long delay in looking at this.

I did test this patch out this week, and I added some instrumentation
to check the performance impact, and tested with a small program to
try to check the best and worst cases.

When doing a lot of swap where all (or almost all) pages are
same-value, I found this patch does save both time and space,
significantly.  The exact improvement in time and space depends on
which compressor is being used, but roughly agrees with the numbers
you listed.

In the worst case situation, where all (or almost all) pages have the
same-value *except* the final long (meaning, zswap will check each
long on the entire page but then still have to pass the page to the
compressor), the same-value check is around 10-15% of the total time
spent in zswap_frontswap_store().  That's a not-insignificant amount
of time, but it's not huge.  Considering that most systems will
probably be swapping pages that aren't similar to the worst case
(although I don't have any data to know that), I'd say the improvement
is worth the possible worst-case performance impact.
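The worst case described above can be sketched in userspace. This hypothetical instrumented check (the names and fixed PAGE_SIZE are assumptions, not the kernel code) also reports how many longs were compared before the scan decided:

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096	/* illustrative */
#define NWORDS (PAGE_SIZE / sizeof(unsigned long))

/*
 * Instrumented version of the patch's check: returns whether the page is
 * same-filled and reports how many longs were compared before deciding.
 */
static int same_filled_counted(const unsigned long *page, size_t *examined)
{
	size_t pos;

	for (pos = 1; pos < NWORDS; pos++)
		if (page[pos] != page[0])
			break;
	*examined = pos;
	return pos == NWORDS;
}
```

A page that differs only in its final long forces the scan through all NWORDS - 1 comparisons and then still has to be handed to the compressor, which is where the 10-15% worst-case overhead comes from; a page that differs early bails out almost immediately.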

>
> Signed-off-by: Srividya Desireddy 

Acked-by: Dan Streetman 

> ---
>  mm/zswap.c | 77 ++
>  1 file changed, 72 insertions(+), 5 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index d39581a..4dd8b89 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -49,6 +49,8 @@
>  static u64 zswap_pool_total_size;
>  /* The number of compressed pages currently stored in zswap */
>  static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
> +/* The number of same-value filled pages currently stored in zswap */
> +static atomic_t zswap_same_filled_pages = ATOMIC_INIT(0);
>
>  /*
>   * The statistics below are not protected from concurrent access for
> @@ -116,6 +118,11 @@ static int zswap_compressor_param_set(const char *,
>  static unsigned int zswap_max_pool_percent = 20;
>  module_param_named(max_pool_percent, zswap_max_pool_percent, uint, 0644);
>
> +/* Enable/disable handling 

Re: [PATCH] zswap: Same-filled pages handling

2017-11-02 Thread Srividya Desireddy
 
On Wed, Oct 19, 2017 at 6:38 AM, Matthew Wilcox wrote: 
> On Thu, Oct 19, 2017 at 12:31:18AM +0300, Timofey Titovets wrote:
>> > +static void zswap_fill_page(void *ptr, unsigned long value)
>> > +{
>> > +   unsigned int pos;
>> > +   unsigned long *page;
>> > +
>> > +   page = (unsigned long *)ptr;
>> > +   if (value == 0)
>> > +   memset(page, 0, PAGE_SIZE);
>> > +   else {
>> > +   for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++)
>> > +   page[pos] = value;
>> > +   }
>> > +}
>> 
>> Same here, but with memcpy().
>
>No.  Use memset_l which is optimised for this specific job.

I have tested this patch using the memset_l() function in zswap_fill_page() on
an x86 64-bit system with 2GB RAM. The performance remains the same.
But, the memset_l() function might be optimised in the future.
@Seth Jennings/Dan Streetman:  Should I use the memset_l() function in this patch?
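memset_l() is kernel-internal (lib/string.c), so it cannot be benchmarked directly from userspace. As a hedged sketch, a userspace analog of zswap_fill_page() looks like this, where the explicit word loop mirrors what the generic memset_l() fallback does when no architecture override exists (names and PAGE_SIZE are illustrative assumptions):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096	/* illustrative */

/*
 * Userspace analog of zswap_fill_page(): memset() covers the common
 * zero-filled case; non-zero patterns fall back to a word-at-a-time loop,
 * which is what the generic (non-arch-optimised) memset_l() boils down to.
 */
static void fill_page(void *ptr, unsigned long value)
{
	unsigned long *page = ptr;

	if (value == 0) {
		memset(page, 0, PAGE_SIZE);
		return;
	}
	for (size_t pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++)
		page[pos] = value;
}
```

This is consistent with the measurement above: with only a generic memset_l() available, the loop and memset_l() do essentially the same work, so the performance staying the same is expected; the win would come only from a future arch-optimised memset_l().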


Re: [PATCH] zswap: Same-filled pages handling

2017-10-19 Thread Matthew Wilcox
On Wed, Oct 18, 2017 at 09:30:32PM -0700, Andi Kleen wrote:
> > Yes.  Every 64-bit repeating pattern is also a 32-bit repeating pattern.
> > Supporting a 64-bit pattern on a 32-bit kernel is painful, but it makes
> > no sense to *not* support a 64-bit pattern on a 64-bit kernel.  
> 
> But a 32bit repeating pattern is not necessarily a 64bit pattern.

Oops, I said it backwards.  What I mean is that if you have the repeating
pattern:

0x12345678 12345678 12345678 12345678 12345678 12345678

that's the same as the repeating pattern:

0x1234567812345678 1234567812345678 1234567812345678

so the 64-bit kernel is able to find all patterns that the 32-bit kernel is,
and more.
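A small userspace sketch makes this concrete: a page written as a repeating 32-bit pattern is also same-filled when scanned 64 bits at a time, while a genuine 64-bit pattern is invisible to a 32-bit-only scan. The helper names and fixed PAGE_SIZE are hypothetical, not kernel code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096	/* illustrative */

/* Same-filled check at 32-bit granularity. */
static int same_filled_32(const void *ptr, uint32_t *value)
{
	const uint32_t *page = ptr;

	for (size_t pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++)
		if (page[pos] != page[0])
			return 0;
	*value = page[0];
	return 1;
}

/* Same-filled check at 64-bit granularity, as a 64-bit kernel would do. */
static int same_filled_64(const void *ptr, uint64_t *value)
{
	const uint64_t *page = ptr;

	for (size_t pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++)
		if (page[pos] != page[0])
			return 0;
	*value = page[0];
	return 1;
}
```

Every page the 32-bit check accepts is also accepted by the 64-bit check, but a page of repeating 0xAAAAAAAABBBBBBBB longs passes only the 64-bit one.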



Re: [PATCH] zswap: Same-filled pages handling

2017-10-18 Thread Andi Kleen
> Yes.  Every 64-bit repeating pattern is also a 32-bit repeating pattern.
> Supporting a 64-bit pattern on a 32-bit kernel is painful, but it makes
> no sense to *not* support a 64-bit pattern on a 64-bit kernel.  

But a 32bit repeating pattern is not necessarily a 64bit pattern.

>This is the same approach used in zram, fwiw.

Sounds bogus.

-Andi


Re: [PATCH] zswap: Same-filled pages handling

2017-10-18 Thread Matthew Wilcox
On Wed, Oct 18, 2017 at 01:43:10PM -0700, Andi Kleen wrote:
> > +static int zswap_is_page_same_filled(void *ptr, unsigned long *value)
> > +{
> > +   unsigned int pos;
> > +   unsigned long *page;
> > +
> > +   page = (unsigned long *)ptr;
> > +   for (pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++) {
> > +   if (page[pos] != page[0])
> > +   return 0;
> > +   }
> 
> So on 32bit it checks for 32bit repeating values and on 64bit
> for 64bit repeating values. Does that make sense?

Yes.  Every 64-bit repeating pattern is also a 32-bit repeating pattern.
Supporting a 64-bit pattern on a 32-bit kernel is painful, but it makes
no sense to *not* support a 64-bit pattern on a 64-bit kernel.  This is
the same approach used in zram, fwiw.



Re: [PATCH] zswap: Same-filled pages handling

2017-10-18 Thread Matthew Wilcox
On Thu, Oct 19, 2017 at 12:31:18AM +0300, Timofey Titovets wrote:
> > +static void zswap_fill_page(void *ptr, unsigned long value)
> > +{
> > +   unsigned int pos;
> > +   unsigned long *page;
> > +
> > +   page = (unsigned long *)ptr;
> > +   if (value == 0)
> > +   memset(page, 0, PAGE_SIZE);
> > +   else {
> > +   for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++)
> > +   page[pos] = value;
> > +   }
> > +}
> 
> Same here, but with memcpy().

No.  Use memset_l which is optimised for this specific job.


Re: [PATCH] zswap: Same-filled pages handling

2017-10-18 Thread Timofey Titovets
> +static int zswap_is_page_same_filled(void *ptr, unsigned long *value)
> +{
> +   unsigned int pos;
> +   unsigned long *page;
> +
> +   page = (unsigned long *)ptr;
> +   for (pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++) {
> +   if (page[pos] != page[0])
> +   return 0;
> +   }
> +   *value = page[0];
> +   return 1;
> +}
> +

In theory you can speed up that check with memcmp(),
doing something like: first
memcmp(ptr, ptr + PAGE_SIZE/sizeof(*page)/2, PAGE_SIZE/2);
then compare 1/4 with 2/4,
then 1/8 with 2/8,
and after that check against the pattern only on the first 512 bytes.

Just because memcmp() on modern CPUs is crazy fast.
That can easily make your check less expensive.

> +static void zswap_fill_page(void *ptr, unsigned long value)
> +{
> +   unsigned int pos;
> +   unsigned long *page;
> +
> +   page = (unsigned long *)ptr;
> +   if (value == 0)
> +   memset(page, 0, PAGE_SIZE);
> +   else {
> +   for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++)
> +   page[pos] = value;
> +   }
> +}

Same here, but with memcpy().

P.S.
I'm just too busy to make a quick performance test in user space,
but my recent experience with these CPU instructions shows that it makes sense:
KSM patch: https://patchwork.kernel.org/patch/9980803/
User space tests: https://github.com/Nefelim4ag/memcmpe
PAGE_SIZE: 65536, loop count: 1966080
memcmp:  -28, time: 3216 ms, th: 40064.644611 MiB/s
memcmpe: -28, offset: 62232, time: 3588 ms, th: 35902.462390 MiB/s
memcmpe: -28, offset: 62232, time: 71 ms, th: 1792233.164286 MiB/s

IIRC, with code like ours, you should see ~2.5GiB/s

Thanks.
-- 
Have a nice day,
Timofey.


Re: [PATCH] zswap: Same-filled pages handling

2017-10-18 Thread Andi Kleen
Srividya Desireddy  writes:
>
> On an ARM Quad Core 32-bit device with 1.5GB RAM, by launching and
> relaunching different applications, out of ~64000 pages stored in
> zswap, ~11000 pages were same-value filled pages (including zero-filled
> pages) and ~9000 pages were zero-filled pages.

What are the values for the non-zero cases?

> +static int zswap_is_page_same_filled(void *ptr, unsigned long *value)
> +{
> + unsigned int pos;
> + unsigned long *page;
> +
> + page = (unsigned long *)ptr;
> + for (pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++) {
> + if (page[pos] != page[0])
> + return 0;
> + }

So on 32bit it checks for 32bit repeating values and on 64bit
for 64bit repeating values. Does that make sense?

Did you test the patch on a 64bit system?

Overall I would expect this extra pass to be fairly expensive. It may
be better to add some special check to the compressor, and let
it abort if it sees a string of same values, and only do the check
then.

-Andi


Re: [PATCH] zswap: Same-filled pages handling

2017-10-18 Thread Srividya Desireddy
On Wed, Oct 18, 2017 at 7:41 PM, Matthew Wilcox wrote: 
> On Wed, Oct 18, 2017 at 04:33:43PM +0300, Timofey Titovets wrote:
>> 2017-10-18 15:34 GMT+03:00 Matthew Wilcox :
>> > On Wed, Oct 18, 2017 at 10:48:32AM +, Srividya Desireddy wrote:
>> >> +static void zswap_fill_page(void *ptr, unsigned long value)
>> >> +{
>> >> + unsigned int pos;
>> >> + unsigned long *page;
>> >> +
>> >> + page = (unsigned long *)ptr;
>> >> + if (value == 0)
>> >> + memset(page, 0, PAGE_SIZE);
>> >> + else {
>> >> + for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++)
>> >> + page[pos] = value;
>> >> + }
>> >> +}
>> >
>> > I think you meant:
>> >
>> > static void zswap_fill_page(void *ptr, unsigned long value)
>> > {
>> > memset_l(ptr, value, PAGE_SIZE / sizeof(unsigned long));
>> > }
>> 
>> IIRC kernel have special zero page, and if i understand correctly.
>> You can map all zero pages to that zero page and not touch zswap completely.
>> (Your situation look like some KSM case (i.e. KSM can handle pages
>> with same content), but i'm not sure if that applicable there)
> 
>You're confused by the word "same".  What Srividya meant was that the
>page is filled with a pattern, eg 0xfffefffefffefffe..., not that it is
>the same as any other page.

In the kernel there is a special zero page (empty_zero_page), generally
allocated in paging_init(), to which all zero pages can be mapped. But
same-value-filled pages, including zero pages, exist in memory because
applications may initialize allocated pages with a value and never use
them, or because the content written to the pages during execution is
itself same-valued, as with multimedia data for example.

I had earlier posted a patch with a similar implementation of the KSM
concept for Zswap:
https://lkml.org/lkml/2016/8/17/171
https://lkml.org/lkml/2017/2/17/612

- Srividya


Re: [PATCH] zswap: Same-filled pages handling

2017-10-18 Thread Matthew Wilcox
On Wed, Oct 18, 2017 at 04:33:43PM +0300, Timofey Titovets wrote:
> 2017-10-18 15:34 GMT+03:00 Matthew Wilcox :
> > On Wed, Oct 18, 2017 at 10:48:32AM +, Srividya Desireddy wrote:
> >> +static void zswap_fill_page(void *ptr, unsigned long value)
> >> +{
> >> + unsigned int pos;
> >> + unsigned long *page;
> >> +
> >> + page = (unsigned long *)ptr;
> >> + if (value == 0)
> >> + memset(page, 0, PAGE_SIZE);
> >> + else {
> >> + for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++)
> >> + page[pos] = value;
> >> + }
> >> +}
> >
> > I think you meant:
> >
> > static void zswap_fill_page(void *ptr, unsigned long value)
> > {
> > memset_l(ptr, value, PAGE_SIZE / sizeof(unsigned long));
> > }
> 
> IIRC kernel have special zero page, and if i understand correctly.
> You can map all zero pages to that zero page and not touch zswap completely.
> (Your situation look like some KSM case (i.e. KSM can handle pages
> with same content), but i'm not sure if that applicable there)

You're confused by the word "same".  What Srividya meant was that the
page is filled with a pattern, eg 0xfffefffefffefffe..., not that it is
the same as any other page.


Re: [PATCH] zswap: Same-filled pages handling

2017-10-18 Thread Timofey Titovets
2017-10-18 15:34 GMT+03:00 Matthew Wilcox :
> On Wed, Oct 18, 2017 at 10:48:32AM +, Srividya Desireddy wrote:
>> +static void zswap_fill_page(void *ptr, unsigned long value)
>> +{
>> + unsigned int pos;
>> + unsigned long *page;
>> +
>> + page = (unsigned long *)ptr;
>> + if (value == 0)
>> + memset(page, 0, PAGE_SIZE);
>> + else {
>> + for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++)
>> + page[pos] = value;
>> + }
>> +}
>
> I think you meant:
>
> static void zswap_fill_page(void *ptr, unsigned long value)
> {
> memset_l(ptr, value, PAGE_SIZE / sizeof(unsigned long));
> }
>
> (and you should see significantly better numbers at least on x86;
> I don't know if anyone's done an arm64 version of memset_l yet).
>

IIRC the kernel has a special zero page, and if I understand correctly
you can map all zero pages to that page without touching zswap at all.
(Your situation looks like a KSM case, i.e. KSM can handle pages with
identical content, but I'm not sure whether that applies here.)

Thanks.
-- 
Have a nice day,
Timofey.


Re: [PATCH] zswap: Same-filled pages handling

2017-10-18 Thread Matthew Wilcox
On Wed, Oct 18, 2017 at 10:48:32AM +, Srividya Desireddy wrote:
> +static void zswap_fill_page(void *ptr, unsigned long value)
> +{
> + unsigned int pos;
> + unsigned long *page;
> +
> + page = (unsigned long *)ptr;
> + if (value == 0)
> + memset(page, 0, PAGE_SIZE);
> + else {
> + for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++)
> + page[pos] = value;
> + }
> +}

I think you meant:

static void zswap_fill_page(void *ptr, unsigned long value)
{
memset_l(ptr, value, PAGE_SIZE / sizeof(unsigned long));
}

(and you should see significantly better numbers at least on x86;
I don't know if anyone's done an arm64 version of memset_l yet).

