LGTM for whole series
Reviewed-by: Timofey Titovets
Tue, May 14, 2019 at 16:17, Oleksandr Natalenko :
>
> Document respective sysfs knob.
>
> Signed-off-by: Oleksandr Natalenko
> ---
> Documentation/admin-guide/mm/ksm.rst | 11 +++
> 1 file changed, 11 inserti
LGTM
Reviewed-by: Timofey Titovets
Tue, May 14, 2019 at 16:22, Aaron Tomlin :
>
> On Tue 2019-05-14 15:16 +0200, Oleksandr Natalenko wrote:
> > Present a new sysfs knob to mark task's anonymous memory as mergeable.
> >
> > To force merging task's VMAs, its PID is ech
Mon, May 13, 2019 at 14:33, Oleksandr Natalenko :
>
> Hi.
>
> On Mon, May 13, 2019 at 01:38:43PM +0300, Kirill Tkhai wrote:
> > On 10.05.2019 10:21, Oleksandr Natalenko wrote:
> > > By default, KSM works only on memory that is marked by madvise(). And the
> > > only way to get around that is to
Tue, Nov 13, 2018 at 20:59, Pavel Tatashin :
>
> On 18-11-13 15:23:50, Oleksandr Natalenko wrote:
> > Hi.
> >
> > > Yep. However, so far, it requires an application to explicitly opt in
> > > to this behavior, so it's not all that bad. Your patch would remove
> > > the requirement for
Gentle ping
2018-01-03 6:09 GMT+03:00 Timofey Titovets <nefelim...@gmail.com>:
> 1. Pickup, Sioh Lee crc32 patch, after some long conversation
> 2. Merge with my work on xxhash
> 3. Add autoselect code to choice fastest hash helper.
>
> Base idea is the same: replace jhash2 with something faster.
sertions(+), 126 deletions(-)
> delete mode 100644 fs/btrfs/hash.c
> delete mode 100644 fs/btrfs/hash.h
>
> --
> 2.7.4
>
Hi,
Two months ago I started a topic about replacing jhash with xxhash.
This is another topic: replacing in-memory hashing with xxhash,
or at least shedding some light on that.
I used a simple printk() in jhash/jhash2 to get the actual input sizes;
at least on x86_64 systems, most of the inputs are:
v6:
- Nothing, whole patchset version bump
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
include/linux/xxhash.h | 23 +++
1 file changed, 23 insertions(+)
diff --git a/include/linux/xxhash.h b/include/linux/xxhash.h
index 9e1f42cb57e9..52b073fea17f 100644
--- a/incl
_64,
that makes them cache friendly. As we don't suffer from hash collisions,
change hash type from unsigned long back to u32.
- Fix kbuild robot warning, make all local functions static
Signed-off-by: Timofey Titovets
Signed-off-by: leesioh
CC: Andrea Arcangeli
CC: linux...@kvack.or
ts.
No other problem exists.
> thanks.
>
> -sioh lee-
>
In sum, we can show that changing the hash is useful and a good performance
improvement in general,
with good potential for hardware acceleration on the CPU.
Let's wait for the advice of the mm folks;
if that's OK, we'll do the next steps if needed.
Thanks!
> 2017
JFYI, performance on a faster, more modern CPU:
Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
[ 172.651044] ksm: crc32c hash() 22633 MB/s
[ 172.776060] ksm: xxhash hash() 10920 MB/s
[ 172.776066] ksm: choice crc32c as hash function
all of fastcall()
- Don't alloc page for hash testing, use arch zero pages for that
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
Signed-off-by: leesioh <so...@os.korea.ac.kr>
CC: Andrea Arcangeli <aarca...@redhat.com>
CC: linux...@kvack.org
CC: k...@vger.kernel.
v5:
- Nothing, whole patchset version bump
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
include/linux/xxhash.h | 23 +++
1 file changed, 23 insertions(+)
diff --git a/include/linux/xxhash.h b/include/linux/xxhash.h
index 9e1f42cb57e9..52b073fea17f 100644
--- a/incl
*FACEPALM*,
Sorry, I just forgot about the numbering of the old jhash2 -> xxhash conversion.
I also picked up the xxhash patch - an arch-dependent xxhash() function that will use
the fastest algorithm for the current arch.
So the next one will be v5, as this should have been v4.
Thanks.
2017-12-29 12:52 GMT+03:00 Timofey Titovets <n
mode"
Two possible values:
- normal [default] - ksm uses only madvise()
- always [new] - ksm will scan VMAs across all processes' memory and
add them to the dedup list
Signed-off-by: Timofey Titovets
---
Documentation/vm/ksm.txt | 3 +
mm/ksm.c
est and auto choice of fastest hash function
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
Signed-off-by: leesioh <so...@os.korea.ac.kr>
CC: Andrea Arcangeli <aarca...@redhat.com>
CC: linux...@kvack.org
CC: k...@vger.kernel.org
---
mm/Kconfig | 4
n machines with "big pages (64k)".
Thanks!
2017-10-02 15:58 GMT+03:00 Timofey Titovets :
> Currently, while searching/inserting in the RB tree,
> memcmp is used to compare out-of-tree pages with in-tree pages.
>
> But on each compare step the page memcmp starts at
> zero offset, i.e
Reviewed-by: Timofey Titovets
2017-11-15 6:19 GMT+03:00 Kyeongdon Kim :
> The current ksm uses memcmp to insert into and search the 'rb_tree'.
> This incurs a very expensive computation cost.
> In order to reduce the time of this operation,
> we have added a checksum to the traversal.
*/
> lock_page(kpage);
> - stable_node = stable_tree_insert(kpage);
> + stable_node = stable_tree_insert(kpage, checksum);
> if (stable_node) {
> stable_tree_append(tree_rmap_item,
> stable_node,
>false);
> --
> 2.6.2
>
Thanks;
anyway, in general the idea looks good.
Reviewed-by: Timofey Titovets <nefelim...@gmail.com>
--
Have a nice day,
Timofey.
> +static int zswap_is_page_same_filled(void *ptr, unsigned long *value)
> +{
> + unsigned int pos;
> + unsigned long *page;
> +
> + page = (unsigned long *)ptr;
> + for (pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++) {
> + if (page[pos] != page[0])
> +
2017-10-18 15:34 GMT+03:00 Matthew Wilcox :
> On Wed, Oct 18, 2017 at 10:48:32AM +, Srividya Desireddy wrote:
>> +static void zswap_fill_page(void *ptr, unsigned long value)
>> +{
>> + unsigned int pos;
>> + unsigned long *page;
>> +
>> + page = (unsigned long *)ptr;
>> + if
that's just an RFC, i.e. does that type of optimization make sense?
Thanks.
Changes:
v1 -> v2:
Add: configurable max_offset_error
Move logic to memcmpe()
Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
---
mm/k
last start offset where there was no diff in page content.
The offset is aligned to 1024; that's something of a magic value.
With that value I get ~the same performance in the bad case (where the offset is useless)
for memcmp_pages() with the offset and without it.
Signed-off-by: Timofey Titovets
---
mm/ksm.c | 32
xxh32() - fast on both 32/64-bit platforms
xxh64() - fast only on 64-bit platforms
Create xxhash(), which will pick the fastest version
at compile time.
As the result depends on the CPU word size,
the main purpose of this is in-memory hashing.
Signed-off-by: Timofey Titovets
Acked-by: Andi Kleen
Acked
hash.c -> xxhash.h
- replace xxhash_t with 'unsigned long'
- update kerneldoc above xxhash()
Timofey Titovets (2):
xxHash: create arch dependent 32/64-bit xxhash()
KSM: Replace jhash2 with xxhash
include/linux/xxhash.h | 23 +++
mm/Kconfig | 1 +
: sleep_millisecs = 20 - default
jhash2: ~4.7%
xxhash64: ~3.3%
- 11 / 18 ~= 0.6 -> Profit: ~40%
- 3.3/4.7 ~= 0.7 -> Profit: ~30%
Signed-off-by: Timofey Titovets
Acked-by: Andi Kleen
Acked-by: Christian Borntraeger
Cc: Linux-kernel
Cc: Linux-kvm
---
mm/Kconfig | 1 +
mm
2017-09-25 17:59 GMT+03:00 Matthew Wilcox :
> On Fri, Sep 22, 2017 at 02:18:17AM +0300, Timofey Titovets wrote:
>> diff --git a/include/linux/xxhash.h b/include/linux/xxhash.h
>> index 9e1f42cb57e9..195a0ae10e9b 100644
>> --- a/include/linux/xxhash.h
>> +++ b/include
-by: Timofey Titovets
Acked-by: Andi Kleen
Cc: Linux-kernel
---
include/linux/xxhash.h | 24
lib/xxhash.c | 10 ++
2 files changed, 34 insertions(+)
diff --git a/include/linux/xxhash.h b/include/linux/xxhash.h
index 9e1f42cb57e9..195a0ae10e9b 100644
: sleep_millisecs = 20 - default
jhash2: ~4.7%
xxhash64: ~3.3%
- 11 / 18 ~= 0.6 -> Profit: ~40%
- 3.3/4.7 ~= 0.7 -> Profit: ~30%
Signed-off-by: Timofey Titovets
Acked-by: Andi Kleen
---
mm/Kconfig | 1 +
mm/ksm.c | 14 +++---
2 files changed, 8 insertions(+), 7 del
hes
Timofey Titovets (2):
xxHash: create arch dependent 32/64-bit xxhash()
KSM: Replace jhash2 with xxhash
include/linux/xxhash.h | 24
lib/xxhash.c | 10 ++
mm/Kconfig | 1 +
mm/ksm.c | 14 +++---
4 files changed,
-by: Timofey Titovets
Acked-by: Andi Kleen
Cc: Linux-kernel
---
include/linux/xxhash.h | 24
lib/xxhash.c | 10 ++
2 files changed, 34 insertions(+)
diff --git a/include/linux/xxhash.h b/include/linux/xxhash.h
index 9e1f42cb57e9..195a0ae10e9b 100644
worlds,
create an arch-dependent xxhash() function that will use
the fastest algo for the current arch.
This is the first patch.
Performance info and the ksm update can be found in the second patch.
Changelog:
v1 -> v2:
- Move xxhash() to xxhash.h/c and separate the patches
Timofey Titovets (2):
xxHash: create a
: sleep_millisecs = 20 - default
jhash2: ~4.7%
xxhash64: ~3.3%
- 11 / 18 ~= 0.6 -> Profit: ~40%
- 3.3/4.7 ~= 0.7 -> Profit: ~30%
Signed-off-by: Timofey Titovets
Acked-by: Andi Kleen
---
mm/Kconfig | 1 +
mm/ksm.c | 14 +++---
2 files changed, 8 insertions(+), 7 del
Sorry Markus, but the main problem with your patches is described on this page:
https://btrfs.wiki.kernel.org/index.php/Developer%27s_FAQ#How_not_to_start
I.e., it's cool that you try to help as you can, but not in that way; thanks.
2017-08-21 16:27 GMT+03:00 SF Markus Elfring :
>> That's will work,
>
>
Not needed, and you missed several similar places (L573 & L895) in
that file with explicit initialisation.
Reviewed-by: Timofey Titovets
2017-08-20 23:20 GMT+03:00 SF Markus Elfring :
> From: Markus Elfring
> Date: Sun, 20 Aug 2017 22:02:54 +0200
>
> The variable "tm_
That will work, but it doesn't improve anything.
Reviewed-by: Timofey Titovets
2017-08-20 23:18 GMT+03:00 SF Markus Elfring :
> From: Markus Elfring
> Date: Sun, 20 Aug 2017 21:36:31 +0200
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer
You follow that doc [1], so it's okay, because that style is safer.
But I don't think such cleanups are really useful right now,
because they don't improve anything.
Reviewed-by: Timofey Titovets
[1] -
https://www.kernel.org/doc/html/v4.12/process/coding-style.html#allocating
Hi Nick Terrell,
If I understood everything correctly,
zstd can compress (decompress) data in a way compatible with gzip (zlib).
Is that also true for the in-kernel library?
If so, does it make sense to directly replace zlib with
zstd (configured to work like zlib) in place (as an example, for btrfs
zlib
It allows the user to control whether new VMAs are marked VM_MERGEABLE or not.
Create a new sysfs interface, /sys/kernel/mm/ksm/mark_new_vma:
1 - enabled - mark newly allocated VMAs as VM_MERGEABLE and add them to the ksm queue
0 - disable it
Signed-off-by: Timofey Titovets
---
include/linux/ksm.h | 10 +-
mm/ksm.c
Implement two functions:
ksm_vm_flags_mod() - if the existing flags are supported by ksm, mark the VMA
VM_MERGEABLE
ksm_vma_add_new() - if the VMA is marked VM_MERGEABLE, add it to the ksm page queue
Signed-off-by: Timofey Titovets
---
include/linux/ksm.h | 31 +++
mm/mmap.c
Allows controlling the mark_new_vma default value
Allows ksm to work on early-allocated VMAs
Signed-off-by: Timofey Titovets
---
mm/Kconfig | 7 +++
1 file changed, 7 insertions(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index 1d1ae6b..90f40a6 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -340,6
Mb (deduped)/(used)
v2:
Added Kconfig to control the default value of mark_new_vma
Added a sysfs interface to control mark_new_vma
Split into several patches
v3:
Documentation for ksm changed to clarify the new cha
Timofey Titovets (4):
KSM: Add auto flag new VMA
Signed-off-by: Timofey Titovets
---
Documentation/vm/ksm.txt | 7 +++
1 file changed, 7 insertions(+)
diff --git a/Documentation/vm/ksm.txt b/Documentation/vm/ksm.txt
index f34a8ee..880fdbf 100644
--- a/Documentation/vm/ksm.txt
+++ b/Documentation/vm/ksm.txt
@@ -24,6 +24,8 @@ KSM only
apply it and enable ksm:
echo 1 | sudo tee /sys/kernel/mm/ksm/run
This shows how much memory is saved:
echo $[$(cat /sys/kernel/mm/ksm/pages_shared)*$(getconf PAGE_SIZE)/1024 ]KB
On my system I save ~1% of memory: 26 Mb/2100 Mb (deduped)/(used)
Timofey Titovets (3):
KSM: Add auto flag new VMA
It allows the user to control the process of marking new VMAs as VM_MERGEABLE.
Create a new sysfs interface, /sys/kernel/mm/ksm/mark_new_vma:
1 - enabled - mark newly allocated VMAs as VM_MERGEABLE and add them to the ksm queue
0 - disable it
Signed-off-by: Timofey Titovets
---
include/linux/ksm.h | 10
m internal tree
If you see broken patch lines, I have also attached the patch.
From db8ad0877146a69e1e5d5ab98825cefcf44a95bb Mon Sep 17 00:00:00 2001
From: Timofey Titovets
Date: Sat, 8 Nov 2014 03:02:52 +0300
Subject: [PATCH] KSM: Add auto flag new VMA as VM_MERGEABLE
Signed-off-by: Timofey Titovets
--
2014-10-30 6:19 GMT+03:00 Matt :
> Hi Timofey,
> Hi List,
> don't forget to consider PKSM - it's supposed to be an improvement
> over UKSM & KSM:
>
> http://www.phoronix.com/scan.php?page=news_item=MTM0OTQ
> https://code.google.com/p/pksm/
>
> Kind Regards
>
> Matt
I may be mistaken; as far as I know, UKSM
GPL, and I think we can feel free to port
and adapt the code (crediting the author).
Please correct me if I'm mistaken or missing something.
This is just a stream of my thoughts %_%
---
> On Sat, Oct 25, 2014 at 09:32:01PM -0700, Andrew Morton wrote:
>> On Sat, 25 Oct 2014 22:25:56 +0300 Timof
Good time of day, people.
I tried to find the 'mm' subsystem-specific people and lists, but the
linux-mm list looks dead and the mail archive looks deprecated.
If I should send this message to another list or CC more people, let me know.
If these questions were already asked (I can't find earlier activity), feel
2014-08-24 8:41 GMT+03:00 Brian Norris :
> It looks like this intended to be 64-bit arithmetic, but it's actually
> performed as 32-bit. Fix that. (Note that 'increment' was being
> initialized twice, so this patch removes one of those.)
>
> Caught by Coverity Scan (CID 1201422).
>
>