Hi Jongseok,
On Fri, 6 Jul 2018 at 07:11, Jongseok Kim wrote:
>
> During the processing of headless pages in z3fold_reclaim_page(),
> there was a problem that the zhdr pointed to another page
> or a page was already released in z3fold_free(). So, the wrong page
> is encoded in headless, or
unction.
This patch supersedes "[PATCH] z3fold: fix reclaim lock-ups".
Signed-off-by: Vitaly Wool
Reviewed-by: Snild Dolkow
---
mm/z3fold.c | 101
1 file changed, 62 insertions(+), 39 deletions(-)
diff --git a/mm/z3fold.c b/
object we couldn't correctly map before.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 48 +++-
1 file changed, 35 insertions(+), 13 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 4b366d181f35..86359b565d45 100644
--- a/mm/z3fold.c
+++ b/m
Hi Jongseok,
On Thu, 3 May 2018 at 08:36, Jongseok Kim wrote:
> In the processing of headless pages, there was a problem that the
> zhdr pointed to another page or a page was already released in
> z3fold_free(). So, the wrong page is encoded in headless, or test_bit
> does not work properly in
Do not try to optimize in-page object layout while the page is
under reclaim. This fixes lock-ups on reclaim and improves reclaim
performance at the same time.
Reported-by: Guenter Roeck <li...@roeck-us.net>
Signed-off-by: Vitaly Wool <vitaly@sony.com>
---
mm/z3fold.c | 42 ++
1 file changed
On Tue, 17 Apr 2018 at 18:35, Guenter Roeck wrote:
> Getting better; the log is much less noisy. Unfortunately, there are still
> locking problems, resulting in a hung task. I copied the log message to [1].
> This is with [2] applied on top of v4.17-rc1.
Now this version (this is a full patch
Hi Guenter,
> [ ... ]
> > Ugh. Could you please keep that patch and apply this on top:
> >
> > diff --git a/mm/z3fold.c b/mm/z3fold.c
> > index c0bca6153b95..e8a80d044d9e 100644
> > --- a/mm/z3fold.c
> > +++ b/mm/z3fold.c
> > @@ -840,6 +840,7 @@ static int z3fold_reclaim_page(struct z3fold_pool
On 4/16/18 5:58 PM, Guenter Roeck wrote:
On Mon, Apr 16, 2018 at 02:43:01PM +0200, Vitaly Wool wrote:
Hey Guenter,
On 04/13/2018 07:56 PM, Guenter Roeck wrote:
On Fri, Apr 13, 2018 at 05:40:18PM +, Vitaly Wool wrote:
On Fri, Apr 13, 2018, 7:35 PM Guenter Roeck <li...@roeck-us.net> wrote:
On Fri, Apr 13
Hey Guenter,
On 04/13/2018 07:56 PM, Guenter Roeck wrote:
On Fri, Apr 13, 2018 at 05:40:18PM +, Vitaly Wool wrote:
On Fri, Apr 13, 2018, 7:35 PM Guenter Roeck <li...@roeck-us.net> wrote:
On Fri, Apr 13, 2018 at 05:21:02AM +, Vitaly Wool wrote:
Hi Guenter,
On Fri, 13 Apr 2018 at 00:01, Guenter Roeck wrote:
Hi Guenter,
On Fri, 13 Apr 2018 at 00:01, Guenter Roeck wrote:
> Hi all,
> we are observing crashes with z3pool under memory pressure. The kernel version
> used to reproduce the problem is v4.16-11827-g5d1365940a68, but the problem was
> also seen with v4.14 based kernels.
just before I dig
Currently if z3fold couldn't find an unbuddied page it would first
try to pull a page off the stale list. The problem with this
approach is that we can't 100% guarantee that the page is not
processed by the workqueue thread at the same time unless we run
cancel_work_sync() on it, which we can't
Binder uses an hlist for its deferred list, which isn't a good match.
It's slow and requires a mutual exclusion mechanism to protect its
operations. Moreover, having schedule_work() called under a mutex
may cause significant delays and creates a noticeable adverse effect
on Binder performance.
Deferred list
It is sometimes necessary to be able to use llist in
the following manner:
> if (node_unlisted(node))
> llist_add(node, list);
i.e. only add a node to the list if it's not already on a list.
This is not possible without taking locks because otherwise there's
an
will delete the
first node off the list and mark it as not being on any list.
Signed-off-by: Vitaly Wool <vitaly@sony.com>
---
include/linux/llist.h | 25 +
lib/llist.c | 29 +
2 files changed, 54 insertions(+)
diff --git a/include/linux/llist.h b
compaction is scheduled. It then becomes the compaction function's
responsibility to decrease the counter and quit immediately if
the page was actually freed.
Signed-off-by: Vitaly Wool <vitaly.w...@sonymobile.com>
Cc: stable <sta...@vger.kernel.org>
---
mm/z3fold.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/z3f
Hi Andrew,
On 14 September 2017 at 23:15, Andrew Morton <a...@linux-foundation.org> wrote:
> On Thu, 14 Sep 2017 15:59:36 +0200 Vitaly Wool <vitalyw...@gmail.com> wrote:
>
>> Fix the situation when clear_bit() is called for page->private before
>> the page pointer is actually assigned. While at it, remove work_busy()
>>
Fix the situation when clear_bit() is called for page->private before
the page pointer is actually assigned. While at it, remove the work_busy()
check because it is costly and does not give a 100% guarantee anyway.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 6 ++---
1 file changed, 2 inserti
a bug.
To avoid that, spin_lock() has to be taken earlier, before the
kref_put() call mentioned earlier.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 486550df32be..b04fa3ba1bf2 100644
--- a/mm
On 6 September 2017 at 02:19, Laura Abbott <labb...@redhat.com> wrote:
> On 09/05/2017 05:55 AM, Vitaly Wool wrote:
>> ion page pool may become quite large and scattered all around
>> the kernel memory area. These pages are actually not used so
>> moving them around to reduce fragmentation is quite che
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
drivers/staging/android/ion/ion.h | 2 +
drivers/staging/android/ion/ion_page_pool.c | 165 +++-
2 files changed, 163 insertions(+), 4 deletions(-)
diff --git a/drivers/staging/android/ion/ion.h
b/drivers/staging/android/ion/ion.h
index
in for almost 6x performance increase.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 479 +++-
1 file changed, 344 insertions(+), 135 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 54f63c4a809a..b44ce5059442 100644
--- a/mm/z3fold.c
will go up.
This patch also introduces two worker threads: one for async
in-page object layout optimization and one for releasing freed
pages.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 479 +++-
1 file changed, 344 insertions
Stress testing of the current z3fold implementation on an 8-core system
revealed it was possible that a z3fold page deleted from its unbuddied
list in z3fold_alloc() would be put on another unbuddied list by
z3fold_free() while z3fold_alloc() is still processing it. This has
been introduced with
The patch "z3fold: add kref refcounting" introduced a bug in
z3fold_reclaim_page() with a function exit path that may leave the
pool->lock spinlock held. Here comes the trivial fix.
Reported-by: Alexey Khoroshilov <khoroshi...@ispras.ru>
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 1 +
1 file changed, 1 inser
With both coming and already present locking optimizations,
introducing kref to reference-count z3fold objects is the right
thing to do. Moreover, it makes the buddied list no longer necessary,
and allows for a simpler handling of headless pages.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 151
, using BIG_CHUNK_GAP define as
a threshold for middle chunk to be worth moving.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 26 +-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 98ab01f..be8b56e 100644
--- a/mm/z3fold.c
+++ b
implements a spinlock-based per-page locking mechanism which
is lightweight enough to normally fit into the z3fold header.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 148 +++-
1 file changed, 106 insertions(+), 42 deletions(-)
diff --git a/mm
of num_free_chunks() and the address to
move the middle chunk to in case of in-page compaction in
z3fold_compact_page().
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 114 ++--
1 file changed, 64 insertions(+), 50 deletions(-)
diff --git a/mm/z3fold.c
This patch converts the pages_nr per-pool counter to atomic64_t.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 20 +---
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 207e5dd..2273789 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -80,7
This is a new take on the z3fold optimizations/fixes consolidation, revised after
comments from Dan ([1] - [6]).
The coming patches are to be applied on top of the following commit:
Author: zhong jiang
Date: Tue Dec 20 11:53:40 2016 +1100
mm/z3fold.c: limit first_num to the actual range of
t but I can do that :)
>
> the header's already rounded up to chunk size, so if there's room then
> it won't take any extra memory. but it works either way.
So let's have it like this then:
With both coming and already present locking optimizations,
introducing kref to reference-count z
On Wed, 11 Jan 2017 17:43:13 +0100
Vitaly Wool <vitalyw...@gmail.com> wrote:
> On Wed, Jan 11, 2017 at 5:28 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> > On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool <vitalyw...@gmail.com> wrote:
> >> z3fold_compact_page() currently only handles the situation when
> >> there's a single middle c
On Wed, Jan 11, 2017 at 6:39 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Wed, Jan 11, 2017 at 12:27 PM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> On Wed, Jan 11, 2017 at 6:08 PM, Dan Streetman <ddstr...@ieee.org> wrote:
>>> On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool wrote:
>>>> With both coming and already
On Wed, Jan 11, 2017 at 6:08 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> With both coming and already present locking optimizations,
>> introducing kref to reference-count z3fold objects is the right
>> thing to do. Moreover, it makes b
On Wed, Jan 11, 2017 at 5:58 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Wed, Jan 11, 2017 at 5:52 AM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> On Wed, Jan 4, 2017 at 7:42 PM, Dan Streetman <ddstr...@ieee.org> wrote:
>>> On Sun, Dec 25, 2016 at 7:40 PM, Vitaly Wool wrote:
>>>> With both coming and already
On Wed, Jan 11, 2017 at 5:28 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> z3fold_compact_page() currently only handles the situation when
>> there's a single middle chunk within the z3fold page. However it
>> may be worth it to mov
implements a spinlock-based per-page locking mechanism which
is lightweight enough to normally fit into the z3fold header.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 148 +++-
1 file changed, 106 insertions(+), 42 deletions(-)
diff --git a/mm
, using BIG_CHUNK_GAP define as
a threshold for middle chunk to be worth moving.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 26 +-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 98ab01f..fca3310 100644
--- a/mm/z3fold.c
+++ b
This patch converts the pages_nr per-pool counter to atomic64_t.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 20 +---
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 207e5dd..2273789 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -80,7
With both coming and already present locking optimizations,
introducing kref to reference-count z3fold objects is the right
thing to do. Moreover, it makes the buddied list no longer necessary,
and allows for a simpler handling of headless pages.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 145
of num_free_chunks() and the address to
move the middle chunk to in case of in-page compaction in
z3fold_compact_page().
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 114 ++--
1 file changed, 64 insertions(+), 50 deletions(-)
diff --git a/mm/z3fold.c
This is a consolidation of z3fold optimizations and fixes done so far, revised
after comments from Dan ([1], [2], [3], [4]).
The coming patches are to be applied on top of the following commit:
Author: zhong jiang
Date: Tue Dec 20 11:53:40 2016 +1100
mm/z3fold.c: limit first_num to the
On Wed, Jan 4, 2017 at 7:42 PM, Dan Streetman <ddstr...@ieee.org> wrote:
> On Sun, Dec 25, 2016 at 7:40 PM, Vitaly Wool <vitalyw...@gmail.com> wrote:
>> With both coming and already present locking optimizations,
>> introducing kref to reference-count z3fold objects is the right
>> thing to do. Moreover, it makes b
On Wed, Jan 4, 2017 at 4:43 PM, Dan Streetman wrote:
>> static int z3fold_compact_page(struct z3fold_header *zhdr)
>> {
>> struct page *page = virt_to_page(zhdr);
>> - void *beg = zhdr;
>> + int ret = 0;
>
> I still don't understand why you're adding ret and using goto.
With both coming and already present locking optimizations,
introducing kref to reference-count z3fold objects is the right
thing to do. Moreover, it makes the buddied list no longer necessary,
and allows for a simpler handling of headless pages.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 137
of num_free_chunks() and the address to
move the middle chunk to in case of in-page compaction in
z3fold_compact_page().
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 161
1 file changed, 87 insertions(+), 74 deletions(-)
diff --git a/mm/z3fold.c
implements a raw spinlock-based per-page locking mechanism which
is lightweight enough to normally fit into the z3fold header.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 148 +++-
1 file changed, 106 insertions(+), 42 deletions(-)
diff --git
to fewer actual page allocations on the hot path due to denser
in-page allocation).
This patch adds the relevant code, using the BIG_CHUNK_GAP define as a
threshold for a middle chunk to be worth moving.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 60
Convert pages_nr per-pool counter to atomic64_t so that we won't have
to care about locking for reading/updating it.
Signed-off-by: Vitaly Wool <vitalyw...@gmail.com>
---
mm/z3fold.c | 20 +---
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 207e5dd