When trying file readahead via posix_fadvise(), I find it doesn't work properly.
For example, after posix_fadvise(POSIX_FADV_WILLNEED) on a 10MB file, the kernel
actually reads ahead only 512KB of data into the page cache, even if there is
enough free memory on the machine.
When tracing into the kernel, I find the
If SPARSEMEM, use page_ext in mem_section;
if !SPARSEMEM, use page_ext in pgdat.
Signed-off-by: Weijie Yang <weijie.y...@samsung.com>
---
include/linux/mmzone.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index c60df92..43c412c 100644
--- a/include
cma_mutex and uses a per-cma-area alloc_lock;
this allows concurrent cma page allocation in different cma areas while
protecting access to the same pageblocks.
Signed-off-by: Weijie Yang <weijie.y...@samsung.com>
---
mm/cma.c | 6 +++---
mm/cma.h | 1 +
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm
This patch clears zram disk io accounting when resetting the zram device;
if we don't do this, the residual io accounting stats will affect the
diskstat in the next zram active cycle.
Signed-off-by: Weijie Yang <weijie.y...@samsung.com>
---
drivers/block/zram/zram_drv.c | 2 ++
1 file changed, 2 insertions(+)
diff --git
prevents the buddy page getting allocated:
they are not under the same zone->lock.
If we can't remove the zone_id check statement, it's better to handle
this rare race. This patch fixes it by placing the zone_id check
before the VM_BUG_ON_PAGE check.
Signed-off-by: Weijie Yang <weijie.y...@samsung.com>
Acked-by: Mel Gorman
ping. Any comments?
On Wed, Nov 12, 2014 at 5:50 PM, Weijie Yang wrote:
> This is an RFC patch: because the current PAGE_SIZE is equal to PAGE_CACHE_SIZE,
> there isn't any difference or issue at runtime.
>
> However, the current code mixes these two aligned_size inconsistently
On Tue, Dec 9, 2014 at 5:49 PM, Vlastimil Babka wrote:
> On 12/09/2014 08:51 AM, Weijie Yang wrote:
>>
>> The freepage_migratetype is a temporary cached value which represents
>> the free page's pageblock migratetype. Now we use it in two scenarios:
>>
>> 1. Use it as a cached value in page
On Tue, Dec 9, 2014 at 5:24 PM, Vlastimil Babka wrote:
> On 12/09/2014 08:51 AM, Weijie Yang wrote:
>>
>> when we test whether the pages in a range are free, there is a small
>> chance we encounter a page which is not in the buddy system but whose page_count is 0.
>> That means that page could
On Tue, Dec 9, 2014 at 5:59 PM, Mel Gorman wrote:
> On Tue, Dec 09, 2014 at 03:40:35PM +0800, Weijie Yang wrote:
>> If the free page and its buddy have different zone ids, the current
>> zone->lock can't prevent the buddy page from getting allocated; this could
>> trigger VM_BU
2. Use it in the page alloc path to update NR_FREE_CMA_PAGES statistics.
This patch aims at scenario 1 and removes two redundant
set_freepage_migratetype() calls, which matters in the hot path.
Signed-off-by: Weijie Yang
---
mm/page_alloc.c |2 --
1 file changed, 2 deletions(-)
diff
The commit ad53f92e (fix incorrect isolation behavior
by rechecking migratetype) patch series has ensured this.
So the freepage_migratetype check for page_count==0 page in
__test_page_isolated_in_pageblock() is meaningless.
This patch removes the unnecessary freepage_migratetype check.
Signed-off-by: Weijie Yang
---
mm/page_isolation.c |
[MIGRATE_ISOLATE].
This patch removes the unnecessary freepage_migratetype check and the
redundant page moving.
Signed-off-by: Weijie Yang
---
mm/page_isolation.c | 17 +
1 file changed, 1 insertion(+), 16 deletions(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index
ddy) is true                                hold zone_2 lock
page_order(buddy) == order is true          alloc buddy
                                            trigger VM_BUG_ON_PAGE(page_count(buddy) != 0)
This patch fixes this issue by placing the zone id check before
the VM_BUG_ON_PAGE check.
Signed-off-by: Weijie Yang
---
mm/page_alloc.c |
On Thu, Nov 20, 2014 at 5:28 AM, Vlastimil Babka wrote:
> On 11/17/2014 11:40 AM, Weijie Yang wrote:
>> The commit ad53f92e (fix incorrect isolation behavior by rechecking
>> migratetype)
>> patch series describes the race between page isolation and the free path, and tries to
>> fix the freepage
On Wed, Nov 19, 2014 at 6:29 AM, Seth Jennings wrote:
> On Tue, Nov 18, 2014 at 04:51:36PM +0800, Weijie Yang wrote:
>> If a frontswap dup-store fails, it should invalidate the expired page
>> in the backend, or it could trigger data corruption issues.
>> Such as:
failure.
Signed-off-by: Weijie Yang
---
mm/frontswap.c |4 +++-
1 files changed, 3 insertions(+), 1 deletions(-)
diff --git a/mm/frontswap.c b/mm/frontswap.c
index c30eec5..f2a3571 100644
--- a/mm/frontswap.c
+++ b/mm/frontswap.c
@@ -244,8 +244,10 @@ int __frontswap_store(struct page *page
free the page to the
free_list to avoid subsequently misusing a stale value, and use a WARN_ON_ONCE
to catch a potential undetected race between the isolation and free paths.
Signed-off-by: Weijie Yang
---
mm/page_alloc.c |1 +
mm/page_isolation.c | 17 +
2 files changed, 6
On Fri, Oct 31, 2014 at 3:25 PM, Joonsoo Kim wrote:
> There are two paths to reach core free function of buddy allocator,
> __free_one_page(), one is free_one_page()->__free_one_page() and the
> other is free_hot_cold_page()->free_pcppages_bulk()->__free_one_page().
> Each path has a race
On Thu, Nov 13, 2014 at 3:34 AM, Michal Hocko wrote:
> On Thu 06-11-14 16:08:02, Weijie Yang wrote:
>> In the undo path of start_isolate_page_range(), we need to check
>> the pfn validity before accessing its page, or it will trigger an
>> addressing exception if there is a hole in the zone.
ge in now accessed" -> "page is now
>> accessed"
>>
>> Signed-off-by: Mahendran Ganesh
> Acked-by: Minchan Kim
Acked-by: Weijie Yang
> To be clear about "fixes this issue", it's not a bug but just clean up
> so it doesn't change any behavi
wanted.
According to the man page, mincore uses PAGE_SIZE as its size unit, so this patch
uses PAGE_SIZE instead of PAGE_CACHE_SIZE.
Signed-off-by: Weijie Yang
---
mm/mincore.c | 19 +--
1 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/mm/mincore.c b/mm/mincore.c
index
When the encountered pte is a swap entry, the current code handles two cases:
migration entries and normal swap entries, but there is a third case: hwpoison pages.
This patch adds hwpoison page handling, considering a hwpoison page in-core,
the same as migration entries.
Signed-off-by: Weijie Yang
---
mm/mincore.c |4 ++--
1
On Wed, Nov 12, 2014 at 6:23 AM, Andrew Morton
wrote:
> On Thu, 06 Nov 2014 16:08:02 +0800 Weijie Yang
> wrote:
>
>> In the undo path of start_isolate_page_range(), we need to check
>> the pfn validity before accessing its page, or it will trigger an
>> addressing
On Thu, Nov 6, 2014 at 4:49 PM, Joonsoo Kim wrote:
> On Thu, Nov 06, 2014 at 04:09:08PM +0800, Weijie Yang wrote:
>> If a race between isolation and allocation happens, we may need to move
>> some freepages to MIGRATE_ISOLATE in __test_page_isolated_in_pageblock().
>> The
.
This patch fixes this rare issue.
Signed-off-by: Weijie Yang
---
mm/page_isolation.c |5 -
1 files changed, 4 insertions(+), 1 deletions(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 3ddc8b3..15b51de 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -193,12
In the undo path of start_isolate_page_range(), we need to check
the pfn validity before accessing its page, or it will trigger an
addressing exception if there is a hole in the zone.
Signed-off-by: Weijie Yang
---
mm/page_isolation.c |7 +--
1 files changed, 5 insertions(+), 2 deletions
t
>>> - the CMA region is not 16 M aligned
>
> On Wed, Nov 05 2014, Weijie Yang wrote:
>> I think the device driver should ensure that situation cannot occur,
>> by assigning a suitable alignment parameter in cma_declare_contiguous().
>
> What about default CMA area
On Wed, Nov 5, 2014 at 12:18 PM, Gregory Fong wrote:
> On Tue, Nov 4, 2014 at 2:27 PM, Michal Nazarewicz wrote:
>> On Tue, Nov 04 2014, Gregory Fong wrote:
>>> The alignment in cma_alloc() is done w.r.t. the bitmap. This is a
>>> problem when, for example:
>>>
>>> - a device requires 16M (order
zram could kunmap_atomic() a NULL pointer in a rare situation:
a zram page becomes a fully-zeroed page after a partial write io.
The current code doesn't handle this case and kunmap_atomic()s a
NULL pointer, which panics the kernel.
This patch fixes this issue.
Signed-off-by: Weijie Yang
---
drivers
Commit-ID: 3c325f8233c35fb35dec3744ba01634aab4ea36a
Gitweb: http://git.kernel.org/tip/3c325f8233c35fb35dec3744ba01634aab4ea36a
Author: Weijie Yang
AuthorDate: Fri, 24 Oct 2014 17:00:34 +0800
Committer: Ingo Molnar
CommitDate: Tue, 28 Oct 2014 07:36:50 +0100
x86, cma: Reserve DMA
B
access meta, get a NULL value
init zram, done
init_done() is true
access meta->mem_pool, get a NULL pointer BUG
This patch fixes this issue.
Signed-off-by: Weijie Yang
Acked-by: Minchan Kim
Acked-by: Sergey Senozhatsky
---
drivers/block/z
On Sun, Oct 26, 2014 at 9:41 AM, Minchan Kim wrote:
> Hello,
>
> On Sat, Oct 25, 2014 at 05:25:11PM +0800, Weijie Yang wrote:
>> The commit 461a8eee6a ("zram: report maximum used memory") introduces a new
>> knob "mem_used_max" in zram.stats
value
init zram, done
init_done() is true
access meta->mem_pool, get a NULL pointer BUG
This patch fixes this issue.
Signed-off-by: Weijie Yang
---
drivers/block/zram/zram_drv.c |5 +++--
1 files changed, 3 insertions(+), 2 deleti
The commit 461a8eee6a ("zram: report maximum used memory") introduces a new
knob "mem_used_max" in zram.stats sysfs, and wants to reset it via write 0
to the sysfs interface.
However, the current code can't reset it correctly, so let's fix it.
Signed-off-by: Weijie Yang
---
order consistent in functions declaration
and definition.
Signed-off-by: Weijie Yang
---
include/linux/cma.h |8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 0430ed0..a93438b 100644
--- a/include/linux/cma.h
+++ b
r initmem_init() so that
high_memory is initialized before it is accessed.
Reported-by: Fengguang Wu
Signed-off-by: Weijie Yang
---
arch/x86/kernel/setup.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 235cfd3..ab08aa2 100644
On Fri, Oct 24, 2014 at 7:42 AM, Laurent Pinchart
wrote:
> Hi Michal,
>
> On Thursday 23 October 2014 18:53:36 Michal Nazarewicz wrote:
>> On Thu, Oct 23 2014, Laurent Pinchart wrote:
>> > If activation of the CMA area fails its mutex won't be initialized,
>> > leading to an oops at allocation
On Thu, Oct 9, 2014 at 10:04 AM, Fengguang Wu wrote:
> Hi Marek,
>
> FYI, we noticed the below changes on
>
> git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> commit 478e86d7c8c5f41e29abb81b05b459d24bdc71a2 ("mm: cma: adjust address
> limit to avoid hitting low/high
On Thu, Oct 16, 2014 at 11:35 AM, Hui Zhu wrote:
> In fallbacks of page_alloc.c, MIGRATE_CMA is the fallback of
> MIGRATE_MOVABLE.
> MIGRATE_MOVABLE will use MIGRATE_CMA when it doesn't have a page of the
> order that the Linux kernel wants.
>
> If a system that has a lot of user space program is running,
alloc kvm_rma_pages, it will pass 15 as the
expected align value; with the current computation, however, we get 0 as
the cma bitmap aligned mask rather than 511.
This patch fixes the cma bitmap aligned mask computation.
Signed-off-by: Weijie Yang
---
mm/cma.c |5 -
1 file changed, 4 insertions(+), 1 de
On Tue, Sep 23, 2014 at 12:48 PM, 朱辉 wrote:
>
>
> On 09/23/14 12:18, Greg KH wrote:
>> On Tue, Sep 23, 2014 at 10:57:09AM +0800, Hui Zhu wrote:
>>> The cause of this issue is that when free memory is low and a lot of tasks are
>>> trying to shrink memory, the task that is killed by lowmemkiller
On Fri, Aug 29, 2014 at 4:12 PM, Mel Gorman wrote:
> On Fri, Aug 29, 2014 at 03:03:19PM +0800, Weijie Yang wrote:
>> When entering the page_alloc slowpath, we wake up kswapd on every pgdat
>> according to the zonelist and high_zoneidx. However, this doesn't
>> take nodemask into account
of
for_each_zone_zonelist() in wake_all_kswapds() to avoid the above situation.
Signed-off-by: Weijie Yang
---
mm/page_alloc.c |9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 18cee0d..29b595a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
On Tue, Jun 3, 2014 at 4:22 PM, Minchan Kim wrote:
> Hello,
>
> On Tue, Jun 03, 2014 at 03:59:06PM +0800, Weijie Yang wrote:
>> On Mon, Jun 2, 2014 at 8:43 AM, Minchan Kim wrote:
>> > Hello Weijie,
>> >
>> > Thanks for resending.
>> > Below
On Mon, Jun 2, 2014 at 8:43 AM, Minchan Kim wrote:
> Hello Weijie,
>
> Thanks for resending.
> Below are mostly nitpicks.
>
> On Fri, May 30, 2014 at 04:34:44PM +0800, Weijie Yang wrote:
>> Currently, we use a rwlock tb_lock to protect concurrent access to
>> the
lag() to zram_test_zero()
- add some comments
Changes since v2: https://lkml.org/lkml/2014/5/15/113
- change size type from int to size_t in zram_set_obj_size()
- refactor zram_set_obj_size() to make it readable
- add comments
Signed-off-by: Weijie Yang
---
drivers/block/zram/zram_drv.c |
Hello,
Sorry for my late reply; I was on a business trip.
On Wed, May 21, 2014 at 3:51 PM, Minchan Kim wrote:
> Hello Andrew,
>
> On Tue, May 20, 2014 at 03:10:51PM -0700, Andrew Morton wrote:
>> On Thu, 15 May 2014 16:00:47 +0800 Weijie Yang
>> wrote:
>>
>> >
On Fri, May 16, 2014 at 2:51 PM, Minchan Kim wrote:
> Hello Andrew,
>
> On Thu, May 15, 2014 at 02:38:56PM -0700, Andrew Morton wrote:
>> On Thu, 15 May 2014 16:00:47 +0800 Weijie Yang
>> wrote:
>>
>> > Currently, we use a rwlock tb_lock to protect concurren
According to the calculation, the ZS_SIZE_CLASSES value is 255
on systems with a 4K page size, not 254. The old value likely
forgot to count ZS_MIN_ALLOC_SIZE in.
This patch fixes this trivial issue in the comments.
Signed-off-by: Weijie Yang
---
mm/zsmalloc.c |2 +-
1 file changed, 1 insertion(+), 1
lag() to zram_test_zero()
- add some comments
- change the patch subject
Signed-off-by: Weijie Yang
---
drivers/block/zram/zram_drv.c | 84 +++--
drivers/block/zram/zram_drv.h | 22 ---
2 files changed, 63 insertions(+), 43 deletions(-)
diff --
On Thu, May 8, 2014 at 2:24 PM, Minchan Kim wrote:
> On Wed, May 07, 2014 at 11:52:59PM +0900, Joonsoo Kim wrote:
>> >> The most popular use of zram is as in-memory swap for small embedded systems,
>> >> so I don't want to increase the memory footprint without good reason, although
>> >> it makes synthetic