On Wed, 1 May 2024 02:01:20 -0400 Michael S. Tsirkin
>
> and then it failed testing.
>
So did my patch [1] but then the reason was spotted [2,3]
[1] https://lore.kernel.org/lkml/20240430110209.4310-1-hdan...@sina.com/
[2] https://lore.kernel.org/lkml/20240430225005.4368-1-hdan...@sina.com/
[3]
On Tue, Apr 30, 2024 at 11:23:04AM -0500, Mike Christie wrote:
> On 4/30/24 8:05 AM, Edward Adam Davis wrote:
> > static int vhost_task_fn(void *data)
> > {
> > struct vhost_task *vtsk = data;
> > @@ -51,7 +51,7 @@ static int vhost_task_fn(void *data)
> > schedule();
> >
On Sat, 03 Feb 2024 14:16:16 +0800 Ubisectech Sirius
> Hello.
> We are Ubisectech Sirius Team, the vulnerability lab of China ValiantSec.
> Recently, our team has discovered an issue in Linux kernel
> 6.8.0-rc2-g6764c317b6bb.
> Attached to the email was a PoC file for the issue.
Could you test
On 23 Dec 2022 15:51:52 +0900 Daisuke Matsuda
> @@ -137,15 +153,27 @@ void rxe_sched_task(struct rxe_task *task)
> if (task->destroyed)
> return;
>
> - tasklet_schedule(&task->tasklet);
> + /*
> + * busy-loop while qp reset is in progress.
> + * This may be
On Tue, 16 Apr 2019 20:38:34 +0200 Christian König wrote:
> + /**
> + * @unpin_dma_buf:
> + *
> + * This is called by dma_buf_unpin and lets the exporter know that an
> + * importer doesn't need to the DMA-buf to stay were it is any more.
> + *
s/need to/need/
On Tue, 16 Apr 2019 20:38:35 +0200 Christian König wrote:
> @@ -331,14 +282,19 @@ EXPORT_SYMBOL(drm_gem_map_dma_buf);
> * @sgt: scatterlist info of the buffer to unmap
> * @dir: direction of DMA transfer
> *
> - * Not implemented. The unmap is done at drm_gem_map_detach(). This can be
> - *
On Tue, 16 Apr 2019 20:38:32 +0200 Christian König wrote:
> @@ -688,9 +689,9 @@ struct sg_table *dma_buf_map_attachment(struct
> dma_buf_attachment *attach,
> if (attach->sgt)
> return attach->sgt;
>
> - sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction);
>
On Tue, 16 Apr 2019 20:38:33 +0200 Christian König wrote:
> Each importer can now provide an invalidate_mappings callback.
>
> This allows the exporter to provide the mappings without the need to pin
> the backing store.
>
> v2: don't try to invalidate mappings when the callback is NULL,
>
On Tue, 16 Apr 2019 20:38:31 +0200 Christian König wrote:
> Add function variants which can be called with the reservation lock
> already held.
>
> v2: reordered, add lockdep asserts, fix kerneldoc
> v3: rebased on sgt caching
>
> Signed-off-by: Christian König
> ---
>
On April 11, 2017 10:06 PM Vlastimil Babka wrote:
>
> static void cpuset_change_task_nodemask(struct task_struct *tsk,
> nodemask_t *newmems)
> {
> - bool need_loop;
> -
> task_lock(tsk);
> - /*
> - * Determine if a loop is necessary if
> syscall_return_slowpath+0x184/0x1c0
> entry_SYSCALL_64_fastpath+0xab/0xad
>
> Reported-by: Vegard Nossum <vegard.nos...@gmail.com>
> Signed-off-by: Mike Kravetz <mike.krav...@oracle.com>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
On April 10, 2017 5:54 PM Xishi Qiu wrote:
> On 2017/4/10 17:37, Hillf Danton wrote:
>
> > On April 10, 2017 4:57 PM Xishi Qiu wrote:
> >> On 2017/4/10 14:42, Hillf Danton wrote:
> >>
> >>> On April 08, 2017 9:40 PM zhong Jiang wrote:
> >>>
On April 10, 2017 4:57 PM Xishi Qiu wrote:
> On 2017/4/10 14:42, Hillf Danton wrote:
>
> > On April 08, 2017 9:40 PM zhong Jiang wrote:
> >>
> >> when running the stable docker cases in the VM, the following issue will
> >> come up.
> >
On April 08, 2017 9:40 PM zhong Jiang wrote:
>
> when running the stable docker cases in the VM, the following issue will
> come up.
>
> #40 [8801b57ffb30] async_page_fault at 8165c9f8
> [exception RIP: down_read_trylock+5]
> RIP: 810aca65 RSP: 8801b57ffbe8
than I can double thank you, Mike:)
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
ode, but the change makes it
> more
> robust.
>
> Suggested-by: Michal Hocko <mho...@suse.com>
> Signed-off-by: Vlastimil Babka <vba...@suse.cz>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
Andrey Ryabinin <aryabi...@virtuozzo.com>
> Signed-off-by: Vlastimil Babka <vba...@suse.cz>
> Cc: <sta...@vger.kernel.org>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
> mm/page_alloc.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
>
ue. There is no such known context, but let's
> play it safe and make __alloc_pages_direct_compact() robust for cases where
> PF_MEMALLOC is already set.
>
> Fixes: a8161d1ed609 ("mm, page_alloc: restructure direct compaction handling
> in slowpath")
> Reported-by: Andrey
On March 31, 2017 2:49 PM Michal Hocko wrote:
> On Fri 31-03-17 11:49:49, Hillf Danton wrote:
> [...]
> > > -/* Can fail with -ENOMEM from allocating a wait table with vmalloc() or
> > > - * alloc_bootmem_node_nopanic()/memblock_virt_alloc_node_nopanic() */
On March 30, 2017 7:55 PM Michal Hocko wrote:
>
> +static void __meminit resize_zone_range(struct zone *zone, unsigned long
> start_pfn,
> + unsigned long nr_pages)
> +{
> + unsigned long old_end_pfn = zone_end_pfn(zone);
> +
> + if (start_pfn < zone->zone_start_pfn)
> +
On March 30, 2017 7:55 PM Michal Hocko wrote:
>
> From: Michal Hocko
>
> init_currently_empty_zone doesn't have any error to return yet it is
> still an int and callers try to be defensive and try to handle potential
> error. Remove this nonsense and simplify all callers.
>
It is already cut
On March 30, 2017 7:55 PM Michal Hocko wrote:
>
> @@ -5535,9 +5535,6 @@ int __meminit init_currently_empty_zone(struct zone
> *zone,
> zone_start_pfn, (zone_start_pfn + size));
>
> zone_init_free_lists(zone);
> - zone->initialized = 1;
> -
> - return 0;
> }
On March 28, 2017 1:06 AM Vito Caputo wrote:
>
> The existing path and memory cleanups appear to be in reverse order, and
> there's no iput() potentially leaking the inode in the last two error gotos.
>
> Also make put_memory shmem_unacct_size() conditional on !inode since if we
> entered
astpath+0x1f/0xc2
>
> Analysis provided by Tetsuo Handa <penguin-ker...@i-love.sakura.ne.jp>
> v2: Remove now redundant initialization in hugetlbfs_get_root
>
> Reported-by: Dmitry Vyukov <dvyu...@google.com>
> Signed-off-by: Mike Kravetz <mike.krav...@oracle.com>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
get+0x158/0x230 ipc/shm.c:657
> > entry_SYSCALL_64_fastpath+0x1f/0xc2
> > RIP: resv_map_release+0x265/0x330 mm/hugetlb.c:742
> >
> > Reported-by: Dmitry Vyukov <dvyu...@google.com>
> > Signed-off-by: Mike Kravetz <mike.krav...@oracle.com>
> > ---
>
0 fs/hugetlbfs/inode.c:1306
> > newseg+0x422/0xd30 ipc/shm.c:575
> > ipcget_new ipc/util.c:285 [inline]
> > ipcget+0x21e/0x580 ipc/util.c:639
> > SYSC_shmget ipc/shm.c:673 [inline]
> > SyS_shmget+0x158/0x230 ipc/shm.c:657
> > entry_SYSCALL_64_fastpath+0
; SyS_shmget+0x158/0x230 ipc/shm.c:657
> entry_SYSCALL_64_fastpath+0x1f/0xc2
> RIP: resv_map_release+0x265/0x330 mm/hugetlb.c:742
>
> Reported-by: Dmitry Vyukov <dvyu...@google.com>
> Signed-off-by: Mike Kravetz <mike.krav...@oracle.com>
> ---
Acked-by: Hillf D
> mm/hugetlb.c | 4 +++-
> 1 file changed, 3
on-present for hugetlb is
> not correct, because pmd_present() checks multiple bits (not only
> _PAGE_PRESENT) for historical reason and it can misjudge hugetlb state.
>
> Fixes: e66f17ff7177 ("mm/hugetlb: take page table lock in follow_huge_pmd()")
> Signed-off-by: Nao
On March 21, 2017 5:10 PM Dmitry Vyukov wrote:
>
> @@ -60,15 +60,8 @@ void notrace __sanitizer_cov_trace_pc(void)
> /*
>* We are interested in code coverage as a function of a syscall inputs,
>* so we ignore code executed in interrupts.
> - * The checks for whether we
On March 15, 2017 5:00 PM Aaron Lu wrote:
> void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned
> long end)
> {
> + struct batch_free_struct *batch_free, *n;
> +
s/*n/*next/
> tlb_flush_mmu(tlb);
>
> /* keep the page table cache within bounds */
>
echanism to kswapd. So, add kswapd_failures check
> on the throttle_direct_reclaim condition.
>
> Signed-off-by: Shakeel Butt <shake...@google.com>
> Suggested-by: Michal Hocko <mho...@suse.com>
> Suggested-by: Johannes Weiner <han...@cmpxchg.org>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
an can activate it.
> There is no point to introduce new return value SWAP_DIRTY
> in ttu at the moment.
>
> Acked-by: Kirill A. Shutemov
> Signed-off-by: Minchan Kim
> ---
Acked-by: Hillf Danton
> include/linux/rmap.h | 1 -
> mm/rmap.c| 6 +++---
> mm/vmsca
On March 13, 2017 8:36 AM Minchan Kim wrote:
>
> Nobody uses the ret variable. Remove it.
>
> Acked-by: Kirill A. Shutemov
> Signed-off-by: Minchan Kim
> ---
Acked-by: Hillf Danton
> mm/rmap.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
On March 07, 2017 12:24 AM Johannes Weiner wrote:
> On Mon, Mar 06, 2017 at 10:37:40AM +0900, Minchan Kim wrote:
> > On Fri, Mar 03, 2017 at 08:59:54AM +0100, Michal Hocko wrote:
> > > On Fri 03-03-17 10:26:09, Minchan Kim wrote:
> > > > On Tue, Feb 28, 2017 at 04:39:59PM -0500, Johannes Weiner
On March 03, 2017 5:45 AM Laura Abbott wrote:
>
> +static struct sg_table *dup_sg_table(struct sg_table *table)
> +{
> + struct sg_table *new_table;
> + int ret, i;
> + struct scatterlist *sg, *new_sg;
> +
> + new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
> + if
On March 02, 2017 11:11 PM Kirill A. Shutemov wrote:
>
> Basically the same race as with numa balancing in change_huge_pmd(), but
> a bit simpler to mitigate: we don't need to preserve dirty/young flags
> here due to MADV_FREE functionality.
>
> Signed-off-by: Kirill A. Shutemov
On March 02, 2017 2:39 PM Minchan Kim wrote:
> @@ -1424,7 +1424,8 @@ static int try_to_unmap_one(struct page *page, struct
> vm_area_struct *vma,
> } else if (!PageSwapBacked(page)) {
> /* dirty MADV_FREE page */
Nit: enrich the comment
g tricks for pages skipped due to zone constraints.
>
> Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
eclaiming a few pages, the backoff function gets reset also,
> and so is of little help in these scenarios.
>
> We might want a backoff function for when there IS progress, but not
> enough to be satisfactory. But this isn't that. Remove it.
>
> Signed-off-by: Johannes Weiner
> ---
Acked-by: Hillf Danton
d-off-by: Johannes Weiner <han...@cmpxchg.org>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
ny meaningful way.
>
> Remove the counter and the unused pgdat_reclaimable().
>
> Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
> ---
> mm/vmscan.c | 19 +++++------
> 1 file changed, 5 insertions(+), 14 deletions(-)
>
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
rce_scan stuff, as well as the ugly multi-pass target
> calculation that it necessitated.
>
> Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
s to be a spurious change in this patch as I doubt the
> series was tested with laptop_mode, and neither is that particular
> change mentioned in the changelog. Remove it, it's still recent.
>
> Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
the same pgdat over and over again doesn't make sense.
>
> Fixes: 599d0c954f91 ("mm, vmscan: move LRU lists to node")
> Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
h (Michal)
>
> Reported-by: Jia He <hejia...@gmail.com>
> Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
> Tested-by: Jia He <hejia...@gmail.com>
> Acked-by: Michal Hocko <mho...@suse.com>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
.@techsingularity.net>
> Cc: Andrew Morton <a...@linux-foundation.org>
> Acked-by: Johannes Weiner <han...@cmpxchg.org>
> Acked-by: Minchan Kim <minc...@kernel.org>
> Signed-off-by: Shaohua Li <s...@fb.com>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
h Dickins <hu...@google.com>
> Cc: Rik van Riel <r...@redhat.com>
> Cc: Mel Gorman <mgor...@techsingularity.net>
> Cc: Andrew Morton <a...@linux-foundation.org>
> Acked-by: Johannes Weiner <han...@cmpxchg.org>
> Signed-off-by: Shaohua Li <s...@fb.com>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
m>
> Cc: Minchan Kim <minc...@kernel.org>
> Cc: Hugh Dickins <hu...@google.com>
> Cc: Rik van Riel <r...@redhat.com>
> Cc: Mel Gorman <mgor...@techsingularity.net>
> Cc: Andrew Morton <a...@linux-foundation.org>
> Suggested-by: Johannes Weiner <han...@cmpxchg.org>
> Signed-off-by: Shaohua Li <s...@fb.com>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
o
> reclaim too many MADV_FREE pages before used once pages.
>
> Based on Minchan's original patch
>
> Cc: Michal Hocko
> Cc: Minchan Kim
> Cc: Hugh Dickins
> Cc: Johannes Weiner
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Andrew Morton
> Signed-off-by: Shaohua Li
> ---
Acked-by: Hillf Danton
umption doesn't hold any more, so fix them.
>
> Cc: Michal Hocko
> Cc: Minchan Kim
> Cc: Hugh Dickins
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Andrew Morton
> Acked-by: Johannes Weiner
> Signed-off-by: Shaohua Li
> ---
Acked-by: Hillf Danton
MAP is unnecessary. If no other flags set (for
> example, TTU_MIGRATION), an unmap is implied.
>
> Cc: Michal Hocko
> Cc: Minchan Kim
> Cc: Hugh Dickins
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Andrew Morton
> Suggested-by: Johannes Weiner
> Signed-off-by: Shaohua Li
> ---
Acked-by: Hillf Danton
On February 21, 2017 12:34 AM Vlastimil Babka wrote:
> On 02/16/2017 09:21 AM, Hillf Danton wrote:
> > Right, but the order-3 request can also come up while kswapd is active and
> > gives up order-5.
>
> "Giving up on order-5" means it will set sc.order to 0,
argument passed.
>
> Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.vnet.ibm.com>
Fixes: bae473a423 ("mm: introduce fault_env")
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
> ---
> mm/huge_memory.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>
On February 16, 2017 4:11 PM Mel Gorman wrote:
> On Thu, Feb 16, 2017 at 02:23:08PM +0800, Hillf Danton wrote:
> > On February 15, 2017 5:23 PM Mel Gorman wrote:
> > > */
> > > static int kswapd(void *p)
> > > {
> > > - unsigned int alloc_order, r
On February 15, 2017 5:23 PM Mel Gorman wrote:
> */
> static int kswapd(void *p)
> {
> - unsigned int alloc_order, reclaim_order, classzone_idx;
> + unsigned int alloc_order, reclaim_order;
> + unsigned int classzone_idx = MAX_NR_ZONES - 1;
> pg_data_t *pgdat =
egligible.
>
> This patch is included with the data in case a bisection leads to this area.
> This patch is also a pre-requisite for the rest of the series.
>
> Signed-off-by: Shantanu Goel <sgoe...@yahoo.com>
> Signed-off-by: Mel Gorman <mgor...@techsingularity.net>