On 8/26/19 8:11 AM, Vlastimil Babka wrote:
On 7/20/19 1:32 AM, Ralph Campbell wrote:
When CONFIG_MIGRATE_VMA_HELPER is enabled, migrate_vma() calls
migrate_vma_collect(), which initializes a struct mm_walk but
doesn't initialize mm_walk.pud_entry. (Found by code inspection.)
Use a C structure initializer to make sure it is set to NULL.

Fixes: 8763cb45ab967 ("mm/migrate: new memory migration helper for use with device memory")
Cc: sta...@vger.kernel.org
Signed-off-by: Ralph Campbell <rcampb...@nvidia.com>
Cc: "Jérôme Glisse" <jgli...@redhat.com>
Cc: Andrew Morton <a...@linux-foundation.org>

So this bug can manifest as some garbage address on the stack being called, right?
I wonder how come it hasn't actually happened yet?

Right.
Probably because HMM isn't widely used in production yet.
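
To make that concrete, here's a minimal userspace sketch (my own simplified
stand-ins, not the real struct mm_walk or mm/pagewalk.c): a walker of this
shape only calls a callback when its pointer is non-NULL, so an unassigned
pud_entry that happens to hold non-zero stack garbage looks like a valid
callback and gets called.

#include <stdio.h>

/* Simplified stand-in for struct mm_walk: a table of optional callbacks. */
struct walk_ops {
	int (*pmd_entry)(unsigned long addr);
	int (*pud_entry)(unsigned long addr);
};

static int collect_pmd(unsigned long addr)
{
	printf("pmd_entry called for %#lx\n", addr);
	return 0;
}

/* Stand-in for the walker: a callback is invoked only if it is non-NULL. */
static void walk_one(const struct walk_ops *ops, unsigned long addr)
{
	if (ops->pmd_entry)
		ops->pmd_entry(addr);
	if (ops->pud_entry)		/* stack garbage looks "set" and gets called */
		ops->pud_entry(addr);
}

int main(void)
{
	/*
	 * Field-by-field setup of an automatic variable, like the code the
	 * patch removes: pud_entry is never assigned, so it holds whatever
	 * was on the stack. walk_one(&risky, 0x1000) would be undefined
	 * behaviour if that garbage is non-zero.
	 */
	struct walk_ops risky;
	risky.pmd_entry = collect_pmd;
	(void)&risky;			/* silence "set but not used" warnings */

	/* Designated initializer, like the patch: pud_entry is NULL. */
	struct walk_ops safe = { .pmd_entry = collect_pmd };
	walk_one(&safe, 0x1000);	/* well defined: only pmd_entry runs */

	return 0;
}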


---
  mm/migrate.c | 17 +++++++----------
  1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 515718392b24..a42858d8e00b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2340,16 +2340,13 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
  static void migrate_vma_collect(struct migrate_vma *migrate)
  {
        struct mmu_notifier_range range;
-       struct mm_walk mm_walk;
-
-       mm_walk.pmd_entry = migrate_vma_collect_pmd;
-       mm_walk.pte_entry = NULL;
-       mm_walk.pte_hole = migrate_vma_collect_hole;
-       mm_walk.hugetlb_entry = NULL;
-       mm_walk.test_walk = NULL;
-       mm_walk.vma = migrate->vma;
-       mm_walk.mm = migrate->vma->vm_mm;
-       mm_walk.private = migrate;
+       struct mm_walk mm_walk = {
+               .pmd_entry = migrate_vma_collect_pmd,
+               .pte_hole = migrate_vma_collect_hole,
+               .vma = migrate->vma,
+               .mm = migrate->vma->vm_mm,
+               .private = migrate,
+       };

        mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm_walk.mm,
                                migrate->start,


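For reference, the language guarantee the new initializer relies on: members
omitted from a designated initializer are zero-initialized, so the unnamed
function pointers end up NULL. A standalone sketch with a made-up stand-in
struct (not the real struct mm_walk):

#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct mm_walk. */
struct walk_ops {
	int (*pmd_entry)(unsigned long addr);
	int (*pud_entry)(unsigned long addr);
	int (*pte_hole)(unsigned long addr);
	void *private;
};

static int dummy_pmd(unsigned long addr) { (void)addr; return 0; }

int main(void)
{
	/*
	 * Designated initializer: every member not named (pud_entry,
	 * pte_hole, private) is zero-initialized, i.e. a null pointer.
	 */
	struct walk_ops ops = {
		.pmd_entry = dummy_pmd,
	};

	assert(ops.pud_entry == NULL);
	assert(ops.pte_hole == NULL);
	assert(ops.private == NULL);
	return 0;
}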