Re: [PATCH 7/9] mm, hugetlb: add VM_NORESERVE check in vma_has_reserves()
On Tue, Jul 16, 2013 at 11:17:23AM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > On Mon, Jul 15, 2013 at 08:41:12PM +0530, Aneesh Kumar K.V wrote:
> >> Joonsoo Kim writes:
> >>
> >> > If we map a region with MAP_NORESERVE and MAP_SHARED,
> >> > the reserve count check is skipped, so we cannot be guaranteed
> >> > a huge page at fault time.
> >> > With the following example code, you can easily reproduce this
> >> > situation.
> >> >
> >> > Assume a 2MB huge page size, nr_hugepages = 100:
> >> >
> >> > fd = hugetlbfs_unlinked_fd();
> >> > if (fd < 0)
> >> >         return 1;
> >> >
> >> > size = 200 * MB;
> >> > flag = MAP_SHARED;
> >> > p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, fd, 0);
> >> > if (p == MAP_FAILED) {
> >> >         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
> >> >         return -1;
> >> > }
> >> >
> >> > size = 2 * MB;
> >> > flag = MAP_ANONYMOUS | MAP_SHARED | MAP_HUGETLB | MAP_NORESERVE;
> >> > p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, -1, 0);
> >> > if (p == MAP_FAILED) {
> >> >         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
> >> > }
> >> > p[0] = '0';
> >> > sleep(10);
> >> >
> >> > While sleep(10) is running, run 'cat /proc/meminfo' from another
> >> > process. You will see the problem described above.
> >> >
> >> > The solution is simple: check VM_NORESERVE in vma_has_reserves().
> >> > This prevents use of a pre-allocated huge page when the free count
> >> > is below the reserve count.
> >>
> >> You have a problem with this patch, which I guess you are fixing in
> >> patch 9. Consider two processes:
> >>
> >> a) MAP_SHARED on fd
> >> b) MAP_SHARED | MAP_NORESERVE on fd
> >>
> >> We should allow (b) to access the page even if VM_NORESERVE is set
> >> and we are out of reserve space.
> >
> > I can't get your point.
> > Please elaborate more on this.
>
> One process mmaps with MAP_SHARED and another one with MAP_SHARED |
> MAP_NORESERVE. Now the first process will result in reserving pages
> from the hugetlb pool.
> Now if the second process tries to dequeue a huge page and we don't
> have free space, we will fail, because vma_has_reserves() will now
> return zero (VM_NORESERVE is set) and we can have
> (h->free_huge_pages - h->resv_huge_pages) == 0.

I think this behavior is correct, because a user who maps with
VM_NORESERVE should not assume the allocation will always succeed.
With patch 9, it is guaranteed to succeed, but I consider that a
side effect.

> The below hunk in your patch 9 handles that:
>
> +	if (vma->vm_flags & VM_NORESERVE) {
> +		/*
> +		 * This address is already reserved by another process
> +		 * (chg == 0), so we should decrement the reserve count.
> +		 * Without decrementing, the reserve count remains after
> +		 * releasing the inode, because this allocated page will go
> +		 * into the page cache and is regarded as coming from the
> +		 * reserved pool in the release step. Currently, we don't
> +		 * have any other solution to deal with this situation
> +		 * properly, so add a work-around here.
> +		 */
> +		if (vma->vm_flags & VM_MAYSHARE && chg == 0)
> +			return 1;
> +		else
> +			return 0;
> +	}
>
> so maybe both of these should be folded?

I think these patches should not be folded, because they handle two
separate issues. The reserve count mismatch issue mentioned in patch 9
is not introduced by patch 7.

Thanks.

> -aneesh
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majord...@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: em...@kvack.org

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH 7/9] mm, hugetlb: add VM_NORESERVE check in vma_has_reserves()
Joonsoo Kim writes:

> On Mon, Jul 15, 2013 at 08:41:12PM +0530, Aneesh Kumar K.V wrote:
>> Joonsoo Kim writes:
>>
>> > If we map a region with MAP_NORESERVE and MAP_SHARED,
>> > the reserve count check is skipped, so we cannot be guaranteed
>> > a huge page at fault time.
>> > With the following example code, you can easily reproduce this
>> > situation.
>> >
>> > Assume a 2MB huge page size, nr_hugepages = 100:
>> >
>> > fd = hugetlbfs_unlinked_fd();
>> > if (fd < 0)
>> >         return 1;
>> >
>> > size = 200 * MB;
>> > flag = MAP_SHARED;
>> > p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, fd, 0);
>> > if (p == MAP_FAILED) {
>> >         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
>> >         return -1;
>> > }
>> >
>> > size = 2 * MB;
>> > flag = MAP_ANONYMOUS | MAP_SHARED | MAP_HUGETLB | MAP_NORESERVE;
>> > p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, -1, 0);
>> > if (p == MAP_FAILED) {
>> >         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
>> > }
>> > p[0] = '0';
>> > sleep(10);
>> >
>> > While sleep(10) is running, run 'cat /proc/meminfo' from another
>> > process. You will see the problem described above.
>> >
>> > The solution is simple: check VM_NORESERVE in vma_has_reserves().
>> > This prevents use of a pre-allocated huge page when the free count
>> > is below the reserve count.
>>
>> You have a problem with this patch, which I guess you are fixing in
>> patch 9. Consider two processes:
>>
>> a) MAP_SHARED on fd
>> b) MAP_SHARED | MAP_NORESERVE on fd
>>
>> We should allow (b) to access the page even if VM_NORESERVE is set
>> and we are out of reserve space.
>
> I can't get your point.
> Please elaborate more on this.

One process mmaps with MAP_SHARED and another one with MAP_SHARED |
MAP_NORESERVE. Now the first process will result in reserving pages
from the hugetlb pool.
Now if the second process tries to dequeue a huge page and we don't have
free space, we will fail, because vma_has_reserves() will now return
zero (VM_NORESERVE is set) and we can have
(h->free_huge_pages - h->resv_huge_pages) == 0.

The below hunk in your patch 9 handles that:

+	if (vma->vm_flags & VM_NORESERVE) {
+		/*
+		 * This address is already reserved by another process
+		 * (chg == 0), so we should decrement the reserve count.
+		 * Without decrementing, the reserve count remains after
+		 * releasing the inode, because this allocated page will go
+		 * into the page cache and is regarded as coming from the
+		 * reserved pool in the release step. Currently, we don't
+		 * have any other solution to deal with this situation
+		 * properly, so add a work-around here.
+		 */
+		if (vma->vm_flags & VM_MAYSHARE && chg == 0)
+			return 1;
+		else
+			return 0;
+	}

so maybe both of these should be folded?

-aneesh
Re: [PATCH 7/9] mm, hugetlb: add VM_NORESERVE check in vma_has_reserves()
On Mon, Jul 15, 2013 at 08:41:12PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > If we map a region with MAP_NORESERVE and MAP_SHARED,
> > the reserve count check is skipped, so we cannot be guaranteed
> > a huge page at fault time.
> > With the following example code, you can easily reproduce this
> > situation.
> >
> > Assume a 2MB huge page size, nr_hugepages = 100:
> >
> > fd = hugetlbfs_unlinked_fd();
> > if (fd < 0)
> >         return 1;
> >
> > size = 200 * MB;
> > flag = MAP_SHARED;
> > p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, fd, 0);
> > if (p == MAP_FAILED) {
> >         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
> >         return -1;
> > }
> >
> > size = 2 * MB;
> > flag = MAP_ANONYMOUS | MAP_SHARED | MAP_HUGETLB | MAP_NORESERVE;
> > p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, -1, 0);
> > if (p == MAP_FAILED) {
> >         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
> > }
> > p[0] = '0';
> > sleep(10);
> >
> > While sleep(10) is running, run 'cat /proc/meminfo' from another
> > process. You will see the problem described above.
> >
> > The solution is simple: check VM_NORESERVE in vma_has_reserves().
> > This prevents use of a pre-allocated huge page when the free count
> > is below the reserve count.
>
> You have a problem with this patch, which I guess you are fixing in
> patch 9. Consider two processes:
>
> a) MAP_SHARED on fd
> b) MAP_SHARED | MAP_NORESERVE on fd
>
> We should allow (b) to access the page even if VM_NORESERVE is set
> and we are out of reserve space.

I can't get your point.
Please elaborate more on this.

Thanks.

> so maybe you should rearrange the patch?
>
> > Signed-off-by: Joonsoo Kim
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 6c1eb9b..f6a7a4e 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -464,6 +464,8 @@ void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
> >  /* Returns true if the VMA has associated reserve pages */
> >  static int vma_has_reserves(struct vm_area_struct *vma)
> >  {
> > +	if (vma->vm_flags & VM_NORESERVE)
> > +		return 0;
> >  	if (vma->vm_flags & VM_MAYSHARE)
> >  		return 1;
> >  	if (is_vma_resv_set(vma, HPAGE_RESV_OWNER))
> > --
> > 1.7.9.5
Re: [PATCH 7/9] mm, hugetlb: add VM_NORESERVE check in vma_has_reserves()
Joonsoo Kim writes:

> If we map a region with MAP_NORESERVE and MAP_SHARED,
> the reserve count check is skipped, so we cannot be guaranteed
> a huge page at fault time.
> With the following example code, you can easily reproduce this
> situation.
>
> Assume a 2MB huge page size, nr_hugepages = 100:
>
> fd = hugetlbfs_unlinked_fd();
> if (fd < 0)
>         return 1;
>
> size = 200 * MB;
> flag = MAP_SHARED;
> p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, fd, 0);
> if (p == MAP_FAILED) {
>         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
>         return -1;
> }
>
> size = 2 * MB;
> flag = MAP_ANONYMOUS | MAP_SHARED | MAP_HUGETLB | MAP_NORESERVE;
> p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, -1, 0);
> if (p == MAP_FAILED) {
>         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
> }
> p[0] = '0';
> sleep(10);
>
> While sleep(10) is running, run 'cat /proc/meminfo' from another
> process. You will see the problem described above.
>
> The solution is simple: check VM_NORESERVE in vma_has_reserves().
> This prevents use of a pre-allocated huge page when the free count
> is below the reserve count.

You have a problem with this patch, which I guess you are fixing in
patch 9. Consider two processes:

a) MAP_SHARED on fd
b) MAP_SHARED | MAP_NORESERVE on fd

We should allow (b) to access the page even if VM_NORESERVE is set
and we are out of reserve space.

so maybe you should rearrange the patch?

> Signed-off-by: Joonsoo Kim
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6c1eb9b..f6a7a4e 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -464,6 +464,8 @@ void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
>  /* Returns true if the VMA has associated reserve pages */
>  static int vma_has_reserves(struct vm_area_struct *vma)
>  {
> +	if (vma->vm_flags & VM_NORESERVE)
> +		return 0;
>  	if (vma->vm_flags & VM_MAYSHARE)
>  		return 1;
>  	if (is_vma_resv_set(vma, HPAGE_RESV_OWNER))
> --
> 1.7.9.5
Re: [PATCH 7/9] mm, hugetlb: add VM_NORESERVE check in vma_has_reserves()
Joonsoo Kim writes:

> If we map a region with MAP_NORESERVE and MAP_SHARED,
> the reserve count check is skipped, so we cannot be guaranteed
> a huge page at fault time.
> With the following example code, you can easily reproduce this
> situation.
>
> Assume a 2MB huge page size, nr_hugepages = 100:
>
> fd = hugetlbfs_unlinked_fd();
> if (fd < 0)
>         return 1;
>
> size = 200 * MB;
> flag = MAP_SHARED;
> p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, fd, 0);
> if (p == MAP_FAILED) {
>         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
>         return -1;
> }
>
> size = 2 * MB;
> flag = MAP_ANONYMOUS | MAP_SHARED | MAP_HUGETLB | MAP_NORESERVE;
> p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, -1, 0);
> if (p == MAP_FAILED) {
>         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
> }
> p[0] = '0';
> sleep(10);
>
> While sleep(10) is running, run 'cat /proc/meminfo' from another
> process. You will see the problem described above.
>
> The solution is simple: check VM_NORESERVE in vma_has_reserves().
> This prevents use of a pre-allocated huge page when the free count
> is below the reserve count.
>
> Signed-off-by: Joonsoo Kim

Reviewed-by: Aneesh Kumar K.V

Maybe it is better to say: "a non-reserved shared mapping should not
eat into reserve space, so return an error when we don't find enough
free space."

> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6c1eb9b..f6a7a4e 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -464,6 +464,8 @@ void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
>  /* Returns true if the VMA has associated reserve pages */
>  static int vma_has_reserves(struct vm_area_struct *vma)
>  {
> +	if (vma->vm_flags & VM_NORESERVE)
> +		return 0;
>  	if (vma->vm_flags & VM_MAYSHARE)
>  		return 1;
>  	if (is_vma_resv_set(vma, HPAGE_RESV_OWNER))
> --
> 1.7.9.5
[PATCH 7/9] mm, hugetlb: add VM_NORESERVE check in vma_has_reserves()
If we map a region with MAP_NORESERVE and MAP_SHARED, the reserve count
check is skipped, so we cannot be guaranteed a huge page at fault time.
With the following example code, you can easily reproduce this
situation.

Assume a 2MB huge page size, nr_hugepages = 100:

fd = hugetlbfs_unlinked_fd();
if (fd < 0)
        return 1;

size = 200 * MB;
flag = MAP_SHARED;
p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, fd, 0);
if (p == MAP_FAILED) {
        fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
        return -1;
}

size = 2 * MB;
flag = MAP_ANONYMOUS | MAP_SHARED | MAP_HUGETLB | MAP_NORESERVE;
p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, -1, 0);
if (p == MAP_FAILED) {
        fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
}
p[0] = '0';
sleep(10);

While sleep(10) is running, run 'cat /proc/meminfo' from another
process. You will see the problem described above.

The solution is simple: check VM_NORESERVE in vma_has_reserves().
This prevents use of a pre-allocated huge page when the free count is
below the reserve count.

Signed-off-by: Joonsoo Kim

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6c1eb9b..f6a7a4e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -464,6 +464,8 @@ void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
 /* Returns true if the VMA has associated reserve pages */
 static int vma_has_reserves(struct vm_area_struct *vma)
 {
+	if (vma->vm_flags & VM_NORESERVE)
+		return 0;
 	if (vma->vm_flags & VM_MAYSHARE)
 		return 1;
 	if (is_vma_resv_set(vma, HPAGE_RESV_OWNER))
--
1.7.9.5