Re: [PATCH] tmpfs: don't undo fallocate past its last page
On Mon, May 16, 2016 at 4:59 AM, Vlastimil Babka wrote:
> On 05/08/2016 03:16 PM, Anthony Romano wrote:
>>
>> When fallocate is interrupted it will undo a range that extends one byte
>> past its range of allocated pages. This can corrupt an in-use page by
>> zeroing out its first byte. Instead, undo using the inclusive byte range.
>
> Huh, good catch. So why is shmem_undo_range() adding +1 to the value in the
> first place? The only other caller is shmem_truncate_range() and all *its*
> callers do subtract 1 to avoid the same issue. So a nicer fix would be to
> remove all this +1/-1 madness. Or is there some subtle corner case I'm
> missing?

Bumping this thread as I don't think this patch has gotten picked up.
And cc'ing folks from 1635f6a74152f1dcd1b888231609d64875f0a81a.

Also, resending because I forgot to remove the HTML mime-type to make
vger happy.

Thank you,
Brandon

>> Signed-off-by: Anthony Romano
>
> Looks like a stable candidate patch. Can you point out the commit that
> introduced the bug, for the Fixes: tag?
>
> Thanks,
> Vlastimil
>
>> ---
>>  mm/shmem.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 719bd6b..f0f9405 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -2238,7 +2238,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>>  		/* Remove the !PageUptodate pages we added */
>>  		shmem_undo_range(inode,
>>  				(loff_t)start << PAGE_SHIFT,
>> -				(loff_t)index << PAGE_SHIFT, true);
>> +				((loff_t)index << PAGE_SHIFT) - 1, true);
>>  		goto undone;
>>  	}
Re: [PATCH] tmpfs: don't undo fallocate past its last page
On 05/08/2016 03:16 PM, Anthony Romano wrote:
> When fallocate is interrupted it will undo a range that extends one byte
> past its range of allocated pages. This can corrupt an in-use page by
> zeroing out its first byte. Instead, undo using the inclusive byte range.

Huh, good catch. So why is shmem_undo_range() adding +1 to the value in the
first place? The only other caller is shmem_truncate_range() and all *its*
callers do subtract 1 to avoid the same issue. So a nicer fix would be to
remove all this +1/-1 madness. Or is there some subtle corner case I'm
missing?

> Signed-off-by: Anthony Romano

Looks like a stable candidate patch. Can you point out the commit that
introduced the bug, for the Fixes: tag?

Thanks,
Vlastimil

> ---
>  mm/shmem.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 719bd6b..f0f9405 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2238,7 +2238,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  		/* Remove the !PageUptodate pages we added */
>  		shmem_undo_range(inode,
>  				(loff_t)start << PAGE_SHIFT,
> -				(loff_t)index << PAGE_SHIFT, true);
> +				((loff_t)index << PAGE_SHIFT) - 1, true);
>  		goto undone;
>  	}
[PATCH] tmpfs: don't undo fallocate past its last page
When fallocate is interrupted it will undo a range that extends one byte
past its range of allocated pages. This can corrupt an in-use page by
zeroing out its first byte. Instead, undo using the inclusive byte range.

Signed-off-by: Anthony Romano
---
 mm/shmem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 719bd6b..f0f9405 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2238,7 +2238,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 		/* Remove the !PageUptodate pages we added */
 		shmem_undo_range(inode,
 				(loff_t)start << PAGE_SHIFT,
-				(loff_t)index << PAGE_SHIFT, true);
+				((loff_t)index << PAGE_SHIFT) - 1, true);
 		goto undone;
 	}
--
2.8.1