ted chunks. This is orthogonal;
my patch allows trimming free space inside allocated chunks (which,
somewhat accidentally, has worked for most btrfs filesystems for a
long time), even with filesystems like yours.
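For readers following along: btrfs hands out device space in chunks, so free space can live both in unallocated device regions and inside already-allocated chunks, and the two are trimmed separately. A minimal sketch of the second case, the one the patch addresses (the data layout and function are hypothetical illustrations, not the kernel code):

```python
# Illustrative model only: trimming free extents *inside* allocated
# chunks. Structures and names are hypothetical, not from fs/btrfs.

def trim_free_space_in_chunks(chunks, minlen=0):
    """Walk each allocated chunk and count the bytes that would be
    discarded by trimming only its free extents (skipping extents
    shorter than minlen, per FITRIM semantics)."""
    trimmed = 0
    for chunk in chunks:
        for start, length in chunk["free_extents"]:
            if length >= minlen:
                trimmed += length  # a real fs would issue a discard here
    return trimmed

chunks = [
    {"free_extents": [(0, 4096), (8192, 1024)]},
    {"free_extents": [(0, 65536)]},
]
print(trim_free_space_in_chunks(chunks))  # 70656
```

Trimming unallocated device space would be a separate walk over the device extents, which is the part this thread says broke after a device delete.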
Kind regards,
Lutz Euler
Hello,
happy new year to you, too!
[Martin Steigerwald:]
Happy new year!
Am Samstag, 3. Januar 2015, 16:30:51 schrieb Lutz Euler:
Commit 2cac13e41bf5b99ffc426bd28dfd2248df1dfa67, fix trim 0 bytes after
a device delete, said:
A user reported a bug of btrfs's trim, that is we will trim
to trim a filesystem only partly, due to the broken mapping
this often didn't work anyway.)
V2:
- Rebased onto 3.9. (still applies to and works with 3.19-rc2)
- Take range-minlen into account.
Reported-by: Lutz Euler lutz.eu...@freenet.de
Signed-off-by: Lutz Euler lutz.eu...@freenet.de
---
fs/btrfs
version as the old one did not apply
any more since about kernel 3.9, and had a flaw (range-minlen was
being ignored).
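To make the flaw concrete: FITRIM's range carries a minlen field telling the filesystem to skip free extents smaller than minlen, so ignoring it discards extents the caller asked to leave alone. A sketch of the intended semantics (the extent list and function are illustrative, not kernel code):

```python
# Illustrative sketch of range->minlen semantics; not kernel code.

FREE_EXTENTS = [4096, 512, 131072, 2048]  # free extent lengths, bytes

def trimmed_bytes(extents, minlen):
    """FITRIM semantics: only free extents of at least `minlen`
    bytes are discarded; smaller ones are skipped."""
    return sum(length for length in extents if length >= minlen)

print(trimmed_bytes(FREE_EXTENTS, minlen=0))     # 137728 (everything)
print(trimmed_bytes(FREE_EXTENTS, minlen=4096))  # 135168 (512 and 2048 skipped)
```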
I have been using this patch on my system for two years without
problems, with kernel versions ranging from 3.3 to 3.16.
Looking forward to a review,
Lutz Euler
Josef Bacik wrote:
On Mon, Aug 19, 2013 at 11:44:19PM +0200, Lutz Euler wrote:
Hi,
during a balance I got the BUG from the subject line, followed by
BUG: unable to handle kernel paging request at 0008a940.
This should be fixed in btrfs-next. Thanks,
Thanks for the info
Josef Bacik wrote:
Yup it's
Btrfs: change how we queue blocks for backref checking
Thanks, found it.
Greetings,
Lutz
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at
Hi,
during a balance I got the BUG from the subject line, followed by
BUG: unable to handle kernel paging request at 0008a940.
The machine needed to be rebooted afterwards; the filesystem was
successfully mounted and the balance resumed and finished successfully.
I then ran scrub which
Hi,
the issue I raised in this thread is still not fixed. So, to get things
going, I prepared a patch myself which I will send in a separate email.
The subject will be "really fix trim 0 bytes after a device delete".
I would be grateful if you could consider the patch -- with it applied
I finally
to only partly trim a filesystem, due to the broken mapping
this often didn't work anyway.)
Reported-by: Lutz Euler lutz.eu...@freenet.de
Signed-off-by: Lutz Euler lutz.eu...@freenet.de
---
fs/btrfs/extent-tree.c | 55 +--
1 files changed, 25 insertions
Hi,
the issue I raised in this thread is still not fixed. Please, could
someone look into it? A fix should be simple, especially as the issue is
easily reproducible. (In case the information in this thread so far is
not sufficient for that I will happily provide a recipe; just ask.)
Also, I
Hi,
I have seen that in the meantime a patch for this issue has found
its way into 3.3-rc5. Unfortunately with this kernel my filesystem still
says 0 bytes were trimmed, so the bug is not fixed for me.
Now that might just have been caused by the fact that the patch that
went into 3.3-rc5 (as
Hi,
Liu Bo wrote:
Actually I have no idea how to deal with this properly :(
Because btrfs supports multiple devices, we have to set the
filesystem logical range to [0, (u64)-1] to get things to work well,
while other filesystems' logical range is [0, device's total_bytes].
What's
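The distinction Liu Bo describes means a user-supplied trim range has to be intersected with whatever logical range the filesystem uses: [0, total_bytes] on a single-device filesystem, but the full [0, (u64)-1] space on btrfs. A hedged sketch of that intersection (function name and shape are assumptions for illustration):

```python
U64_MAX = 2**64 - 1  # (u64)-1, i.e. "the whole filesystem"

def clamp_trim_range(start, length, fs_logical_end):
    """Intersect a user-supplied [start, start+length) trim request
    with the filesystem's logical range [0, fs_logical_end).
    Returns (start, length) of the effective range, or None if the
    intersection is empty."""
    end = min(start + length, fs_logical_end)
    if start >= end:
        return None
    return (start, end - start)

# Single-device fs: logical range ends at total_bytes (10 GiB here).
print(clamp_trim_range(0, U64_MAX, 10 * 2**30))  # (0, 10737418240)
# btrfs-style logical range [0, (u64)-1]: nothing is clamped away.
print(clamp_trim_range(4096, 8192, U64_MAX))     # (4096, 8192)
```

With btrfs's convention, a range that looks valid against [0, (u64)-1] may still map to nothing on the actual devices, which is where the broken mapping discussed in this thread comes in.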
Hi,
tl;dr: btrfs_trim_fs, IMHO, needs more thorough surgery.
Thanks for providing the new patch. I think it will work in the case
that fstrim is called without specifying a range to trim (that is,
defaulting to the whole filesystem), but I didn't test that yet, sorry.
Instead, I have been
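The no-range default mentioned above works by having fstrim pass start=0, len=(u64)-1, minlen=0 to the kernel through the FITRIM ioctl, whose argument is `struct fstrim_range { __u64 start, len, minlen; }` from linux/fs.h. Actually issuing the ioctl needs root and a mounted filesystem, so this sketch only builds the 24-byte argument as userspace would:

```python
import struct

U64_MAX = 2**64 - 1

def fstrim_range_bytes(start=0, length=U64_MAX, minlen=0):
    """Pack a struct fstrim_range { __u64 start, len, minlen; }
    as the FITRIM ioctl expects (native byte order, 24 bytes)."""
    return struct.pack("=QQQ", start, length, minlen)

# fstrim's default: ask to trim the whole filesystem.
arg = fstrim_range_bytes()
print(len(arg))                    # 24
print(struct.unpack("=QQQ", arg))  # (0, 18446744073709551615, 0)
```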
Hi Liu,
thanks for looking into this!
You wrote:
Would you please test the following patch on your box?
thanks,
liubo
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 77ea23c..b6e2c92 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -7653,9
... maybe even the block group cache is nonfunctional then?
I am using a btrfs file system, mirrored data and metadata on two SSDs,
and recently tried for the first time to run fstrim on it. fstrim
unexpectedly said 0 bytes were trimmed. strace -T shows that it spends
only a few microseconds in
Hi,
I was rsyncing some 54 GB onto a newly created btrfs filesystem,
both metadata and data in RAID1 and using lzo compression, when
after transferring less than half of the amount rsync stopped
with no space left on device, while the destination filesystem
was less than half filled. I started