On Wed, Apr 19, 2017 at 11:07:45PM +0200, Adam Borowski wrote:
> Too many people come complaining about losing their data -- and indeed,
> there's no warning outside a wiki and the mailing list tribal knowledge.
> Message severity chosen for consistency with XFS -- "alert" makes dmesg
> produce nic
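For context, the "alert" severity mentioned above is the kernel's KERN_ALERT
level. A minimal sketch of emitting a message at that level follows; the
helper name and the wording are illustrative, not the patch's actual code.

#include <linux/printk.h>

/*
 * Sketch only: a message printed at alert level stands out in dmesg,
 * matching the severity XFS uses for similar warnings.
 */
static void warn_experimental_feature(void)
{
	pr_alert("btrfs: this mode is not fully supported, keep backups\n");
}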
Free the allocated memory and close the dir before exit.
Signed-off-by: Lu Fengqi
---
tests/fssum.c | 35 ++++++++++++++++++++++++++++++-----
1 file changed, 30 insertions(+), 5 deletions(-)
diff --git a/tests/fssum.c b/tests/fssum.c
index 83bd4106..8be44547 100644
--- a/tests/fssum.c
+++ b/tests/fss
At 04/26/2017 03:25 AM, Stefan Priebe - Profihost AG wrote:
Hello Qu,
still no one on this one? Or is this one solved in another way in 4.10 or
4.11, or is compression just experimental? I haven't seen a note on this.
Still pushing the in-band dedupe patchset.
I'll re-push them in s
At 04/26/2017 01:58 AM, Goffredo Baroncelli wrote:
Hi Qu,
I tested these two patches on top of 4.10.12; however, when I corrupt disk1, it
seems that BTRFS is still unable to rebuild parity.
Because in the past the V4 patch set was composed of 5 patches and this one
(V5) is composed of only 2 patches, are these 2 sufficient to solve all known
On Tue, 25 Apr 2017 00:02:13 -0400,
"J. Hart" wrote:
> I have a remote machine with a filesystem for which I periodically
> take incremental snapshots for historical reasons. These snapshots
> are stored in an archival filesystem tree on a file server. Older
> snapshots are removed and newer ones added on a rotatio
Hello Qu,
still no one on this one? Or is this one solved in another way in 4.10 or
4.11, or is compression just experimental? I haven't seen a note on this.
Thanks,
Stefan
On 27.02.2017 at 14:43, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> can anybody please comment on that one? Josef? Chris? I
Remove NULL test on kmap()
Signed-off-by: Fabian Frederick
---
fs/btrfs/check-integrity.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
index ab14c2e..496eb00 100644
--- a/fs/btrfs/check-integrity.c
+++ b/fs/btrfs/check-integrity.c
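For context, kmap() on a valid page always returns a mapping, so a NULL test
on its return value is dead code; a minimal before/after sketch (the
surrounding function is illustrative, not the actual check-integrity.c code):

#include <linux/highmem.h>

/*
 * Sketch: kmap() cannot fail here, so the error branch the patch
 * removes could never be taken.
 */
static int example_checksum_page(struct page *page)
{
	char *addr = kmap(page);

	/* if (!addr) return -ENOMEM;   <-- the removed dead test */

	/* ... compute checksum over addr ... */
	kunmap(page);
	return 0;
}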
Hi Qu,
I tested these two patches on top of 4.10.12; however, when I corrupt disk1, it
seems that BTRFS is still unable to rebuild parity.
Because in the past the V4 patch set was composed of 5 patches and this one
(V5) is composed of only 2 patches, are these 2 sufficient to solve all known
b
Hi,
I've been trying to run btrfs as my primary work filesystem for about 3-4
months now on Fedora 25 systems. I ran into filesystem corruptions a few times.
At least one I attributed to a damaged disk, but the last one is with a brand
new 3T disk that reports no SMART errors. Worse yet, in at
On Tue, 2017-04-25 at 13:19 +0200, Jan Kara wrote:
> On Tue 25-04-17 06:35:13, Jeff Layton wrote:
> > On Tue, 2017-04-25 at 10:17 +0200, Jan Kara wrote:
> > > On Mon 24-04-17 13:14:36, Jeff Layton wrote:
> > > > On Mon, 2017-04-24 at 18:04 +0200, Jan Kara wrote:
> > > > > On Mon 24-04-17 09:22:49,
On 04/25/2017 10:21 AM, David Sterba wrote:
On Wed, Apr 19, 2017 at 12:51:03PM +0200, David Sterba wrote:
Hi,
a single-patch pull request: qgroup use can frequently trigger an underflow
warning. The warning is for debugging and should not be in the final release
of 4.11, as we won't be able to fix it.
On Wed, Apr 19, 2017 at 12:51:03PM +0200, David Sterba wrote:
> Hi,
>
> a single-patch pull request: qgroup use can frequently trigger an underflow
> warning. The warning is for debugging and should not be in the final release
> of 4.11, as we won't be able to fix it.
Ping?
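The underflow being warned about has the classic unsigned-counter shape; a
minimal sketch of the pattern (the struct and field names are assumptions,
not btrfs's actual qgroup code):

#include <linux/bug.h>
#include <linux/types.h>

/* Illustrative counter, not btrfs's real qgroup structure. */
struct example_qgroup {
	u64 referenced;		/* bytes currently accounted */
};

/*
 * Sketch: releasing more bytes than are accounted would wrap the u64
 * below zero, so warn (for debugging) and clamp instead.
 */
static void example_qgroup_release(struct example_qgroup *qg, u64 bytes)
{
	if (WARN_ON(qg->referenced < bytes))
		qg->referenced = 0;	/* clamp on underflow */
	else
		qg->referenced -= bytes;
}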
On 25/04/17 05:02, J. Hart wrote:
> I have a remote machine with a filesystem for which I periodically take
> incremental snapshots for historical reasons. These snapshots are
> stored in an archival filesystem tree on a file server. Older snapshots
> are removed and newer ones added on a rotatio
On Tue 25-04-17 06:35:13, Jeff Layton wrote:
> On Tue, 2017-04-25 at 10:17 +0200, Jan Kara wrote:
> > On Mon 24-04-17 13:14:36, Jeff Layton wrote:
> > > On Mon, 2017-04-24 at 18:04 +0200, Jan Kara wrote:
> > > > On Mon 24-04-17 09:22:49, Jeff Layton wrote:
> > > > > This ensures that we see errors
On Tue, 2017-04-25 at 10:17 +0200, Jan Kara wrote:
> On Mon 24-04-17 13:14:36, Jeff Layton wrote:
> > On Mon, 2017-04-24 at 18:04 +0200, Jan Kara wrote:
> > > On Mon 24-04-17 09:22:49, Jeff Layton wrote:
> > > > This ensures that we see errors on fsync when writeback fails.
> > > >
> > > > Signed-
On Mon 24-04-17 13:14:36, Jeff Layton wrote:
> On Mon, 2017-04-24 at 18:04 +0200, Jan Kara wrote:
> > On Mon 24-04-17 09:22:49, Jeff Layton wrote:
> > > This ensures that we see errors on fsync when writeback fails.
> > >
> > > Signed-off-by: Jeff Layton
> >
> > Hum, but do we really want to clo
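The change reviewed in this thread is about keeping a writeback error
visible at fsync time. As a hedged sketch of the general shape (the fsync
callback is illustrative; filemap_write_and_wait_range() is the real kernel
helper that returns errors recorded against the mapping):

#include <linux/fs.h>

/*
 * Sketch: an fsync implementation that reports writeback errors.
 * filemap_write_and_wait_range() both starts writeback and returns
 * any error recorded against the mapping, which is the behaviour
 * the thread is discussing.
 */
static int example_fsync(struct file *file, loff_t start, loff_t end,
			 int datasync)
{
	struct inode *inode = file_inode(file);
	int ret;

	ret = filemap_write_and_wait_range(inode->i_mapping, start, end);
	/* ... flush metadata / journal here ... */
	return ret;
}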
My question is, what does it do then when a new modification-write comes
in to the compressed no-cow file, and the modification isn't as
compressible as the data it replaced?
If any extent that we attempt to compress is found not to be compressible,
then the inode is marked as no-compress. Then i
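That heuristic has a simple shape; a minimal sketch (the flag name and the
decision rule are illustrative, not btrfs's actual implementation):

#include <stdbool.h>
#include <stddef.h>

#define INODE_NOCOMPRESS 0x1	/* illustrative flag, not the real one */

struct example_inode {
	unsigned int flags;
};

/*
 * Sketch: if a compression attempt fails to shrink the extent,
 * remember that on the inode so later writes skip the attempt.
 */
static bool try_compress_extent(struct example_inode *inode,
				size_t raw_len, size_t compressed_len)
{
	if (inode->flags & INODE_NOCOMPRESS)
		return false;		/* give up early on this inode */

	if (compressed_len >= raw_len) {
		inode->flags |= INODE_NOCOMPRESS;
		return false;		/* store uncompressed */
	}
	return true;			/* use the compressed extent */
}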
David,
Based on the comments received, I have to pull back patches 1/7 to 3/7,
and have instead replaced them with the patch below.
[patch] btrfs: add framework to handle device flush error as a volume
Thanks, Anand
On 04/19/2017 12:29 PM, Anand Jain wrote:
On 04/18/2017 09:54 PM,
This adds comments to the flush error handling part of the code, and aims
to keep the same logic within a framework that can be used to handle the
errors at the volume level.
Signed-off-by: Anand Jain
---
fs/btrfs/disk-io.c | 58 ++
fs/btrf
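The idea of handling flush errors at the volume level rather than per
device can be sketched roughly as below; the names and the tolerance rule
are assumptions, not the patch's actual code:

#include <stdbool.h>

/* Illustrative types, not btrfs's real structures. */
struct example_device {
	bool flush_failed;	/* set when the flush/barrier bio errored */
};

struct example_volume {
	struct example_device *devices;
	int num_devices;
	int max_tolerated_failures;	/* from the RAID profile */
};

/*
 * Sketch: instead of aborting on the first device flush error, count
 * the failures and decide at the volume level whether the redundancy
 * profile can tolerate them.
 */
static bool volume_flush_ok(const struct example_volume *vol)
{
	int failed = 0;

	for (int i = 0; i < vol->num_devices; i++)
		if (vol->devices[i].flush_failed)
			failed++;

	return failed <= vol->max_tolerated_failures;
}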
For the fuzzed image bko-156811-bad-parent-ref-qgroup-verify.raw, it causes
qgroup to report -ENOMEM.
But in fact, the image is heavily damaged, so there is no valid root item
for the extent tree.
A normal extent tree key in the root tree should be (EXTENT_TREE ROOT_ITEM 0),
while in that fuzzed image, we go
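The key check being described can be sketched as follows; the structures
are simplified stand-ins for btrfs's real key and root lookup code:

#include <stdbool.h>
#include <stdint.h>

#define EXTENT_TREE_OBJECTID	2	/* BTRFS_EXTENT_TREE_OBJECTID */
#define ROOT_ITEM_KEY		132	/* BTRFS_ROOT_ITEM_KEY */

struct example_key {
	uint64_t objectid;
	uint8_t type;
	uint64_t offset;
};

/*
 * Sketch: a sane extent tree root item in the root tree must have the
 * key (EXTENT_TREE ROOT_ITEM 0); anything else means the image is
 * damaged and the lookup should fail cleanly rather than with -ENOMEM.
 */
static bool extent_root_key_valid(const struct example_key *key)
{
	return key->objectid == EXTENT_TREE_OBJECTID &&
	       key->type == ROOT_ITEM_KEY &&
	       key->offset == 0;
}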
When a 0-sized block group item is found, set_extent_bits() will not
actually set any bits, while set_state_private() still inserts the allocated
block group cache into the block group extent_io_tree.
So at close_ctree() time, we won't free the stored private block group cache,
since we can't find any bit se
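One natural fix shape is to reject the zero-sized item up front so the cache
is never inserted; a hedged sketch (the struct and helper names are
illustrative, not btrfs code):

#include <errno.h>
#include <stdint.h>

/* Illustrative stand-in for a block group item read from disk. */
struct example_block_group_item {
	uint64_t length;	/* size of the block group */
};

/*
 * Sketch: refuse a zero-length block group before any cache is
 * allocated or inserted, so nothing can leak at close_ctree() time.
 */
static int example_read_block_group(const struct example_block_group_item *bgi)
{
	if (bgi->length == 0)
		return -EINVAL;	/* corrupted item, skip it */

	/* ... allocate cache, set extent bits, insert into tree ... */
	return 0;
}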