On Wed, Dec 30, 2015 at 09:15:44PM -0500, Sanidhya Solanki wrote:
> On Wed, 30 Dec 2015 18:18:26 +0100
> David Sterba wrote:
>
> > That's just the comment copied, the changelog does not explain why
> > it's ok to do just the run_xor there. It does not seem trivial to me.
> >
On Thu, Dec 31, 2015 at 08:46:36AM +0800, Qu Wenruo wrote:
> > Let me note that a good reputation is also built from patch reviews
> > (hint hint).
>
> I must admit I'm a bad reviewer.
> As when I review something, I always have an urge to rewrite part or all of
> the patch to follow my own ideas, even
On Wed, Dec 30, 2015 at 08:26:00PM +0100, Christoph Anton Mitterer wrote:
> On Wed, 2015-12-30 at 18:39 +0100, David Sterba wrote:
> > The closest would be to read the files and look for any reported
> > errors.
> Doesn't that fail for any multi-device setup, in which case btrfs reads
> the blocks
On Wed, Dec 30, 2015 at 04:21:47PM -0500, Sanidhya Solanki wrote:
> On Wed, 30 Dec 2015 17:17:22 +0100
> David Sterba wrote:
>
> > Let me note that a good reputation is also built from patch reviews
> > (hint hint).
>
> Unfortunately, not too many patches coming in for BTRFS
From: Filipe Manana
If we failed to create a hard link we were not always releasing the
transaction handle we got before, resulting in a memory leak and
preventing any other tasks from being able to commit the current
transaction.
Fix this by always releasing our
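The fix Filipe describes is the classic release-on-every-exit-path pattern. A minimal userspace sketch of it (the names here are stand-ins; the real kernel APIs are btrfs_start_transaction()/btrfs_end_transaction(), which take a root and an item count):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-ins for btrfs_start_transaction()/btrfs_end_transaction();
 * a counter tracks outstanding handles so a leak is observable. */
static int live_handles;

static void *start_transaction(void)
{
    live_handles++;
    return malloc(1);
}

static void end_transaction(void *trans)
{
    live_handles--;
    free(trans);
}

/* The bug: an early "return ret;" on the failure path skipped
 * end_transaction(), leaking the handle and preventing other tasks
 * from committing the transaction. The fix routes every exit path
 * through the cleanup label. */
static int do_link(int should_fail)
{
    void *trans = start_transaction();
    int ret = 0;

    if (should_fail) {
        ret = -1;
        goto out;            /* error path still reaches cleanup */
    }
    /* ... the actual link work would happen here ... */
out:
    end_transaction(trans);  /* released on success and on failure */
    return ret;
}
```

The same shape applies to any handle the function acquires before its first possible failure point.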
On 05/01/2016 14:04, David Goodwin wrote:
> Using btrfs progs 4.3.1 on a Vanilla kernel.org 4.1.15 kernel.
>
> time btrfs device delete /dev/xvdh /backups
>
> real 13936m56.796s
> user 0m0.000s
> sys  1351m48.280s
>
>
> (which is about 9 days).
>
> Where :
>
> /dev/xvdh was 120gb in
Hello Alphazo,
I am a mere btrfs user, but given the discussions I regularly see here
about difficulties with degraded filesystems I wouldn't rely on this
(yet?) as a regular work strategy, even if it's supposed to work.
If you're familiar with git, perhaps git-annex could be an alternative.
Hello all and excuse me if this is a silly question. I looked around in
the wiki and list archives but couldn't find any in-depth discussion
about this:
I just realized that, since raid1 in btrfs is special (meaning only two
copies in different devices), the effect in terms of resilience
On 2016-01-05 08:04, David Goodwin wrote:
Using btrfs progs 4.3.1 on a Vanilla kernel.org 4.1.15 kernel.
time btrfs device delete /dev/xvdh /backups
real 13936m56.796s
user 0m0.000s
sys  1351m48.280s
(which is about 9 days).
Where :
/dev/xvdh was 120gb in size.
OK, based on the
On Wed, Dec 16, 2015 at 11:57:38AM +0900, Tsutomu Itoh wrote:
> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> index 974be09..dcc1f15 100644
> --- a/fs/btrfs/disk-io.c
> +++ b/fs/btrfs/disk-io.c
> @@ -2709,7 +2709,7 @@ int open_ctree(struct super_block *sb,
> * In the long term,
On Thu, Dec 03, 2015 at 01:45:44PM -0500, Neil Horman wrote:
> Noticed this while doing some snapshots in a chroot environment
>
> btrfs receive can set root_path to either realmnt, which is passed in from the
> command line, or to a heap allocated via find_mount_root in do_receive. We
> should
On Mon, Dec 21, 2015 at 11:50:23PM +0800, Geliang Tang wrote:
> Use list_for_each_entry*() to simplify the code.
>
> Signed-off-by: Geliang Tang
Reviewed-by: David Sterba
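For readers unfamiliar with this cleanup: list_for_each() walks raw list_head nodes and the caller must recover the containing struct with container_of() on every iteration, while list_for_each_entry() folds that conversion into the loop itself. A self-contained userspace miniature of the idiom (simplified to a singly linked, NULL-terminated list; the kernel's list_head is circular and doubly linked, and the kernel macro uses typeof()):

```c
#include <assert.h>
#include <stddef.h>

/* Intrusive list: the node lives inside the payload struct. */
struct list_head { struct list_head *next; };

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Old style: iterate raw nodes, recover the container by hand. */
#define list_for_each(pos, head) \
    for ((pos) = (head)->next; (pos); (pos) = (pos)->next)

struct item {
    int value;
    struct list_head node;
};

static int sum_values(struct list_head *head)
{
    struct list_head *pos;
    struct item *it;
    int sum = 0;

    list_for_each(pos, head) {
        /* This conversion is the boilerplate that
         * list_for_each_entry() eliminates. */
        it = container_of(pos, struct item, node);
        sum += it->value;
    }
    return sum;
}

/* Builds the list 1 -> 2 -> 3 and sums it, as a quick self-check. */
static int demo_sum(void)
{
    struct item a = {1, {NULL}}, b = {2, {NULL}}, c = {3, {NULL}};
    struct list_head head = {&a.node};

    a.node.next = &b.node;
    b.node.next = &c.node;
    return sum_values(&head);
}
```

The patches under review replace the manual container_of() variant with the entry-style macro, which is why they are simple, mechanical conversions.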
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the
On Fri, Dec 18, 2015 at 10:17:00PM +0800, Geliang Tang wrote:
> Use list_for_each_entry_safe() instead of list_for_each_safe() to
> simplify the code.
>
> Signed-off-by: Geliang Tang
Reviewed-by: David Sterba
On 5 January 2016 at 01:57, Qu Wenruo wrote:
>>
>> Data, single: total=106.79GiB, used=82.01GiB
>> System, single: total=4.00MiB, used=16.00KiB
>> Metadata, single: total=2.01GiB, used=1.51GiB
>> GlobalReserve, single: total=512.00MiB, used=0.00B
>
>
> That's the btrfs fi
On Fri, Dec 18, 2015 at 10:16:59PM +0800, Geliang Tang wrote:
> Use list_for_each_entry*() instead of list_for_each*() to simplify
> the code.
>
> Signed-off-by: Geliang Tang
Reviewed-by: David Sterba
Chris Murphy posted on Sun, 03 Jan 2016 14:33:40 -0700 as excerpted:
> kernel-4.4.0-0.rc6.git0.1.fc24.x86_64 btrfs-progs 4.3.1
>
> There was some copy pasting, hence /mnt/brick vs /mnt/brick2 confusion,
> but the volume was always cleanly mounted and umounted.
>
> The biggest problem I have
On Mon, Dec 07, 2015 at 10:26:05AM -0800, Liu Bo wrote:
> On Mon, Dec 07, 2015 at 03:37:43PM +0100, David Sterba wrote:
> > On Fri, Dec 04, 2015 at 09:58:04AM -0800, Liu Bo wrote:
> > > This disables repair process on ro cases as it can cause system
> > > to be unresponsive on the ASSERT() in
On Tue, 5 Jan 2016 10:22:36 +0100
David Sterba wrote:
> If the data are recovered, why is -EIO still returned?
In the other places in the file where the code appears, the submitted
patch is all that is required to do the xor. I think we also need to
include the following line:
On Tue, Jan 05, 2016 at 03:01:38PM +0100, David Sterba wrote:
> On Thu, Dec 03, 2015 at 01:45:44PM -0500, Neil Horman wrote:
> > Noticed this while doing some snapshots in a chroot environment
> >
> > btrfs receive can set root_path to either realmnt, which is passed in from
> > the
> > command
On Tue, Dec 08, 2015 at 09:25:03AM +, sam tygier wrote:
> Signed-off-by: Sam Tygier
>
> From: Sam Tygier
> Date: Sat, 3 Oct 2015 16:43:48 +0100
> Subject: [PATCH] Btrfs: Check metadata redundancy on balance
>
> When converting a filesystem via
Christoph Anton Mitterer posted on Sat, 02 Jan 2016 06:12:46 +0100 as
excerpted:
> On Fri, 2015-12-25 at 08:06 +, Duncan wrote:
>> I wasn't personally sure if 4.1 itself was affected or not, but the
>> wiki says don't use 4.1.1 as it's broken with this bug, with the
>> quick-fix in 4.1.2, so
On Tue, 2016-01-05 at 11:44 +0100, David Sterba wrote:
> We have a full 32 bit number space, so multiples of power of 2 are
> also
> possible if that makes sense.
Hmm that would make a maximum of 4GiB RAID chunks...
perhaps we should reserve some of the higher bits for a multiplier, in
case 4GiB
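One way such a multiplier split could look (a purely hypothetical encoding for illustration, not anything btrfs defines): reserve the top bits of the 32-bit field as a power-of-two shift applied to the remaining low bits, which lifts the representable range well past 4GiB while keeping the field 32 bits wide.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout: low 28 bits hold a base length in bytes,
 * high 4 bits hold a power-of-two shift (multiplier). */
#define SHIFT_BITS 4
#define SIZE_BITS  (32 - SHIFT_BITS)
#define SIZE_MASK  ((1u << SIZE_BITS) - 1)

/* Round-trips exactly for power-of-two lengths; low bits are
 * truncated otherwise, so callers would enforce alignment. */
static uint32_t encode_stripe_len(uint64_t bytes)
{
    uint32_t shift = 0;

    /* scale down until the value fits in the low 28 bits */
    while ((bytes >> shift) > SIZE_MASK)
        shift++;
    return ((uint32_t)shift << SIZE_BITS) | (uint32_t)(bytes >> shift);
}

static uint64_t decode_stripe_len(uint32_t field)
{
    return (uint64_t)(field & SIZE_MASK) << (field >> SIZE_BITS);
}
```

With 4 shift bits the ceiling moves from 4GiB to 2^28 bytes shifted by up to 15, at the cost of coarser granularity for the largest values.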
On Tue, 2016-01-05 at 15:34 +, Duncan wrote:
> >What exactly was that bug in 4.1.1 mkfs and how would one notice
> > that
> > one suffers from it?
> > I created a number of personal filesystems that I use
> > "productively" and
> > I'm not 100% sure during which version I've created them... :/
On Tuesday, 5 January 2016, 15:34:35 CET, Duncan wrote:
> Christoph Anton Mitterer posted on Sat, 02 Jan 2016 06:12:46 +0100 as
>
> excerpted:
> > On Fri, 2015-12-25 at 08:06 +, Duncan wrote:
> >> I wasn't personally sure if 4.1 itself was affected or not, but the
> >> wiki says don't use
Hello,
TL;DR ==
btrfs 3x500GB RAID 5 - One device failed. Added a new device (btrfs device add)
and tried to remove the failed device (btrfs device delete).
I tried to mount the array in degraded mode, but that didn't work either. After
multiple attempts (including adding back the failed
Rasmus Abrahamsen posted on Fri, 01 Jan 2016 21:20:13 +0100 as excerpted:
> I accidentically sent my messages directly to Duncan, I am copying them
> in here.
>
> Hello Duncan,
>
> Thank you for the amazing response. Wow, you are awesome.
Just a note to mention that real life (TM) got in the
On Tue, Jan 05, 2016 at 04:33:02PM +, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> If we failed to create a hard link we were not always releasing the
> transaction handle we got before, resulting in a memory leak and
> preventing any other tasks from being
Christoph Anton Mitterer posted on Mon, 04 Jan 2016 01:05:02 +0100 as
excerpted:
> On Sun, 2016-01-03 at 15:00 +, Duncan wrote:
>> But now that I think about it, balance does read the chunk in ordered
>> to rewrite its contents, and that read, like all reads, should normally
>> be checksum
From: Mike Christie
The following patches separate the operation (write, read, discard,
etc) from the flags in bi_rw/cmd_flags. This patch adds definitions
for request/bio operations, adds fields to the request/bio to set
them, and some temporary compat code so the
point, we abused them so much we just made cmd_flags
64 bits, so we could add more.
The following patches separate the operation (read, write, discard,
flush, etc) from cmd_flags/bi_rw.
This patchset was made against linux-next from today Jan 5 2016.
(git tag next-20160105).
v2.
1. Dropped arg
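The op/flags separation the series describes can be sketched in userspace like this (illustrative names only; the real kernel fields are bio->bi_op/bi_rw and request->op/cmd_flags, and the real REQ_OP_* values differ):

```c
#include <assert.h>
#include <stdint.h>

/* Exactly one operation per bio/request, held in its own field ... */
enum req_op { OP_READ, OP_WRITE, OP_DISCARD, OP_FLUSH };

/* ... while modifier flags remain a bitmap of zero or more bits. */
#define FLAG_SYNC (1u << 0)
#define FLAG_FUA  (1u << 1)

struct bio_like {
    enum req_op op;   /* no bit tests needed to recover the op */
    uint32_t flags;
};

/* With the op in its own field, data direction is a plain comparison
 * instead of masking a shared bitmap like the old bi_rw/cmd_flags. */
static int op_is_write(const struct bio_like *b)
{
    return b->op != OP_READ;
}

/* Quick self-check: direction comes from the op, regardless of flags. */
static int direction_demo(void)
{
    struct bio_like w = { OP_WRITE, FLAG_SYNC | FLAG_FUA };
    struct bio_like r = { OP_READ,  FLAG_SYNC };

    return op_is_write(&w) && !op_is_write(&r);
}
```

This is also why the cmd_flags field no longer needs to keep growing: operations stop competing with modifiers for bits in one shared word.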
On Sun, Jan 03, 2016 at 03:26:25AM +0100, Christoph Anton Mitterer wrote:
> On Sun, 2016-01-03 at 09:37 +0800, Qu Wenruo wrote:
> > And since you are making the stripe size configurable, then user is
> > responsible for any too large or too small stripe size setting.
> That pops up the questions,
From: Mike Christie
This patch has drbd set the bio bi_op to a REQ_OP, and rq_flag_bits
to bi_rw.
Lars and Philip, I might have split this patch up a little weird.
This patch handles setting up the bio, and then patch 30
From: Mike Christie
The bio bi_op and bi_rw are now set up, so there is no need
to pass around the rq_flag_bits bits too.
Signed-off-by: Mike Christie
---
fs/btrfs/compression.c | 9 -
fs/btrfs/disk-io.c | 30 --
From: Mike Christie
This patch converts the simple bi_rw use cases in the block,
drivers, mm and fs code to use bi_op for a REQ_OP and bi_rw
for rq_flag_bits.
These should be simple one liner cases, so I just did them
in one patch. The next patches handle the more
From: Mike Christie
This patch has the mpage.c code set the bio bi_op to a REQ_OP, and
rq_flag_bits to bi_rw.
I have run xfstest with xfs, but I am not sure
if I have stressed these code paths well.
Signed-off-by: Mike Christie
---
fs/mpage.c | 41
From: Mike Christie
This patch has btrfs set the bio bi_op to a REQ_OP, and rq_flag_bits
to bi_rw.
Signed-off-by: Mike Christie
---
fs/btrfs/check-integrity.c | 19 +--
fs/btrfs/compression.c | 4
fs/btrfs/disk-io.c |
From: Mike Christie
This patch has nilfs set the bio bi_op to a REQ_OP, and
rq_flag_bits to bi_rw.
This patch is compile tested only.
Signed-off-by: Mike Christie
---
fs/nilfs2/segbuf.c | 18 ++
1 file changed, 10 insertions(+), 8
From: Mike Christie
This patch has xfs set the bio bi_op to a REQ_OP, and
rq_flag_bits to bi_rw.
Note:
I have run xfs tests on these btrfs patches. There were some failures
with and without the patches. I have not had time to track down why
xfstest fails without the
From: Mike Christie
We no longer pass in a bitmap of rq_flag_bits bits
to __btrfs_map_block. It will always be a REQ_OP,
or the btrfs specific REQ_GET_READ_MIRRORS,
so this drops the bit tests.
Signed-off-by: Mike Christie
---
fs/btrfs/extent-tree.c |
From: Mike Christie
This patch has gfs2 set the bio bi_op to a REQ_OP, and
rq_flag_bits to bi_rw.
This patch is compile tested only.
v2:
Bob, I did not add your signed off, because there was
the gfs2_submit_bhs changes since last time you reviewed
it.
Signed-off-by: Mike
From: Mike Christie
This patch has hfsplus set the bio bi_op to a REQ_OP, and
rq_flag_bits to bi_rw.
This patch is compile tested only.
Signed-off-by: Mike Christie
---
fs/hfsplus/hfsplus_fs.h | 2 +-
fs/hfsplus/part_tbl.c | 5 +++--
From: Mike Christie
This patch has f2fs set the bio bi_op to a REQ_OP, and
rq_flag_bits to bi_rw.
This patch is compile tested only.
Signed-off-by: Mike Christie
---
fs/f2fs/checkpoint.c| 10 ++
fs/f2fs/data.c | 33
From: Mike Christie
This patch has the dio code set the bio bi_op to a REQ_OP.
It also begins to convert btrfs's dio_submit_t related code,
because of the submit_io callout use. In the btrfs_submit_direct
change, I OR'd the op and flag back together. It is only temporary.
From: Mike Christie
This has callers of submit_bio/submit_bio_wait set the bio->bi_rw
instead of passing it in. This makes the usage consistent with
generic_make_request and how we set the other bio fields.
Signed-off-by: Mike Christie
---
block/bio.c
From: Mike Christie
This patch has btrfs's submit_one_bio callers set
the bio->bi_op to a REQ_OP and the bi_rw to rq_flag_bits.
The next patches will continue to convert btrfs,
so submit_bio_hook and merge_bio_hook
related code will be modified to take only the bio. I did
From: Mike Christie
This has submit_bh users pass in the operation and flags separately,
so we can set up the bio->bi_op and bio->bi_rw flags.
Signed-off-by: Mike Christie
---
drivers/md/bitmap.c | 4 ++--
fs/btrfs/check-integrity.c | 24
From: Mike Christie
This has ll_rw_block users pass in the operation and flags separately,
so we can set up the bio->bi_op and bio->bi_rw flags.
Signed-off-by: Mike Christie
---
fs/buffer.c | 19 ++-
fs/ext4/inode.c
From: Mike Christie
The bio and request struct now store the operation in
bio->bi_op/request->op. This patch has blktrace not
check bi_rw/cmd_flags.
This patch is only compile tested.
Signed-off-by: Mike Christie
---
include/linux/blktrace_api.h |
From: Mike Christie
This patch has md/raid set the bio bi_op to a REQ_OP, and
rq_flag_bits to bi_rw.
This patch is compile tested only.
Signed-off-by: Mike Christie
---
drivers/md/bitmap.c | 2 +-
drivers/md/dm-raid.c | 5 +++--
From: Mike Christie
It looks like dm stats cares about the data direction
(READ vs WRITE) and does not need the bio/request flags.
Commands like REQ_FLUSH, REQ_DISCARD and REQ_WRITE_SAME
are currently always set with REQ_WRITE, so the extra check for
REQ_DISCARD in
From: Mike Christie
The block layer will set the correct READ/WRITE operation flags/fields
when creating a request, so there is no need for drivers to set the
REQ_WRITE flag.
This patch is compile tested only.
Signed-off-by: Mike Christie
---
From: Mike Christie
This patch converts the request related block layer code to set
request->op to a REQ_OP and cmd_flags to rq_flag_bits.
There is some tmp compat code when setting up cmd_flags so it
still carries both the op and flags. It will be removed in
later
From: Mike Christie
This patch has xen set the bio bi_op to a REQ_OP, and rq_flag_bits
to bi_rw.
This patch is compile tested only.
Signed-off-by: Mike Christie
---
drivers/block/xen-blkback/blkback.c | 29 +
1 file
From: Mike Christie
This patch has the block driver use the request->op for REQ_OP
operations and cmd_flags for rq_flag_bits.
I have only tested scsi and rbd.
Signed-off-by: Mike Christie
---
drivers/block/loop.c | 6 +++---
From: Mike Christie
This patch has the target modules set the bio bi_op to a REQ_OP, and
rq_flag_bits to bi_rw.
This patch is compile tested only.
Signed-off-by: Mike Christie
---
drivers/target/target_core_iblock.c | 38
From: Mike Christie
This patch has the pm swap code set the bio bi_op to a REQ_OP, and
rq_flag_bits to bi_rw.
This patch is compile tested only.
Signed-off-by: Mike Christie
---
kernel/power/swap.c | 31 +++
1 file
From: Mike Christie
This patch has bcache set the bio bi_op to a REQ_OP, and rq_flag_bits
to bi_rw.
This patch is compile tested only.
Signed-off-by: Mike Christie
---
drivers/md/bcache/btree.c | 2 ++
drivers/md/bcache/debug.c | 2 ++
From: Mike Christie
This patch has dm set the bio bi_op to a REQ_OP, and rq_flag_bits
to bi_rw.
I did some basic dm tests, but I think this patch should
be considered compile tested only. I have not tested all the
dm targets and I did not stress every code path I have
From: Mike Christie
This patch has ocfs2 set the bio bi_op to a REQ_OP, and
rq_flag_bits to bi_rw.
This patch is compile tested only.
Signed-off-by: Mike Christie
---
fs/ocfs2/cluster/heartbeat.c | 11 +++
1 file changed, 7 insertions(+), 4
On Tue, Jan 5, 2016 at 7:50 AM, Duncan <1i5t5.dun...@cox.net> wrote:
>
> If however you mounted it degraded,rw at some point, then I'd say the bug
> is in wetware, as in that case, based on my understanding, it's working
> as intended. I was inclined to believe that was what happened based on
>
Using btrfs progs 4.3.1 on a Vanilla kernel.org 4.1.15 kernel.
time btrfs device delete /dev/xvdh /backups
real 13936m56.796s
user 0m0.000s
sys  1351m48.280s
(which is about 9 days).
Where :
/dev/xvdh was 120gb in size.
/backups is a single / "raid 0" volume that now looks like
Chris Bainbridge wrote on 2016/01/05 13:41 +:
On 5 January 2016 at 01:57, Qu Wenruo wrote:
Data, single: total=106.79GiB, used=82.01GiB
System, single: total=4.00MiB, used=16.00KiB
Metadata, single: total=2.01GiB, used=1.51GiB
GlobalReserve, single:
On Wed, Jan 06, 2016 at 08:57:28AM +0800, Qu Wenruo wrote:
>
> Since you took the image of the corrupted fs, would you please try the
> following commands on the corrupted fs?
>
> $ btrfs-debug-tree -b 67239936
Command runs then segfaults:
leaf 67239936 items 92 free space 9138 generation
From: Mike Christie
To avoid confusion between REQ_OP_FLUSH, which is handled by
request_fn drivers, and upper layers requesting the block layer
perform a flush sequence along with possibly a WRITE, this patch
renames REQ_FLUSH to REQ_PREFLUSH.
Signed-off-by: Mike Christie
From: Mike Christie
The last patch added a REQ_OP_FLUSH for request_fn drivers
and the next patch renames REQ_FLUSH to REQ_PREFLUSH which
will be used by file systems and make_request_fn drivers.
This leaves REQ_FLUSH/REQ_FUA defined for drivers to tell
the block layer if
From: Mike Christie
We no longer use REQ_WRITE, REQ_WRITE_SAME and REQ_DISCARD,
so this patch removes them.
Signed-off-by: Mike Christie
---
include/linux/blk_types.h | 19 +--
include/linux/fs.h | 21 +++--
From: Mike Christie
This adds a REQ_OP_FLUSH operation that is sent to request_fn
based drivers by the block layer's flush code, instead of
sending requests with the request->cmd_flags REQ_FLUSH bit set.
For the following 3 flush related patches, I have not tested
every
From: Mike Christie
There is no need for bi_op/op and bi_rw to be so large
now, so this patch shrinks them.
Signed-off-by: Mike Christie
---
block/blk-core.c | 2 +-
drivers/md/dm-flakey.c | 2 +-
drivers/md/raid5.c | 13
Hi Mike,
[auto build test WARNING on next-20160105]
[cannot apply to dm/for-next v4.4-rc8 v4.4-rc7 v4.4-rc6 v4.4-rc8]
[if your patch is applied to the wrong git tree, please drop us a note to help
improving the system]
url:
https://github.com/0day-ci/linux/commits/mchristi-redhat-com
Hi Mike,
[auto build test ERROR on next-20160105]
[cannot apply to dm/for-next v4.4-rc8 v4.4-rc7 v4.4-rc6 v4.4-rc8]
[if your patch is applied to the wrong git tree, please drop us a note to help
improving the system]
url:
https://github.com/0day-ci/linux/commits/mchristi-redhat-com/separate
Hi Mike,
[auto build test ERROR on next-20160105]
[cannot apply to dm/for-next v4.4-rc8 v4.4-rc7 v4.4-rc6 v4.4-rc8]
[if your patch is applied to the wrong git tree, please drop us a note to help
improving the system]
url:
https://github.com/0day-ci/linux/commits/mchristi-redhat-com/separate
In the course of the few btrfs crashes I had on my USB backup drive
(NOT the drive from my other bug report, which is an internal SATA
drive) - in the last 6 months or so - I ended up having a 4 to 5 bad
checksums reported by scrub.
This drive is used to synchronize snapshots from my main
On 2016-01-05 07:25, Sylvain Joyeux wrote:
In the course of the few btrfs crashes I had on my USB backup drive
(NOT the drive from my other bug report, which is an internal SATA
drive) - in the last 6 months or so - I ended up having a 4 to 5 bad
checksums reported by scrub.
This drive is used
On Wed, Oct 07, 2015 at 07:40:46PM +0530, Chandan Rajendra wrote:
> On Wednesday 07 Oct 2015 11:25:03 David Sterba wrote:
> > On Mon, Oct 05, 2015 at 10:14:24PM +0530, Chandan Rajendra wrote:
> > > + if (unlikely(root->highest_objectid >= BTRFS_LAST_FREE_OBJECTID)) {
> > > +
> What does btrfs-show-super /dev/sda5 list as output for incompat_flags ?
incompat_flags 0x161
( MIXED_BACKREF |
BIG_METADATA |
EXTENDED_IREF |
SKINNY_METADATA )
> It would be much
Hi, David,
On 2016/01/05 23:12, David Sterba wrote:
On Wed, Dec 16, 2015 at 11:57:38AM +0900, Tsutomu Itoh wrote:
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 974be09..dcc1f15 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2709,7 +2709,7 @@ int open_ctree(struct
76 matches