[PATCH v3 0/2] btrfs-progs: doc: update btrfs device remove

2017-10-15 Thread Misono, Tomohiro
This updates the help/doc of "btrfs device remove".

The first patch adds an explanation to the help message that 'delete' is an
alias of 'remove'.
The second patch adds a description of "remove missing", which is currently
only documented on the wiki page, and an example of device removal.

v1->v2:
 split the patch and update the messages
v2->v3:
 withdraw "remove missing-all" feature

Tomohiro Misono (2):
btrfs-progs: device: add description of alias to help message
btrfs-progs: doc: add description of missing and example of device remove

 Documentation/btrfs-device.asciidoc | 20 +++++++++++++++++++-
 cmds-device.c                       |  9 ++++++++-
 2 files changed, 27 insertions(+), 2 deletions(-)

-- 
2.9.5

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v3 2/2] btrfs-progs: doc: add description of missing and example of device remove

2017-10-15 Thread Misono, Tomohiro
This patch updates the help/documentation of "btrfs device remove" on two
points:

1. Add an explanation of 'missing' for 'device remove'. This is currently
only documented on the wiki page.
(https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices)

2. Add an example of device removal to the man document. This is because the
explanation of "remove" says "See the example section below", but there is
currently no example of removal.
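The 'missing' selection rule described in point 1 can be sketched as a small
routine (an illustrative model only; the names are hypothetical, not actual
btrfs-progs code):

```python
def pick_missing_device(metadata_devids, present_devids):
    """Return the first devid recorded in the filesystem metadata whose
    device was not present at mount time; this is the device that the
    special term 'missing' resolves to."""
    for devid in sorted(metadata_devids):
        if devid not in present_devids:
            return devid
    return None  # no device is missing

# A 4-device filesystem mounted degraded with devids 1 and 3 absent:
# 'missing' resolves to devid 1, the first absent one.
```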

Signed-off-by: Tomohiro Misono 
Reviewed-by: Satoru Takeuchi 
---
 Documentation/btrfs-device.asciidoc | 19 ++++++++++++++++++-
 cmds-device.c                       |  8 ++++++++
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/Documentation/btrfs-device.asciidoc 
b/Documentation/btrfs-device.asciidoc
index 88822ec..dd60415 100644
--- a/Documentation/btrfs-device.asciidoc
+++ b/Documentation/btrfs-device.asciidoc
@@ -68,13 +68,17 @@ Remove device(s) from a filesystem identified by <path>
 Device removal must satisfy the profile constraints, otherwise the command
 fails. The filesystem must be converted to profile(s) that would allow the
 removal. This can typically happen when going down from 2 devices to 1 and
-using the RAID1 profile. See the example section below.
+using the RAID1 profile. See the *TYPICAL USECASES* section below.
 +
 The operation can take long as it needs to move all data from the device.
 +
 It is possible to delete the device that was used to mount the filesystem. The
 device entry in mount table will be replaced by another device name with the
 lowest device id.
++
+If the device is mounted in degraded mode (-o degraded), the special term
+"missing" can be used for <device>. In that case, the first device that is
+described by the filesystem metadata but not present at mount time will be removed.
 
 *delete* <device>|<devid> [<device>|<devid>...] ::
 Alias of remove kept for backward compatibility
@@ -206,6 +210,19 @@ data or the block groups occupy the whole first device.
 The device size of '/dev/sdb' as seen by the filesystem remains unchanged, but
 the logical space from 50-100GiB will be unused.
 
+==== REMOVE DEVICE
+
+Device removal must satisfy the profile constraints, otherwise the command
+fails. For example:
+
+ $ btrfs device remove /dev/sda /mnt
+ ERROR: error removing device '/dev/sda': unable to go below two devices on raid1
+
+In order to remove a device, you need to convert the profile in this case:
+
+ $ btrfs balance start -mconvert=dup -dconvert=single /mnt
+ $ btrfs device remove /dev/sda /mnt
+
 DEVICE STATS
 
 
diff --git a/cmds-device.c b/cmds-device.c
index 3b6b985..d28ed0f 100644
--- a/cmds-device.c
+++ b/cmds-device.c
@@ -224,9 +224,16 @@ static int _cmd_device_remove(int argc, char **argv,
return !!ret;
 }
 
+#define COMMON_USAGE_REMOVE_DELETE \
+   "", \
+   "If 'missing' is specified for <device>, the first device that is", \
+   "described by the filesystem metadata, but not presented at the", \
+   "mount time will be removed."
+
 static const char * const cmd_device_remove_usage[] = {
	"btrfs device remove <device>|<devid> [<device>|<devid>...] <path>",
"Remove a device from a filesystem",
+   COMMON_USAGE_REMOVE_DELETE,
NULL
 };
 
@@ -238,6 +245,7 @@ static int cmd_device_remove(int argc, char **argv)
 static const char * const cmd_device_delete_usage[] = {
	"btrfs device delete <device>|<devid> [<device>|<devid>...] <path>",
"Remove a device from a filesystem (alias of \"btrfs device remove\")",
+   COMMON_USAGE_REMOVE_DELETE,
NULL
 };
 
-- 
2.9.5



[PATCH v3 1/2] btrfs-progs: device: add description of alias to help message

2017-10-15 Thread Misono, Tomohiro
State that 'delete' is an alias of 'remove', as the man page says.

Signed-off-by: Tomohiro Misono 
Reviewed-by: Satoru Takeuchi 
---
 cmds-device.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cmds-device.c b/cmds-device.c
index 4337eb2..3b6b985 100644
--- a/cmds-device.c
+++ b/cmds-device.c
@@ -237,7 +237,7 @@ static int cmd_device_remove(int argc, char **argv)
 
 static const char * const cmd_device_delete_usage[] = {
	"btrfs device delete <device>|<devid> [<device>|<devid>...] <path>",
-   "Remove a device from a filesystem",
+   "Remove a device from a filesystem (alias of \"btrfs device remove\")",
NULL
 };
 
-- 
2.9.5



Re: [PATCH v2 3/3] btrfs-progs: device: add remove missing-all

2017-10-15 Thread Misono, Tomohiro


On 2017/10/16 12:30, Anand Jain wrote:
> 
> 
> On 10/13/2017 01:27 PM, Duncan wrote:
>> Misono, Tomohiro posted on Wed, 11 Oct 2017 11:18:50 +0900 as excerpted:
>>
>>> Add 'btrfs remove missing-all' to remove all the missing devices at once
>>> for improving usability.
>>>
>>> Example:
>>>   sudo mkfs.btrfs -f -d raid1 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4
>>>   sudo wipefs -a /dev/sdb1 /dev/sdb3
>>>   sudo mount -o degraded /dev/sdb2 /mnt <-- 
> 
> 
>I agree with Duncan here. This step itself will fail even with RO
>option. Do you have any patch that is not in the ML which will
>make this step a success in the first place ?
> 
> Thanks, Anand
> 

commit 21634a19f646 ("btrfs: Introduce a function to check if all chunks a OK
for degraded rw mount") allows this since 4.14 (I checked on 4.14-rc4).
But I will withdraw this patch as Duncan suggests.

Thanks,
Tomohiro



Re: [PATCH v2 3/3] btrfs-progs: device: add remove missing-all

2017-10-15 Thread Misono, Tomohiro
On 2017/10/13 14:27, Duncan wrote:
> Misono, Tomohiro posted on Wed, 11 Oct 2017 11:18:50 +0900 as excerpted:
> 
>> Add 'btrfs remove missing-all' to remove all the missing devices at once
>> for improving usability.
>>
>> Example:
>>  sudo mkfs.btrfs -f -d raid1 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4
>>  sudo wipefs -a /dev/sdb1 /dev/sdb3
>>  sudo mount -o degraded /dev/sdb2 /mnt
>>  sudo btrfs filesystem show /mnt
>>  sudo btrfs device remove missing-all /mnt
>>  sudo btrfs filesystem show /mnt
> 
> 
> There's a reason remove missing-all hasn't yet been implemented.
> 
> Note that the above would be very unlikely to work once a filesystem has 
> been used in any significant way, because raid1 and raid10 are explicitly 
> chunk pairs, *NOT* duplicated N times across N devices.  So with two 
> devices missing, chances are that both copies of some chunks will be 
> missing as well, so the filesystem would no longer be mountable degraded-
> writable, only degraded-readonly, in which case device remove won't work 
> at all because the filesystem is readonly.
> 
> In fact, until the recent per-chunk check patches went in, it was 
> impossible to mount-writable a raid1 missing two devices at all, because 
> the safeguards simply assumed some chunks would be entirely missing.
> 
> The only case in which more than a single device missing is likely to be 
> mountable degraded-writable (so device remove will work at all) is raid6, 
> tho with recent patches there's narrow cases in which it /might/ be 
> doable with raid1 as well.
> 
> Now you may still wish to implement remove missing-all for raid6 mode and 
> for the unusual corner-case raid1/raid10 in which it might work, but the 
> documentation should be pretty clear that save for raid6 it can't be 
> expected to work in most cases.
> 
> Given that, I think remove missing-all hasn't been implemented as it 
> simply hasn't been considered to be worth the bother for the narrow use-
> cases in which it will actually work.
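Duncan's chunk-pair argument can be sketched numerically (a simplified model,
assuming each raid1 chunk's two copies land on an independent, uniformly
chosen pair of devices, which real allocation does not strictly follow):

```python
import math

def p_some_chunk_fully_missing(n_devices, n_missing, n_chunks):
    """Probability that at least one of n_chunks loses BOTH raid1
    copies when n_missing devices disappear, under the simplifying
    assumption of uniform independent pair placement."""
    pairs = math.comb(n_devices, 2)
    bad_pairs = math.comb(n_missing, 2)  # pairs entirely inside the missing set
    p_chunk_survives = 1 - bad_pairs / pairs
    return 1 - p_chunk_survives ** n_chunks

# 4 devices with 2 missing: each chunk has a 1-in-6 chance that both
# copies are gone, so with 100 chunks some chunk is almost surely lost,
# which is why a degraded-writable mount usually fails in this case.
```

With a single missing device the probability is exactly zero, matching the
observation that removing one missing device is the well-supported case.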

Thanks for the comments.

I thought this was useful, but I agree that it covers a rare case and might be
confusing. So I will drop the 3rd patch and just resend the 1st/2nd again.

Regards,
Tomohiro



Re: [PATCH v2] Btrfs: avoid losing data raid profile when deleting a device

2017-10-15 Thread Anand Jain



On 10/14/2017 04:51 AM, Liu Bo wrote:

On Wed, Oct 11, 2017 at 10:38:50AM +0300, Nikolay Borisov wrote:



On 10.10.2017 20:53, Liu Bo wrote:

We've avoided losing the data raid profile when doing balance, but it
turns out that deleting a device could also result in the same
problem.

This fixes the problem by creating an empty data chunk before
relocating the data chunk.


Why is this needed - copy the metadata of the to-be-relocated chunk into
the newly created empty chunk? I don't entirely understand that code but
doesn't this seem a bit like a hack in order to stash some information?
Perhaps you could elaborate the logic a bit more in the changelog?



Metadata/system chunks are supposed to have non-zero used bytes all the time,
so their raid profile is persistent.


I think this changelog is a bit scarce on detail as to the culprit of
the problem. Could you perhaps add a sentence or two about the underlying
logic which deletes the raid profile if a chunk is empty?



Fair enough.

The problem is the same as what commit 2c9fe8355258 ("btrfs: Fix
lost-data-profile caused by balance bg") fixed.

Similar to doing balance, deleting a device can also move all chunks
on this disk to other available disks; after the 'move' succeeds,
it'll remove those chunks.

If our last data chunk is empty and part of it happens to be on this
disk, then there is no data chunk in this btrfs after deleting the
device successfully. Any following write will try to create a new data
chunk, which ends up as a single-profile chunk because the only
available data raid profile is 'single'.
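The failure mode described above can be modeled with a toy sketch (purely
illustrative Python, not kernel code; all names are invented):

```python
def delete_device(chunks):
    """Device delete relocates chunks off the removed disk and drops
    the ones with no used bytes, mirroring the behavior described above."""
    return [c for c in chunks if c["used"] > 0]

def next_data_profile(chunks):
    """A new data chunk inherits the profile of an existing data chunk;
    with no data chunk left, allocation falls back to 'single'."""
    data = [c for c in chunks if c["type"] == "data"]
    return data[0]["profile"] if data else "single"

# The last data chunk is empty, so it vanishes during device delete and
# the next data allocation silently comes back as 'single', not 'raid1'.
fs = [{"type": "data", "profile": "raid1", "used": 0},
      {"type": "metadata", "profile": "raid1", "used": 4096}]
fs = delete_device(fs)
print(next_data_profile(fs))  # prints "single"
```

Creating an empty data chunk before relocation, as the patch does, keeps a
raid1 data chunk around so the profile survives the device delete.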


 So you are referring to a raid1 group profile which contains 3 or more
 devices otherwise single group file is what it will fit ? Is there
 reproducer ?

Thanks, Anand



thanks,
-liubo



Reported-by: James Alandt 
Signed-off-by: Liu Bo 
---

v2: - return the correct error.
 - move helper ahead of __btrfs_balance().

  fs/btrfs/volumes.c | 84 ++
  1 file changed, 65 insertions(+), 19 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 4a72c45..a74396d 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -3018,6 +3018,48 @@ static int btrfs_relocate_sys_chunks(struct btrfs_fs_info *fs_info)
return ret;
  }
  
+/*
+ * return 1 : allocate a data chunk successfully,
+ * return <0: errors during allocating a data chunk,
+ * return 0 : no need to allocate a data chunk.
+ */
+static int btrfs_may_alloc_data_chunk(struct btrfs_fs_info *fs_info,
+ u64 chunk_offset)
+{
+   struct btrfs_block_group_cache *cache;
+   u64 bytes_used;
+   u64 chunk_type;
+
+   cache = btrfs_lookup_block_group(fs_info, chunk_offset);
+   ASSERT(cache);
+   chunk_type = cache->flags;
+   btrfs_put_block_group(cache);
+
+   if (chunk_type & BTRFS_BLOCK_GROUP_DATA) {
+   spin_lock(&fs_info->data_sinfo->lock);
+   bytes_used = fs_info->data_sinfo->bytes_used;
+   spin_unlock(&fs_info->data_sinfo->lock);
+
+   if (!bytes_used) {
+   struct btrfs_trans_handle *trans;
+   int ret;
+
+   trans = btrfs_join_transaction(fs_info->tree_root);
+   if (IS_ERR(trans))
+   return PTR_ERR(trans);
+
+   ret = btrfs_force_chunk_alloc(trans, fs_info,
+ BTRFS_BLOCK_GROUP_DATA);
+   btrfs_end_transaction(trans);
+   if (ret < 0)
+   return ret;
+
+   return 1;
+   }
+   }
+   return 0;
+}
+
  static int insert_balance_item(struct btrfs_fs_info *fs_info,
   struct btrfs_balance_control *bctl)
  {
@@ -3476,7 +3518,6 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info)
u32 count_meta = 0;
u32 count_sys = 0;
int chunk_reserved = 0;
-   u64 bytes_used = 0;
  
  	/* step one make some room on all the devices */

	devices = &fs_info->fs_devices->devices;
@@ -3635,28 +3676,21 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info)
goto loop;
}
  
-		ASSERT(fs_info->data_sinfo);

-   spin_lock(&fs_info->data_sinfo->lock);
-   bytes_used = fs_info->data_sinfo->bytes_used;
-   spin_unlock(&fs_info->data_sinfo->lock);
-
-   if ((chunk_type & BTRFS_BLOCK_GROUP_DATA) &&
-   !chunk_reserved && !bytes_used) {
-   trans = btrfs_start_transaction(chunk_root, 0);
-   if (IS_ERR(trans)) {
-   mutex_unlock(&fs_info->delete_unused_bgs_mutex);
-   ret = PTR_ERR(trans);
-   goto error;
-   }
-


Re: Why isnt NOCOW attributes propogated on snapshot transfers?

2017-10-15 Thread Qu Wenruo



On 2017年10月15日 09:19, Cerem Cem ASLAN wrote:

`btrfs send | btrfs receive` removes NOCOW attributes. Is it a bug or
a feature? If it's a feature, how can we keep these attributes if we
need to?


It seems that current send doesn't have support for extra inode flags.

Send can only send out c/m/atime, uid/gid and xattrs.

Since NODATACOW is stored in the inode item flags, not in an xattr, it's not
supported yet.


Thanks,
Qu



Re: [PATCH 3/4] Btrfs: handle unaligned tail of data ranges more efficient

2017-10-15 Thread Timofey Titovets
Maybe then just add a comment to at least one of those functions?
Like:
/*
 * Handle the unaligned end; end is inclusive, so it is almost always unaligned
 */

Or something like:
/*
 * The kernel uses paging, so a range almost always has an aligned
 * start like 0 and an unaligned end like 8192 - 1
 */

Or do we assume that everybody who looks at kernel code must understand
such basic things?

Thanks


2017-10-10 19:37 GMT+03:00 David Sterba :
> On Tue, Oct 03, 2017 at 06:06:03PM +0300, Timofey Titovets wrote:
>> Right now, while switching page bits in data ranges, we always handle
>> +1 page, to cover the case where the end of the data range is not
>> page aligned
>
> The 'end' is inclusive and thus not aligned in most cases, ie. it's
> offset 4095 in the page, so the IS_ALIGNED is always true and the code
> is equivalent to the existing condition (index <= end_index).
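David's point (the inclusive end sits at offset 4095 of its page, so a plain
index <= end_index walk already covers a partial last page) can be checked
with a quick sketch (illustrative Python; it mirrors the kernel logic only
loosely):

```python
PAGE_SIZE = 4096

def is_aligned(x, a):
    return x % a == 0

def pages_to_handle(start, end):
    """end is inclusive, so the last page index is simply end // PAGE_SIZE;
    no '+1 page' special case is needed for an unaligned tail."""
    return list(range(start // PAGE_SIZE, end // PAGE_SIZE + 1))

# A range [0, 8191] (end inclusive) ends at offset 4095 of its page:
# the end is never page aligned, yet the index <= end_index walk still
# visits both pages.
```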



-- 
Have a nice day,
Timofey.


Re: Is it safe to use btrfs on top of different types of devices?

2017-10-15 Thread Duncan
Zoltán Ivánfi posted on Sun, 15 Oct 2017 10:30:52 +0200 as excerpted:

> You assumed correctly that what I really wanted to ask about was btrfs
> on SATA+USB, thanks for answering those questions as well. Based on your
> replies I feel assured that btrfs should not be affected by this
> particular issue due to operating on the filesystem level and not on the
> block device level; but USB connectivity issues can still lead to
> problems.
> 
> Do these USB connectivity issues lead to data corruption? Naturally for
> raid0 they will, but for raid1 I suppose they shouldn't as one copy of
> the data remains intact.

FWIW, the problems I've /personally/ had with raid1 here, on both mdraid 
and btrfs raid, 100% SATA connections on both, with older SATA-1 spinning 
rust for the mdraid, and newer SSDs for btrfs raid, have to do with 
suspend to RAM, or hibernate (suspend to disk).  I finally gave up on 
it.  The problem is that in resume, one device is inevitably slower than 
the others to come back up, and will often get kicked from the array.

On original bootup there's a mechanism that waits for all devices, but 
apparently it's not activated on resume from suspend-to-RAM.  With mdraid 
(longer ago, with a machine that would hibernate but not suspend to ram) 
the device can be readded and will resync.  Btrfs (as of a couple years 
ago anyway, with a machine that would suspend to RAM but I've not 
actually tried hibernate as I've not configured a swap partition to 
suspend to) remains a bit behind in that area, however, and the slow 
device remains unsynced, eventually forcing a full reboot, where it comes 
up with the raid, but still must be manually synced via a btrfs scrub.  
Since I end up having to reboot and do a manual sync anyway, it's simply 
not worth doing suspend-to-ram in the first place, and I've started just 
shutting down or leaving the machine running.  That seems to work much 
better for both cases.

I've not personally tried raid1 (of either btrfs or mdraid) on USB, so I 
have no personal experience there, but as I said, we do get more reports 
of problems with USB-connected btrfs raid, than with SATA.  Most of the 
problems are fixable, and the reports have lessened as btrfs has matured, 
but I'd not recommend it or consider it worth the hassle.

What I'd recommend instead, if USB connectivity is all that's available 
(as with many appliance-type machines, my router, for instance, tho I'm 
not actually using the feature there), is larger capacity, then use btrfs 
in dup mode so it gets to use btrfs checksumming not just for error 
detection, but correction as well (a big advantage of both raid1 and dup 
mode), and do actual backups to other devices.  (Btrfs send/receive can 
be used for the backups, tho here I just alternate backups and use a 
simpler mkfs and midnight-commander copying with btrfs lzo compression.)

I tend to heavily partition and use smaller, independent btrfs anyway, 
over the huge multi-TB single btrfs that other people seem to favor, so a 
4 TB single device in dup mode for 2 TB capacity is larger by an order of 
magnitude than any of my filesystems (I'd certainly partition up anything 
that big, even for dup mode), tho I can imagine a 4 TB device in dup mode 
for 2 TB capacity would cramp the style of some users.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: Is it safe to use btrfs on top of different types of devices?

2017-10-15 Thread Zoltán Ivánfi
Hi,

Thanks for the replies.

As you both pointed out, I shouldn't have described the issue as having
to do with hotplugging. I got confused by this use-case being somewhat
emphasized in the description of the bug I linked to. As for the
question of why I think that I got bitten by that bug in particular:
It matches my experiences as I can recall them (used RAID on SATA+USB,
got bio too big device error messages, data got corrupted).

You assumed correctly that what I really wanted to ask about was btrfs
on SATA+USB, thanks for answering those questions as well. Based on
your replies I feel assured that btrfs should not be affected by this
particular issue due to operating on the filesystem level and not on
the block device level; but USB connectivity issues can still lead to
problems.

Do these USB connectivity issues lead to data corruption? Naturally
for raid0 they will, but for raid1 I suppose they shouldn't as one
copy of the data remains intact.

Thanks,

Zoltan

On Sat, Oct 14, 2017 at 9:00 PM, Zoltán Ivánfi  wrote:
> Dear Btrfs Experts,
>
> A few years ago I tried to use a RAID1 mdadm array of a SATA and a USB
> disk, which led to strange error messages and data corruption. I did
> some searching back then and found out that using hot-pluggable
> devices with mdadm is a paved road to data corruption. Reading through
> that old bug again I see that it was autoclosed due to old age but
> still hasn't been addressed:
> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/320638
>
> I would like to ask whether btrfs may also be prone to data corruption
> issues in this scenario (due to the same underlying issue as the one
> described in the bug above for mdadm), or is btrfs unaffected by the
> underlying issue and is safe to use with a mix of regular and
> hot-pluggable devices as well?
>
> Thanks,
>
> Zoltan