Re: Btrfs read-only after btrfs-convert from Ext4 workaround
On Sun, Jul 12, 2015 at 7:23 PM, René Pfeiffer <l...@luchs.at> wrote:
> Hello! I also hit the "Btrfs read-only after btrfs-convert" bug when converting my Ext4 root filesystem with btrfs-convert. I used the btrfs tools v4.0 and Linux kernel 4.1.1. As a workaround I mounted the converted Btrfs read-only and copied it to a new Btrfs created by mkfs.btrfs (from the btrfs tools v4.0). The Debian 8 system booted without problems, and the bug hasn't been seen (so far). Output of uname, btrfs, and the dmesg log is attached. Let me know if you need anything else. The old Btrfs is still on another disk, and I can extract information from it.

If you can run 'btrfs check' on it (without repair) using btrfs-progs 4.0 and 3.19.1, and report the results of each, that would be really useful.

-- 
Chris Murphy
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH 2/2 v2] btrfs-progs: device delete to accept devid
This patch introduces a new option, devid, for the command:

  btrfs device delete <device_path|devid> [<device_path|devid>...] <mnt>

In a user-reported issue on a 3-disk RAID1, one disk failed with its superblock unreadable. With this patch the user has a choice to delete the device using its devid.

The other method we could use is to match the input device_path against the device paths known within the kernel. But that won't work in all cases, e.g. what if the user provided a mapper path when the path within the kernel is a non-mapper path?

This patch depends on the kernel patch below for the new feature to work; however, it will fall back to the old interface on kernels without that patch:

  Btrfs: device delete by devid

Signed-off-by: Anand Jain <anand.j...@oracle.com>
---
v1->v2: rebase on latest devel

 Documentation/btrfs-device.asciidoc |  2 +-
 cmds-device.c                       | 43 ++++++++++++++++++++++++++---------
 ioctl.h                             |  8 ++++++++
 3 files changed, 43 insertions(+), 10 deletions(-)

diff --git a/Documentation/btrfs-device.asciidoc b/Documentation/btrfs-device.asciidoc
index 2827598..61ede6e 100644
--- a/Documentation/btrfs-device.asciidoc
+++ b/Documentation/btrfs-device.asciidoc
@@ -74,7 +74,7 @@
 do not perform discard by default
 -f|--force
 force overwrite of existing filesystem on the given disk(s)
 
-*remove* <dev> [<dev>...] <path>::
+*remove* <dev>|<devid> [<dev>|<devid>...] <path>::
 Remove device(s) from a filesystem identified by <path>.
 
 *delete* <dev> [<dev>...] <path>::

diff --git a/cmds-device.c b/cmds-device.c
index 0e60500..4c9b19a 100644
--- a/cmds-device.c
+++ b/cmds-device.c
@@ -164,16 +164,34 @@ static int _cmd_rm_dev(int argc, char **argv, const char * const *usagestr)
 		struct btrfs_ioctl_vol_args arg;
 		int res;
 
-		if (!is_block_device(argv[i])) {
+		struct btrfs_ioctl_vol_args_v3 argv3 = {0};
+		int its_num = false;
+
+		if (is_numerical(argv[i])) {
+			argv3.devid = arg_strtou64(argv[i]);
+			its_num = true;
+		} else if (is_block_device(argv[i])) {
+			strncpy_null(argv3.name, argv[i]);
+		} else {
 			fprintf(stderr,
-				"ERROR: %s is not a block device\n", argv[i]);
+				"ERROR: %s is not a block device or devid\n", argv[i]);
 			ret++;
 			continue;
 		}
-		memset(&arg, 0, sizeof(arg));
-		strncpy_null(arg.name, argv[i]);
-		res = ioctl(fdmnt, BTRFS_IOC_RM_DEV, &arg);
+		res = ioctl(fdmnt, BTRFS_IOC_RM_DEV_V2, &argv3);
 		e = errno;
+		if (res && e == ENOTTY) {
+			if (its_num) {
+				fprintf(stderr,
+					"Error: Kernel does not support delete by devid\n");
+				ret = 1;
+				continue;
+			}
+			memset(&arg, 0, sizeof(arg));
+			strncpy_null(arg.name, argv[i]);
+			res = ioctl(fdmnt, BTRFS_IOC_RM_DEV, &arg);
+			e = errno;
+		}
 		if (res) {
 			const char *msg;
@@ -181,9 +199,16 @@ static int _cmd_rm_dev(int argc, char **argv, const char * const *usagestr)
 				msg = btrfs_err_str(res);
 			else
 				msg = strerror(e);
-			fprintf(stderr,
-				"ERROR: error removing the device '%s' - %s\n",
-				argv[i], msg);
+
+			if (its_num)
+				fprintf(stderr,
+					"ERROR: error removing the devid '%llu' - %s\n",
+					argv3.devid, msg);
+			else
+				fprintf(stderr,
+					"ERROR: error removing the device '%s' - %s\n",
+					argv[i], msg);
+
 			ret++;
 		}
 	}
@@ -193,7 +218,7 @@ static int _cmd_rm_dev(int argc, char **argv, const char * const *usagestr)
 }
 
 static const char * const cmd_rm_dev_usage[] = {
-	"btrfs device remove <device> [<device>...] <path>",
+	"btrfs device remove <device>|<devid> [<device>|<devid>...] <path>",
 	"Remove a device from a filesystem",
 	NULL
 };

diff --git a/ioctl.h b/ioctl.h
index dff015a..6870931 100644
--- a/ioctl.h
+++ b/ioctl.h
@@ -40,6 +40,12 @@ struct btrfs_ioctl_vol_args {
 	char name[BTRFS_PATH_NAME_MAX + 1];
 };
 
+struct btrfs_ioctl_vol_args_v3 {
+	__s64 fd;
+	char name[BTRFS_PATH_NAME_MAX + 1];
+	__u64 devid;
+};
+
 #define BTRFS_DEVICE_PATH_NAME_MAX 1024
 #define
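For illustration only (not part of the patch): with both the kernel and progs patches applied, removal by devid would look roughly like the sketch below. The mountpoint and devid are hypothetical examples, and the btrfs calls are guarded so the sketch is a no-op on a machine without such a filesystem.

```shell
#!/bin/sh
# Hypothetical usage of the new interface: the devid (as printed by
# 'btrfs filesystem show') stands in for the unreadable device path.
MNT=/mnt/raid1   # example mountpoint, not from the patch
DEVID=2          # example devid of the failed disk

if command -v btrfs >/dev/null && [ -d "$MNT" ]; then
    btrfs filesystem show "$MNT"          # look up the devid of the bad disk
    btrfs device remove "$DEVID" "$MNT"   # errors out on kernels without the patch
fi
echo "would remove devid $DEVID from $MNT"
```

On older kernels the ioctl returns ENOTTY and, per the patch, the tool only falls back to the old path-based interface when a path (not a devid) was given.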
Re: Can't remove missing device
On 07/11/2015 01:28 AM, None None wrote:
> I can't apply your patch on btrfs-progs v4.1 nor v4.0
> http://www.spinics.net/lists/linux-btrfs/msg43422.html
> git apply --check
> error: Documentation/btrfs-device.txt: No such file or directory
> error: patch failed: cmds-device.c:169
> error: cmds-device.c: patch does not apply

I have rebased it on latest now. Kindly find v2:
http://www.spinics.net/lists/linux-btrfs/msg43646.html

> git apply --check does not return any errors for the kernel patch with 4.1. Are these patches included in the new 4.2-rc1 kernel?

No.

> Also, isn't 'missing' for cases when a device is not available anymore? Why would I want to delete a device by ID?

It's for the similar situation, where you need to replace the device without reading the source device.

Thanks, Anand

> Anand Jain <anand.j...@oracle.com> wrote:
>> The patches sent before help to delete a device without reading the device to be deleted, so they should help here. Can you try:
>>   [PATCH V2 1/8] Btrfs: device delete by devid
>>   [PATCH 2/2] btrfs-progs: device delete to accept devid
>> Thanks, Anand
>>
>> On 07/10/2015 12:05 PM, None None wrote:
>>> One of my 3TB drives failed (not recognized anymore) recently, so I got two new 4TB drives. I mounted the fs with -o degraded, used 'btrfs dev add' to add the new drives, then did 'btrfs dev del missing'. Now delete missing always returns an error:
>>>   ERROR: error removing the device 'missing' - Input/output error
>>> According to dmesg, sda returns bad data, but the SMART values for it seem fine. How do I get the FS working again?
Debian/SID, kernel v4.1

# btrfs fi df /srv/
Data, RAID5: total=18.96TiB, used=18.52TiB
System, RAID1: total=32.00MiB, used=2.30MiB
Metadata, RAID1: total=24.06GiB, used=22.09GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

# btrfs fi show
Label: none  uuid: ----
	Total devices 11 FS bytes used 18.54TiB
	devid  1 size 2.73TiB used 2.56TiB path /dev/sdh
	devid  2 size 2.73TiB used 2.63TiB path /dev/sdg
	devid  3 size 2.73TiB used 2.64TiB path /dev/sdj
	devid  4 size 2.73TiB used 2.60TiB path /dev/sdk
	devid  5 size 2.73TiB used 2.63TiB path /dev/sdb
	devid  6 size 2.73TiB used 2.73TiB path /dev/sda
	devid  9 size 2.73TiB used 2.73TiB path /dev/sdd
	devid 10 size 2.73TiB used 2.73TiB path /dev/sdl
	devid 11 size 3.64TiB used 2.66GiB path /dev/sdc
	devid 12 size 3.64TiB used 2.66GiB path /dev/sde
	*** Some devices missing

btrfs-progs v4.0

# dmesg | tail -n 40
[ 9474.630480] BTRFS warning (device sda): csum failed ino 384 off 2927886336 csum 1204172668 expected csum 3738892907
[ 9474.630487] BTRFS warning (device sda): csum failed ino 384 off 2927919104 csum 729502971 expected csum 57406087
[ 9474.630493] BTRFS warning (device sda): csum failed ino 384 off 2927923200 csum 1688454633 expected csum 4263548653
[ 9474.630495] BTRFS warning (device sda): csum failed ino 384 off 2927927296 csum 3679588162 expected csum 4283532667
[ 9484.066796] BTRFS info (device sda): relocating block group 66338809643008 flags 129
[ 9505.492349] __readpage_endio_check: 6 callbacks suppressed
[ 9505.492356] BTRFS warning (device sda): csum failed ino 385 off 2927886336 csum 1204172668 expected csum 3738892907
[ 9505.492366] BTRFS warning (device sda): csum failed ino 385 off 2927890432 csum 645393967 expected csum 1519548271
[ 9505.492372] BTRFS warning (device sda): csum failed ino 385 off 2927894528 csum 3254966910 expected csum 2168664573
[ 9505.492377] BTRFS warning (device sda): csum failed ino 385 off 2927898624 csum 3464250141 expected csum 1621289634
[ 9505.492382] BTRFS warning (device sda): csum failed ino 385 off 2927902720 csum 2214000308 expected csum 2797028572
[ 9505.492387] BTRFS warning (device sda): csum failed ino 385 off 2927906816 csum 3719155761 expected csum 561200354
[ 9505.492392] BTRFS warning (device sda): csum failed ino 385 off 2927910912 csum 98768328 expected csum 1311354303
[ 9505.492397] BTRFS warning (device sda): csum failed ino 385 off 2927915008 csum 996429330 expected csum 1552366519
[ 9505.492402] BTRFS warning (device sda): csum failed ino 385 off 2927919104 csum 729502971 expected csum 57406087
[ 9505.492407] BTRFS warning (device sda): csum failed ino 385 off 2927923200 csum 1688454633 expected csum 4263548653
[ 9515.428150] BTRFS info (device sda): relocating block group 66338809643008 flags 129
[ 9534.605158] __readpage_endio_check: 7 callbacks suppressed
[ 9534.605165] BTRFS warning (device sda): csum failed ino 386 off 2927886336 csum 1204172668 expected csum 3738892907
[ 9534.605174] BTRFS warning (device sda): csum failed ino 386 off 2927890432 csum 645393967 expected csum 1519548271
[ 9534.605184] BTRFS warning (device sda): csum failed ino 386 off 2927894528 csum 3254966910 expected csum 2168664573
[ 9534.605192] BTRFS warning (device sda): csum
Re: btrfs full, but not full, can't rebalance
Just a final note -- I'm finally back in person with the CentOS 7 server, and so booted it to the latest kernel-ml from elrepo. It is a 4.1 kernel. But while still remote with the older 3.10 kernel, I also tried doing a 'mount -o remount,clear_cache /'. I can't swear it helped, but things did seem to get better afterwards... so it may be worth a try for anyone reading this thread with a similar problem later. (Unless someone can definitively say that clear_cache doesn't work with -o remount.)

Rich
Re: Defrag operations sometimes don't work.
On Sun, Jul 12, 2015 at 2:54 AM, Martin Steigerwald <mar...@lichtvoll.de> wrote:
> Note however: Even without sync BTRFS will defragment the file. It may just take a while till the new extents are written. So there is no need to call sync after btrfs fi defrag.

My purpose in calling sync in that transcript was to ensure each invocation of filefrag returned the correct results.

> Why are you trying to defragment anyway? What are you trying to achieve / solve?

There are several reasons. Some are technical and some are emotional.

Technical reason 1: I sometimes play computer games that are delivered through Steam. If you're not familiar with it, Steam is a sort of online storefront application for Linux and other platforms that--I believe--delivers game patches by overwriting selected ranges in each game's files. Lots of small overwrites are the main culprit behind small files with thousands or tens of thousands of fragments on btrfs. All I know for sure is that some game files have become heavily fragmented, and load times seem excessive, especially given that they're older games compared to my hardware.

Technical reason 2: Whenever a new Ubuntu release comes out, I make sure to download and burn the ISO before I upgrade my OS. Usually I download the ISO using bittorrent, which uses lots of random writes, creating heavily fragmented downloads that severely impact read speeds from my non-SSD hard disk. This increases my load quite a bit when I seed the download back to other people. I've solved this problem in the past by pausing the bittorrent client, copying the file to another location, deleting the original, moving the copy back into place with the same name, and resuming the client, but that's a pain. I'd rather defragment instead.

Emotional reason 1: Heavily fragmented files on rotating media just feel slovenly to me, like a person wearing dirty clothes or a bathroom that hasn't been cleaned in a while.

Emotional reason 2: I switched from Windows to Linux in the late 90s. At that time, RAM was expensive, so I often found myself in situations where my kernel was dropping pages from the filesystem cache to make room for application memory. Rereading those files later was very slow if they were fragmented, so I religiously defragmented my Windows 95 and 98 machines every week. When I switched to Linux, I was dismayed to learn that ext2 had no defragmentation tool, and I was very skeptical of claims that Linux doesn't need defragmentation, especially considering that most of the people making those claims probably had no idea how fragmented their own filesystems were. In the years that followed, I was continually disappointed as stable defragmentation tools did not appear for ext2 or any other popular Linux filesystem, so I'm relieved and excited not to be in that predicament anymore.

>> I'm now seeing that recursive defragging doesn't work the way I expect. Running
>>   $ btrfs fi defrag -r /path/to
>> returns almost immediately and does not reduce the number of fragments in /path/to/file. However, running
>>   $ btrfs fi defrag /path/to/file
>> does reduce the number of fragments.
>
> Well, I have no idea about this one. I have the same behavior with btrfs --version: Btrfs v3.17 and v4.0.

Thanks for checking,
Eric
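The sync-before-filefrag measurement discussed in this thread can be scripted roughly as below. The demo file is a throwaway so the before-count works on any filesystem; the defrag step is guarded and only does real work when the file actually lives on btrfs.

```shell
#!/bin/sh
# Count extents around a defrag, syncing first so delayed writes
# don't skew the filefrag numbers.
f="${TMPDIR:-/tmp}/defrag-demo.$$"
dd if=/dev/zero of="$f" bs=1M count=4 2>/dev/null
sync                                              # settle delayed allocation
command -v filefrag >/dev/null && filefrag "$f"   # fragment count before

if command -v btrfs >/dev/null; then
    btrfs filesystem defrag "$f" 2>/dev/null || true   # no-op off btrfs
    sync                                               # defragged extents are delayed too
    command -v filefrag >/dev/null && filefrag "$f"    # count after
fi
rm -f "$f"
```

filefrag comes from e2fsprogs; the extent count it reports shortly after defrag can still drop once the sync completes, which is why the sync sits between the two calls.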
Btrfs read-only after btrfs-convert from Ext4 workaround
Hello!

I also hit the "Btrfs read-only after btrfs-convert" bug when converting my Ext4 root filesystem with btrfs-convert. I used the btrfs tools v4.0 and Linux kernel 4.1.1. As a workaround I mounted the converted Btrfs read-only and copied it to a new Btrfs created by mkfs.btrfs (from the btrfs tools v4.0). The Debian 8 system booted without problems, and the bug hasn't been seen (so far).

Output of uname, btrfs, and the dmesg log is attached. Let me know if you need anything else. The old Btrfs is still on another disk, and I can extract information from it.

Best regards,
René.

-- 
)\._.,--,'``.  fL   Let GNU/Linux work for you while you take a nap.
/,   _.. \ _\  (`._ ,.  R. Pfeiffer <lynx at luchs.at> + http://web.luchs.at/
`._.-(,_..'--(,_..'`-.;.'  - System administration + Consulting + Teaching -
Got mail delivery problems?  http://web.luchs.at/information/blockedmail.php

Linux nephtys 4.1.1 #1 SMP Thu Jul 2 03:28:56 CEST 2015 x86_64 GNU/Linux

btrfs-progs v4.0
btrfs-progs v4.0

Data, single: total=450.00GiB, used=237.20GiB
System, single: total=32.00MiB, used=48.00KiB
Metadata, single: total=188.00GiB, used=96.78GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

[0.00] Initializing cgroup subsys cpuset
[0.00] Initializing cgroup subsys cpu
[0.00] Initializing cgroup subsys cpuacct
[0.00] Linux version 4.1.1 (lynx@nephtys) (gcc version 4.9.2 (Debian 4.9.2-10) ) #1 SMP Thu Jul 2 03:28:56 CEST 2015
[0.00] Command line: BOOT_IMAGE=/vmlinuz-4.1.1 root=UUID=703fc8b4-b2b9-470b-af2f-9aae9536c2fb ro nordrand rd.auto rd.auto=1
[0.00] e820: BIOS-provided physical RAM map:
[0.00] BIOS-e820: [mem 0x-0x0009cfff] usable
[0.00] BIOS-e820: [mem 0x0009d000-0x0009] reserved
[0.00] BIOS-e820: [mem 0x000e-0x000f] reserved
[0.00] BIOS-e820: [mem 0x0010-0xcfcdcfff] usable
[0.00] BIOS-e820: [mem 0xcfcdd000-0xdce3efff] reserved
[0.00] BIOS-e820: [mem 0xdce3f000-0xdcf7efff] ACPI NVS
[0.00] BIOS-e820: [mem 0xdcf7f000-0xdcffefff] ACPI data
[0.00] BIOS-e820: [mem 0xdcfff000-0xdf9f] reserved
[0.00] BIOS-e820: [mem 0xf800-0xfbff] reserved
[0.00] BIOS-e820: [mem 0xfec0-0xfec00fff] reserved
[0.00] BIOS-e820: [mem 0xfed08000-0xfed08fff] reserved
[0.00] BIOS-e820: [mem 0xfed1-0xfed19fff] reserved
[0.00] BIOS-e820: [mem 0xfed1c000-0xfed1] reserved
[0.00] BIOS-e820: [mem 0xfee0-0xfee00fff] reserved
[0.00] BIOS-e820: [mem 0xff00-0xff000fff] reserved
[0.00] BIOS-e820: [mem 0xffc0-0x] reserved
[0.00] BIOS-e820: [mem 0x0001-0x00031e5f] usable
[0.00] NX (Execute Disable) protection: active
[0.00] SMBIOS 2.7 present.
[0.00] DMI: LENOVO 20C6003AGE/20C6003AGE, BIOS J9ET91WW (2.11 ) 07/04/2014
[0.00] e820: update [mem 0x-0x0fff] usable ==> reserved
[0.00] e820: remove [mem 0x000a-0x000f] usable
[0.00] e820: last_pfn = 0x31e600 max_arch_pfn = 0x4
[0.00] MTRR default type: write-back
[0.00] MTRR fixed ranges enabled:
[0.00]   0-9 write-back
[0.00]   A-B uncachable
[0.00]   C-F write-protect
[0.00] MTRR variable ranges enabled:
[0.00]   0 base 00E000 mask 7FE000 uncachable
[0.00]   1 base 00DE00 mask 7FFE00 uncachable
[0.00]   2 base 00DD00 mask 7FFF00 uncachable
[0.00]   3 disabled
[0.00]   4 disabled
[0.00]   5 disabled
[0.00]   6 disabled
[0.00]   7 disabled
[0.00]   8 disabled
[0.00]   9 disabled
[0.00] PAT configuration [0-7]: WB WC UC- UC WB WC UC- UC
[0.00] e820: last_pfn = 0xcfcdd max_arch_pfn = 0x4
[0.00] Scanning 1 areas for low memory corruption
[0.00] Base memory trampoline at [88097000] 97000 size 24576
[0.00] Using GB pages for direct mapping
[0.00] init_memory_mapping: [mem 0x-0x000f]
[0.00]  [mem 0x-0x000f] page 4k
[0.00] BRK [0x2d7cd000, 0x2d7cdfff] PGTABLE
[0.00] BRK [0x2d7ce000, 0x2d7cefff] PGTABLE
[0.00] BRK [0x2d7cf000, 0x2d7c] PGTABLE
[0.00] init_memory_mapping: [mem 0x31e40-0x31e5f]
[0.00]  [mem 0x31e40-0x31e5f] page 2M
[0.00] BRK [0x2d7d, 0x2d7d0fff] PGTABLE
[0.00] init_memory_mapping: [mem 0x3-0x31e3f]
[0.00]  [mem 0x3-0x31e3f] page 2M
[0.00] init_memory_mapping: [mem 0x2e000-0x2]
[0.00]  [mem
Re: Anyone tried out btrbk yet?
On Sat, Jul 11, 2015 at 08:55:23PM -0700, Marc MERLIN wrote:
> On Fri, Jul 10, 2015 at 12:46:24PM +0200, Axel Burri wrote:
>> On 2015-07-09 14:26, Martin Steigerwald wrote:
>>> Well I may try it for one of my BTRFS volumes in addition to the rsync backup for now. I would like to give all options on the command line, but maybe it can completely replace my current script if I put everything in its configuration.
>>
>> One reason why btrbk exists is that I wanted to have all my backups configured in a config file. I use btrbk to back up several hosts to several backup locations (around 20 subvolumes), which made setting everything up with command-line options very cumbersome. For simpler setups you might be better off with command-line based tools (e.g. Marc's btrfs-subvolume-backup, which I like for its smallness).
>
> I just had another look at btrbk, and it's obviously a lot more featureful. It's almost 10 times bigger in lines of code than btrfs-subvolume-backup :)
>
> Anyway, to others: if you're happy using a tool that just does what you need, without having to dig into it and without caring how btrfs send/receive works, btrbk looks like the better choice. If you'd like something short-ish to look at to see how it works, and/or want something simple in shell you can modify quickly, btrfs-subvolume-backup might be better for you.

Actually, there is one thing my backup script doesn't do, and it looks like btrbk doesn't do it either. When I travel I'm often on crappy hotel wireless, and my incremental backups never succeed; they just get bigger every day, so they succeed even less the next time. At the same time, I have an issue with btrfs where my backup server causes a backup over ssh on a local network to take 12H or more when the data to transfer isn't that much. From what I can tell it's a problem where btrfs receive just pauses for one second or more, which kills the TCP flow. TCP restarts slowly and ramps up, just to be stopped again. As a result, it takes forever to finish.
Since I don't know if this btrfs stall problem will get looked into or fixed anytime soon, it would be great for the incremental backup to go to a local spool directory, and then for the backup script to just copy it with rsync until it makes it to the other side. Only then would btrfs receive be run with that file, and then the next incremental backup tried. But this is a fair amount of logic to get right, and I just haven't taken the time to write it. Would it be helpful to others, or am I the only one with this problem?

Thanks,
Marc
-- 
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/ | PGP 1024R/763BE901
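The spool-then-ship flow described above can be sketched in a few lines of shell. Everything here is hypothetical (snapshot paths, spool directory, remote host and paths), and the send/ship/receive steps are guarded so the sketch is a harmless no-op on a machine without the snapshots.

```shell
#!/bin/sh
# Sketch: spool an incremental btrfs send stream to a local file, ship it
# resumably with rsync, and only then replay it on the far side.
SRC=/mnt/btrfs_pool/home_snap_new      # newest read-only snapshot (hypothetical)
PARENT=/mnt/btrfs_pool/home_snap_prev  # snapshot the far side already has
SPOOL="${TMPDIR:-/tmp}/btrfs-send-spool"
REMOTE=backuphost                      # hypothetical backup server

mkdir -p "$SPOOL"

if [ -d "$SRC" ] && [ -d "$PARENT" ]; then
    # 1) Write the incremental stream to a file instead of piping it to ssh,
    #    so a dropped connection doesn't waste the whole transfer.
    btrfs send -p "$PARENT" "$SRC" > "$SPOOL/home.inc"
    # 2) rsync --partial keeps partial files, so a flaky hotel link can resume.
    rsync --partial "$SPOOL/home.inc" "$REMOTE:spool/"
    # 3) Replay only once the whole file has arrived intact.
    ssh "$REMOTE" 'btrfs receive -f spool/home.inc /mnt/backup'
fi
echo "spool dir: $SPOOL"
```

Because the stream is a plain file at both ends, the receive side never sees the slow network, which also sidesteps the receive-stall-kills-TCP problem described above.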
Re: Defrag operations sometimes don't work.
On Saturday, 11 July 2015 at 10:40:29, erp...@gmail.com wrote:
> On Sat, Jul 11, 2015 at 4:12 AM, Martin Steigerwald <mar...@lichtvoll.de> wrote:
>> Always do sync after a btrfs fi defrag and before measuring with filefrag. The kernel may not have written everything. I have seen repeatedly that the extent count drops further after a sync following btrfs fi defrag.
>
> First of all, thank you for your help. This fixed the problem for defragging individual files.

Note however: even without sync, BTRFS will defragment the file. It may just take a while till the new extents are written. So there is no need to call sync after btrfs fi defrag.

Why are you trying to defragment anyway? What are you trying to achieve / solve?

> I'm now seeing that recursive defragging doesn't work the way I expect. Running
>   $ btrfs fi defrag -r /path/to
> returns almost immediately and does not reduce the number of fragments in /path/to/file. However, running
>   $ btrfs fi defrag /path/to/file
> does reduce the number of fragments.

Well, I have no idea about this one. I have the same behavior with btrfs --version: Btrfs v3.17 and v4.0.

Ciao,
-- 
Martin
Wiki suggestions
Hi,

I hope it's not out of place, but I have a few suggestions for the Wiki:

- The presentation "NYLUG Presents: Chris Mason on Btrfs" (May 14th 2015) at https://www.youtube.com/watch?v=W3QRWUfBua8 would make a nice addition to the "Articles, presentations, podcasts" section.
- The same goes for "Why you should consider using btrfs ... like Google does." at https://www.youtube.com/watch?v=6DplcPrQjvA.
- coreutils 8.24 was released early this month: https://lists.gnu.org/archive/html/coreutils-announce/2015-07/msg0.html.
- While I'm at it: "hilights" should be "highlights" in the btrfs-progs 4.1.1 news entry.
- The Linux v4.1 news entry is still TBD ;-) .

Greetings
-- 
Marc Joliet
--
"People who think they know everything really annoy those of us who know we don't" - Bjarne Stroustrup
question about should_cow_block() and BTRFS_HEADER_FLAG_WRITTEN
Greetings,

Looking at the code of should_cow_block(), I see:

	if (btrfs_header_generation(buf) == trans->transid &&
	    !btrfs_header_flag(buf, BTRFS_HEADER_FLAG_WRITTEN) ...

So if the extent buffer has been written to disk, and is now changed again in the same transaction, we insist on COW'ing it. Can anybody explain why COW is needed in this case? The transaction has not committed yet, so what is the danger of rewriting to the same location on disk?

My understanding was that a tree block needs to be COW'ed at most once in the same transaction, but I see that this is not the case. I am asking because I am doing some profiling of btrfs metadata work under heavy loads, and I see that sometimes btrfs COWs almost twice as many tree blocks as the total metadata size.

Thanks,
Alex.