Re: BTRFS and power loss ~= corruption?
AFAIK, ZFS copes with lying disks by rolling back to the latest mountable uberblock (i.e. the latest tree that was completely and successfully written to disk). Does btrfs do something similar today?

On Wed, Aug 24, 2011 at 7:06 PM, Mitch Harder mitch.har...@sabayonlinux.org wrote:
On Wed, Aug 24, 2011 at 10:13 AM, Berend Dekens bt...@cyberwizzard.nl wrote:
On 24/08/11 17:04, Arne Jansen wrote:
On 24.08.2011 17:01, Berend Dekens wrote:
On 24/08/11 15:31, Arne Jansen wrote:
On 24.08.2011 15:11, Berend Dekens wrote:

Hi, I have followed the progress made in the btrfs filesystem over time, and while I have experimented with it a little in a VM, I have not yet used it in a production machine. While the lack of a complete fsck was a major issue (I read the update that the first working version is about to be released), I am still worried about an issue I see popping up.

How is it possible that a copy-on-write filesystem becomes corrupted if a power failure occurs? I assume this means that even (hard) resetting a computer can result in a corrupt filesystem. I thought the idea of COW was that whatever happens, you can always mount in a semi-consistent state? As far as I can see, you wind up with one of these cases:

- No outstanding writes at power-down.
- File write complete, tree structure is updated. Since everything is hashed and duplicated, unless the update propagates to the highest level, the write will simply disappear upon failure. While this might be rectified with an fsck, there should be no problems mounting the filesystem (read-only if need be).
- Writes are not completed on all disks/partitions at the same time. The checksums will detect these errors and, once again, the write disappears unless it is salvaged by an fsck.

Am I missing something? How come there seem to be plenty of people with a corrupt btrfs after a power failure? And why haven't I experienced similar issues, where a filesystem becomes unmountable, with say NTFS or Ext3/4?
Problems arise when, in your scenario, writes from higher levels in the tree hit the disk earlier than updates to lower levels. In this case the tree is broken and the fs is unmountable. Of course btrfs takes care of the order in which it writes, but problems arise when the disk lies about whether a write is stable on disk, i.e. about cache flushes or barriers.

Ah, I see. So the issue is not with the software implementation at all, but only arises when hardware acknowledges flushes and barriers before they actually complete?

It doesn't mean there aren't any bugs left in the software stack ;)

Naturally, but the fact that it's very likely that the corruption stories I've been reading about are caused by misbehaving hardware sets my mind at ease about experimenting further with btrfs (although I will await the fsck before attempting things in production). Is this a common problem with hard disks?

Only with very cheap ones. USB enclosures might add to the problem, too. Also, some SSDs are rumored to be bad in this regard. Another problem is the layers between btrfs and the hardware, like encryption.

I am - and will be - using btrfs straight on hard disks; no lvm, (soft)raid, encryption or other layers. My hard drives are not that fancy (no 15k raptors here); I usually buy hardware from the major suppliers (WD, Maxtor, Seagate, Hitachi etc). Also, until the fast cache mode for SSDs in combination with rotating hardware becomes stable, I'll stick to ordinary hard drives. Thank you for clarifying things.

I have to admit I've been beginning to wonder if we picked up a regression somewhere along the way with respect to corruption after power outages. I'm lucky enough to have very unreliable power. Btrfs was always robust for me on power outages until recently. Now I've recently had two corrupted volumes on unclean shutdowns and power outages.
-- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Blog: BTRFS is effectively stable
For example: no device-yanking tests were done. No power-cord yanking tests were done. No device cables were yanked, shaken, or plugged/unplugged in rapid succession. No "dd over the raw device underneath the filesystem while doing file I/O" tests were done. No recovery tests were done.

Any real-life tests to show how close we are to becoming really stable? Ideally I'd like to know that we're, for example, "85% stable, failing N tests".
Re: kernel .32, btrfs-vol -b, why is metadata=data
In other words, btrfs-show could tell you that 19GB has been used, but df could say that 0 bytes are in use in the FS.

Thanks Chris for the clarification. So despite it saying 19G is used, I shouldn't be worried about running out of disk space, since these are just pre-allocated areas. Perhaps btrfs-show should show how much of those areas is really used, besides how much is just pre-allocated. The thing is, I keep monitoring that number to avoid ENOSPC, and now I know that number is not accurate. Thanks for the stellar work. Regards
kernel .32, btrfs-vol -b, why is metadata=data
Hi everyone, I'm running kernel 2.6.32-0.65.rc8.git5.fc13.x86_64, and I ran btrfs-vol -b; however, for 10G of data I still have 9G of metadata! How do I fix this?

[r...@matrix ~]# btrfs-vol -b /
ioctl returns 0
You have mail in /var/spool/mail/root
[r...@matrix ~]# btrfs-show
failed to read /dev/sr0
Label: none  uuid: 06b0d069-b1cb-48c4-b26f-c5088a2360d2
	Total devices 1 FS bytes used 10.43GB
	devid    1 size 25.72GB used 19.02GB path /dev/dm-1
Btrfs v0.19
Re: No space left, although 16G are there
The system is insisting I am out of disk space!

[r...@matrix tmp]# df -h /
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/vgkimo-lvF12   26G   10G   16G  39% /
[r...@matrix tmp]# dd if=/dev/zero of=bigfile bs=1M count=500
dd: writing `bigfile': No space left on device
61+0 records in
60+0 records out
62914560 bytes (63 MB) copied, 0.40297 s, 156 MB/s
You have mail in /var/spool/mail/root

On Thu, Nov 26, 2009 at 11:41 PM, Ahmed Kamal email.ahmedka...@googlemail.com wrote:

Hi folks, I am running a Fedora-12 system (2.6.31.5-127.fc12.x86_64) with a btrfs root fs. While running a yum upgrade with around 200MB of updates, the system became significantly slow (3 seconds for Chrome to scroll down after hitting the space bar!) and I noticed in /var/log/messages:

Nov 26 22:12:08 localhost kernel: no space left, need 61440, 18579456 delalloc bytes, 10387451904 bytes_used, 0 bytes_reserved, 0 bytes_pinned, 0 bytes_readonly, 0 may use 10406068224 total

Although df -h shows 16G of free space for the root fs. I thought I'd report this. Let me know if you want more diagnostics. Regards
Re: No space left, although 16G are there
More info:

[r...@matrix ~]# btrfs-show
failed to read /dev/sr0
Label: none  uuid: 06b0d069-b1cb-48c4-b26f-c5088a2360d2
	Total devices 1 FS bytes used 9.99GB
	devid    1 size 25.72GB used 25.72GB path /dev/dm-1
Btrfs v0.19

On Thu, Nov 26, 2009 at 11:47 PM, Ahmed Kamal email.ahmedka...@googlemail.com wrote:

The system is insisting I am out of disk space!

[r...@matrix tmp]# df -h /
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/vgkimo-lvF12   26G   10G   16G  39% /
[r...@matrix tmp]# dd if=/dev/zero of=bigfile bs=1M count=500
dd: writing `bigfile': No space left on device
61+0 records in
60+0 records out
62914560 bytes (63 MB) copied, 0.40297 s, 156 MB/s
You have mail in /var/spool/mail/root

On Thu, Nov 26, 2009 at 11:41 PM, Ahmed Kamal email.ahmedka...@googlemail.com wrote:

Hi folks, I am running a Fedora-12 system (2.6.31.5-127.fc12.x86_64) with a btrfs root fs. While running a yum upgrade with around 200MB of updates, the system became significantly slow (3 seconds for Chrome to scroll down after hitting the space bar!) and I noticed in /var/log/messages:

Nov 26 22:12:08 localhost kernel: no space left, need 61440, 18579456 delalloc bytes, 10387451904 bytes_used, 0 bytes_reserved, 0 bytes_pinned, 0 bytes_readonly, 0 may use 10406068224 total

Although df -h shows 16G of free space for the root fs. I thought I'd report this. Let me know if you want more diagnostics. Regards
Re: Btrfs development plans
But now Oracle can re-license Solaris and merge ZFS with btrfs. Just kidding; I don't think it would be technically feasible. May I suggest the name ZbtrFS :) Sorry, couldn't resist. On a more serious note though, are there any technical benefits that justify continuing to push money into btrfs?
Re: single disk reed solomon codes
An experiment applying RS codes to protect data, worth a look: http://ttsiodras.googlepages.com/rsbep.html He overwrites a series of 127 sectors and still manages to correctly recover his data. We all know disks give us unreadable sectors every now and then, so at least on workstations/laptops this could really be useful? The advantage over single-disk raid1 is storage efficiency (4.2MB becomes 5.2MB): that means we get about 80% of usable disk space, instead of 50% if I decide to raid1 everything.

On Mon, Jul 21, 2008 at 6:03 PM, Dongjun Shin [EMAIL PROTECTED] wrote:
On Mon, Jul 21, 2008 at 4:40 PM, Ahmed Kamal [EMAIL PROTECTED] wrote:

I definitely hope btrfs has this per-object copies property too. However, simply replicating the whole contents of a directory wastes too much disk space, as opposed to RS codes.

Although adding a redundancy mechanism will help increase the integrity of data, I'm not sure whether repeating the same kind of mechanism twice will help. (AFAIK, RS is common in HDDs and BCH is common in flash, due to their own physical characteristics.) I think it is better to have another redundancy mechanism (like RAID1) which is independent of the algorithm used by the underlying storage. -- Dongjun
Re: crash when mounting
Well, yeah, sure. But I was kind of hoping my playing/testing is going to help you guys fix it. So, does that traceback help you pinpoint the problem? If not, is there anything I can do to help with that? I believe this crash should be reproducible... haven't tested that, though. Regards

On Mon, Aug 4, 2008 at 3:47 PM, Chris Mason [EMAIL PROTECTED] wrote:
On Sun, 2008-08-03 at 02:12 +0300, Ahmed Kamal wrote:

Hi guys, I was playing on vmware with btrfs on complete disks /dev/sd{b,c,d,e}. Next I decided to use partitions, so I created /dev/sd{b,c,d,e}1 and used those, worked fine! Afterward, I mistakenly re-ran an old command on the full disk ( mount -t btrfs -o subvol=. /dev/sdb /mnt/ ), notice this is sdb, not sdb1, and I got this spectacular kernel freeze. Let me know if that's some bug.

It would be nice if we didn't oops; there is clearly some hardening to do in the failure paths for corrupt filesystems. -chris
crash when mounting
Hi guys, I was playing on vmware with btrfs on complete disks /dev/sd{b,c,d,e}. Next I decided to use partitions, so I created /dev/sd{b,c,d,e}1 and used those, worked fine! Afterward, I mistakenly re-ran an old command on the full disk ( mount -t btrfs -o subvol=. /dev/sdb /mnt/ ), notice this is sdb, not sdb1, and I got this spectacular kernel freeze. Let me know if that's some bug. Thanks

[EMAIL PROTECTED] tests]# mount -t btrfs -o subvol=. /dev/sdb /mnt/
Segmentation fault
kernel: ------------[ cut here ]------------
kernel: invalid opcode: [#1] SMP
kernel: Process mount (pid: 18986, ti=dc539000 task=ded4ae70 task.ti=dc539000)
kernel: Stack: e0dc9e73 01c7b000 1000 c1407134 c140a6bc
kernel:        0282 00011220 df436880 00011270 c846e118 c846e120 c0463c6b
kernel:        dc539c40 c0463ed0 00011270 dc539c68 dc539c78 e0dc23a6 01c7b000
kernel: Call Trace:
kernel: [mempool_alloc_slab+14/16] ? mempool_alloc_slab+0xe/0x10
kernel: [mempool_alloc+66/224] ? mempool_alloc+0x42/0xe0
kernel: [e0dc23a6] ? set_extent_bit+0xa3/0x337 [btrfs]
kernel: [bio_add_page+39/46] ? bio_add_page+0x27/0x2e
kernel: [e0dc5ded] ? btrfs_map_block+0x19/0x1b [btrfs]
kernel: [e0dc5e4c] ? btrfs_map_bio+0x5d/0x1b7 [btrfs]
kernel: [e0dc3ab3] ? end_bio_extent_readpage+0x0/0x339 [btrfs]
kernel: [e0dad33a] ? __btree_submit_bio_hook+0x42/0x4b [btrfs]
kernel: [e0dae650] ? btree_submit_bio_hook+0x15/0x3b [btrfs]
kernel: [e0dae63b] ? btree_submit_bio_hook+0x0/0x3b [btrfs]
kernel: [e0dc0fc2] ? submit_one_bio+0xdf/0x10c [btrfs]
kernel: [e0dc3854] ? read_extent_buffer_pages+0x276/0x3c6 [btrfs]
kernel: [e0dc16cd] ? add_lru+0x22/0x69 [btrfs]
kernel: [e0dacbf7] ? btree_read_extent_buffer_pages+0x3a/0x8e [btrfs]
kernel: [e0daf19b] ? btree_get_extent+0x0/0x1cd [btrfs]
kernel: [e0dad922] ? read_tree_block+0x3e/0x52 [btrfs]
kernel: [e0dae0a9] ? open_ctree+0x6d4/0x825 [btrfs]
kernel: [e0da027e] ? btrfs_get_sb_bdev+0x103/0x284 [btrfs]
kernel: [e0da07bb] ? btrfs_parse_options+0x261/0x26e [btrfs]
kernel: [e0da080c] ? btrfs_get_sb+0x44/0x60 [btrfs]
kernel: [vfs_kern_mount+130/245] ? vfs_kern_mount+0x82/0xf5
kernel: [do_kern_mount+50/186] ? do_kern_mount+0x32/0xba
kernel: [do_new_mount+66/108] ? do_new_mount+0x42/0x6c
kernel: [do_mount+420/450] ? do_mount+0x1a4/0x1c2
kernel: [__get_free_pages+72/79] ? __get_free_pages+0x48/0x4f
kernel: [copy_mount_options+42/249] ? copy_mount_options+0x2a/0xf9
kernel: [sys_mount+102/158] ? sys_mount+0x66/0x9e
kernel: [syscall_call+7/11] ? syscall_call+0x7/0xb
kernel: =======================
kernel: Code: 7d 1c 00 0f 95 45 93 84 c0 74 27 31 c0 80 7d 93 00 0f 85 0c 04 00 00 8b 55 10 ff 72 04 ff 32 57 56 68 73 9e dc e0 e8 38 91 86 df 0f 0b 83 c4 14 eb fe 8b 4d ac 8b 51 10 8b 41 0c 39 fa 77 23 72
kernel: EIP: [e0dc59a6]
Re: Fix: btrfsctl arguments handling
Is this not a valid patch/fix? Who do I have to bug to get this merged :)

On Fri, Jul 18, 2008 at 7:44 PM, Ahmed Kamal [EMAIL PROTECTED] wrote:

That's probably a more proper patch:

# HG changeset patch
# Signed-Off-By: Ahmed Kamal [EMAIL PROTECTED]
# Date 1216410189 -10800
# Node ID f35e2b3b25a97d42452ec90b6c524721d9c9941f
# Parent 1aa4b32e3efd452531cb0b883edfcc3761487fca
Fixing btrfsctl argument handling

diff -r 1aa4b32e3efd -r f35e2b3b25a9 btrfsctl.c
--- a/btrfsctl.c	Tue Jun 10 10:09:18 2008 -0400
+++ b/btrfsctl.c	Fri Jul 18 22:43:09 2008 +0300
@@ -73,7 +73,7 @@
 		btrfs_scan_one_dir("/dev", 1);
 		exit(0);
 	}
-	for (i = 1; i < ac - 1; i++) {
+	for (i = 1; i <= ac - 1; i++) {
 		if (strcmp(av[i], "-s") == 0) {
 			if (i + 1 >= ac - 1) {
 				fprintf(stderr, "-s requires an arg");

On Fri, Jul 18, 2008 at 7:37 PM, Ahmed Kamal [EMAIL PROTECTED] wrote:

Hi, btrfsctl -A in the current -unstable branch does not result in the error message designated for it, namely "-A requires an arg\n". Turns out the whole loop was being skipped! Please find a patch attached that fixed it for me.

diff -r 1aa4b32e3efd btrfsctl.c
--- a/btrfsctl.c	Tue Jun 10 10:09:18 2008 -0400
+++ b/btrfsctl.c	Fri Jul 18 22:34:46 2008 +0300
@@ -73,7 +73,7 @@
 		btrfs_scan_one_dir("/dev", 1);
 		exit(0);
 	}
-	for (i = 1; i < ac - 1; i++) {
+	for (i = 1; i <= ac - 1; i++) {
 		if (strcmp(av[i], "-s") == 0) {
 			if (i + 1 >= ac - 1) {
 				fprintf(stderr, "-s requires an arg");

PS: Is this the correct way to submit patches?
single disk reed solomon codes
Hi, since btrfs is someday going to be the default FS for Linux, and will be on so many single-disk PCs and laptops, I was thinking it would be a good idea to insert some redundancy in single-disk deployments. Of course it can't help with whole-disk failures, since it's obviously a single disk, but it can help with bit-rot and with hardware sector read errors. To get that we'd need to implement some kind of forward error correction, possibly a Reed-Solomon code. I am not sure why no filesystem seems to implement such a scheme, although I believe at the hardware level such schemes are being used (so the idea is applicable)? Not that I am an expert on such matters, but I thought I'd drop that suggestion here; maybe at least I'll know why no one else seems to do that. Regards
Re: single disk reed solomon codes
RS-based error correction for themselves. If we're unlucky in our choice of error correction, it might even be possible to end up in a situation where the only errors we'd _see_ are the ones which were uncorrectable.

But since at the FS level the redundancy would be in a different place than the hardware-level redundancy, it might still be correctable for you.
Re: QA suite plans
Thanks man, I got myself a wiki account and got btrfs up and running in a VM. Will start planning for the test suite.

On Thu, Jul 17, 2008 at 5:25 PM, Miguel Sousa Filipe [EMAIL PROTECTED] wrote:
Hi there,
On Wed, Jul 16, 2008 at 2:22 PM, Ahmed Kamal [EMAIL PROTECTED] wrote:

Ok, cool. If the torture suite is a bunch of scripts, that is probably something I can help with. I don't know what the contribution policy for this project is, but if contributors are welcome I would like to start a wiki page to gather test-suite ideas, and I'd start writing scripts to execute the tests and report back results.

A bit of intro about myself: I come from a system-engineering background. I was introduced to Linux around 10 years ago. I'm an RHCE, and currently work as a system engineer, integrating solutions and consulting on them. I have been involved with the Fedora community.

Action items:
- How do I get access to create a wiki page?
  Point your browser to http://btrfs.wiki.kernel.org and create an account there.
- Any instructions on checking out the code and building it?
  See http://btrfs.wiki.kernel.org/index.php/Btrfs_source_repositories. To build the code, use make; read the INSTALL file.
- I'm planning on using a VM for testing; any specific VMs recommended (VirtualBox?)
  Not that I know of.

Kind regards -- Miguel Sousa Filipe
btrfsctl -A not returning useful information
[EMAIL PROTECTED] progs-unstable]# btrfsctl -A /dev/sdb
ioctl returns 0
[EMAIL PROTECTED] progs-unstable]# btrfsctl -A /dev/sdc
ioctl returns 0

/dev/sdb has a btrfs, while /dev/sdc is blank. What's that output supposed to mean? Is it a bug?
QA suite plans
Hi Team, I have been following the btrfs project since Chris announced it last year. I am happy to see v1.0 is planned for Q4. This is awesome; we can finally get something like ZFS on Linux. The project's pace is nothing short of amazing. Thank you :)

I notice the plans contain a QA suite. I would like to ask if there are any written plans for this QA suite yet. Is the suite going to be kernel code, or is it basically going to be a set of scripts using btrfs userspace commands for regression testing? Thank you