Re: btrfs: open_ctree failed after power loss
Hi!

I've just updated the kernel to 2.6.38. Now there are no troubles with
mounting after btrfsck and btrfs-select-super. Thank you, guys!

Sincerely,
Viacheslav Dobromyslov

On Sat, Feb 5, 2011 at 4:53 PM, Viacheslav Dobromyslov
<slavik.do...@gmail.com> wrote:
> Hello, friends!
>
> I have some trouble with a removable disk after powering it off without
> unmounting. 'btrfsck' and 'btrfs-select-super' didn't help.
>
> # uname -srvo
> Linux 2.6.37-gentoo #2 SMP PREEMPT Tue Jan 11 20:58:56 VLAT 2011 GNU/Linux
>
> Installed sys-fs/btrfs-progs-0.19-r2.
>
> # btrfs fi show
> failed to read /dev/sr0
> Label: none  uuid: acf790ef-13ba-4c08-b0d3-3ab9938b5b94
>   Total devices 1 FS bytes used 9.97GB
>   devid 1 size 14.65GB used 14.65GB path /dev/sda4
> Label: none  uuid: 53b9ab33-8049-46e0-a90f-b601446abc79
>   Total devices 1 FS bytes used 20.35GB
>   devid 1 size 150.00GB used 23.29GB path /dev/sdc2
>
> # mount -t btrfs /dev/sdc2 /mnt/
> mount: wrong fs type, bad option, bad superblock on /dev/sdc2,
>        missing codepage or helper program, or other error
>        In some cases useful info is found in syslog - try
>        dmesg | tail or so
>
> # dmesg
> device fsid e046498033abb953-79bc6a4401b60fa9 devid 1 transid 433 /dev/sdc2
> btrfs: open_ctree failed
>
> # btrfsck /dev/sdc2
> found 21851738112 bytes used err is 0
> total csum bytes: 21297140
> total tree bytes: 43466752
> total fs tree bytes: 17068032
> btree space waste bytes: 7526756
> file data blocks allocated: 21808271360
>  referenced 21808271360
> Btrfs Btrfs v0.19
>
> # btrfs-select-super -s 1 /dev/sdc2
> using SB copy 1, bytenr 67108864
>
> # btrfs-debug-tree /dev/sdc2
> root tree
> leaf 23651717120 items 12 free space 1987 generation 433 owner 1
> fs uuid 53b9ab33-8049-46e0-a90f-b601446abc79
> chunk uuid 461537d2-d586-4f0c-9469-09d8933bce0a
>   item 0 key (EXTENT_TREE ROOT_ITEM 0) itemoff 3756 itemsize 239
>     root data bytenr 23651721216 level 2 dirid 0 refs 1
> [...]
>   item 30 key (12298 EXTENT_DATA 0) itemoff 1548 itemsize 53
>     extent data disk byte 22261157888 nr 13418496
>     extent data offset 0 nr 13418496 ram 13418496
>     extent compression 0
> data reloc tree
> key (DATA_RELOC_TREE ROOT_ITEM 0)
> leaf 29380608 items 2 free space 3773 generation 5 owner 18446744073709551607
> fs uuid 53b9ab33-8049-46e0-a90f-b601446abc79
> chunk uuid 461537d2-d586-4f0c-9469-09d8933bce0a
>   item 0 key (FIRST_CHUNK_TREE INODE_ITEM 0) itemoff 3835 itemsize 160
>     inode generation 4 size 0 block group 0 mode 40555 links 1
>   item 1 key (FIRST_CHUNK_TREE INODE_REF 256) itemoff 3823 itemsize 12
>     inode ref index 0 namelen 2 name: ..
> total bytes 161067429888
> bytes used 21851738112
> uuid 53b9ab33-8049-46e0-a90f-b601446abc79
> Btrfs Btrfs v0.19
>
> I've already found a similar report,
> http://www.spinics.net/lists/linux-btrfs/msg07572.html, but that
> solution didn't help.
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Calculating/estimating the progress of an ongoing balance
Hi all,

During a btrfs filesystem balance there are lines like the following one
in dmesg:

  btrfs: relocating block group 2122280075264 flags 9

Since the big number is strictly decreasing, I wonder whether this can be
used as some progress counter? For example, do I get something like a
percentage of what has been done/balanced by the following formula:
(h - l) / h, where h is the highest (i.e. first) value and l is the last
(i.e. smallest) value?

Just for interest: there are two types of these lines appearing, one with
'flags 9' and one with 'flags 20'. Do the first ones refer to block groups
holding metadata and the second ones to block groups holding data, or is
it totally different (and more complicated)?

Thanks,
Andreas Philipp
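The (h - l) / h arithmetic is easy to script. A minimal Python sketch (the dmesg excerpt is made up for illustration; note the numbers are logical byte offsets, so this measures progress through the address space, not through the actual amount of data moved):

```python
import re

# Hypothetical dmesg excerpt in the format quoted above.
dmesg = """\
btrfs: relocating block group 2122280075264 flags 9
btrfs: relocating block group 2121206333440 flags 20
btrfs: relocating block group 2120132591616 flags 9
"""

groups = [int(m.group(1))
          for m in re.finditer(r"relocating block group (\d+)", dmesg)]
h, l = groups[0], groups[-1]   # first (highest) and latest (lowest) offset
progress = (h - l) / h         # the (h - l) / h estimate from the question
print(f"roughly {progress:.2%} done")
```

In practice you would feed this the live output of dmesg and rerun it periodically; the estimate only makes sense once at least two "relocating" lines have appeared.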
Re: Calculating/estimating the progress of an ongoing balance
On Sat, Mar 19, 2011 at 12:02:03PM +0100, Andreas Philipp wrote:
> During a btrfs filesystem balance there are lines like the following
> one in dmesg:
>
>   btrfs: relocating block group 2122280075264 flags 9
>
> Since the big number is strictly decreasing, I wonder whether this can
> be used as some progress counter? For example, do I get something like
> a percentage of what has been done/balanced by the following formula:
> (h-l)/h, where h is the highest, i.e. first, value and l is the last,
> i.e. smallest, value?

The patches have been posted to the list, although I'm not sure what
their current status is [1]. If you feel like it, apply and test them;
any feedback is more than welcome.

[1] http://thread.gmane.org/gmane.comp.file-systems.btrfs/7487/

> Just for interest: There are two types of these lines appearing. One
> type with 'flags 9' and one with 'flags 20'. Do the first ones refer
> to block groups holding metadata and the second ones to block groups
> holding data, or is it totally different (and more complicated)?

Yes, you are right. 9 designates a raid0 data block group and 20 means
a raid1 metadata group.

Thanks,
Ilya
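The flags field is a bitmask, so both observed values decode mechanically. The bit values below match the BTRFS_BLOCK_GROUP_* constants in the kernel headers of this era; a small Python sketch:

```python
# Block-group flag bits, as in the kernel's BTRFS_BLOCK_GROUP_* constants.
FLAGS = {
    0x1:  "DATA",
    0x2:  "SYSTEM",
    0x4:  "METADATA",
    0x8:  "RAID0",
    0x10: "RAID1",
    0x20: "DUP",
    0x40: "RAID10",
}

def decode(flags):
    """Return the '|'-joined names of all bits set in flags."""
    return "|".join(name for bit, name in sorted(FLAGS.items())
                    if flags & bit)

print(decode(9))    # -> DATA|RAID0     (0x1 | 0x8)
print(decode(20))   # -> METADATA|RAID1 (0x4 | 0x10)
```

So the two line types seen during the balance are raid0 data block groups (9) and raid1 metadata block groups (20), exactly as stated above.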
btrfs fi df units
I notice that when I issue a btrfs fi df, the result is reported in units
of GB (at least for a large filesystem; maybe it's smaller for smaller
filesystems). Is there any way to force the units? I'd like to see KB
granularity if possible.

Cheers,
b.
Re: btrfs fi df units
On Sat, Mar 19, 2011 at 01:17:21PM -0400, Brian J. Murrell wrote:
> I notice when I issue a btrfs fi df the result is in units of GB (for a
> large filesystem -- maybe it's smaller for smaller filesystems). Is
> there any way to force the units? I'd like to see the granularity of
> KBs if possible.

There's a patch set out there [1] that fixes the binary/decimal unit
inconsistencies in the output of the btrfs tools. The patch also allows
you to display plain bytes.

Hugo.

[1] http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg06517.html

--
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- Anyone who claims their cryptographic protocol is secure is
  either a genius or a fool. Given the genius/fool ratio for our
  species, the odds aren't good. ---
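Until such a patch is merged, post-processing the numbers by hand is straightforward. A small sketch (the helper name is made up, the sample value is an arbitrary byte count, and it assumes binary multiples, which is what btrfs-progs of this era prints):

```python
def fmt_bytes(n, unit="KB"):
    """Format a byte count in one fixed unit, using binary multiples
    (1 KB = 1024 bytes here, matching btrfs-progs output)."""
    scale = {"B": 1, "KB": 1 << 10, "MB": 1 << 20, "GB": 1 << 30}[unit]
    return f"{n / scale:.2f}{unit}"

print(fmt_bytes(21851738112, "GB"))  # -> 20.35GB
print(fmt_bytes(21851738112, "KB"))  # -> 21339588.00KB
```

The same idea works for any byte count scraped from btrfsck or the superblock; pick the unit once and every figure becomes directly comparable.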
btrfs fi df gives only the total size that is currently allocated
Hi,

I made a test RAID-10 array from several old disks of various sizes and
copied some files (~800MB) onto it. When using btrfs fi df /mountpoint I
get:

  Data: total=1.00GB, used=800.00MB

When I copy another ~800MB, I get a total size of 2GB. This goes on and
on until I hit the max size of the RAID, e.g.:

  Data: total=5.00GB, used=4.97GB

Is there a way to see what the max size will be without having to fill
the RAID first?

Thanks,
Gal

P.S. df -h doesn't help either because of the different disk sizes.
Re: btrfs fi df gives only the total size that is currently allocated
On Sat, 2011-03-19 at 21:16 +0200, Gal Buki wrote:
> I made a test RAID 10 with several old disks of various sizes.
> [...]
> Is there a way to see what the max size will be without having to
> fill the RAID first?

As I understand it, the simple answer is, unfortunately, "no". Because
metadata and data chunks are allocated on demand depending on how you use
the space, the best you could do is make a guess based on the current
allocation ratios. That is pretty hard to do manually, particularly with
differently-sized disks, so some sort of estimation tool could be useful.
(But it would be just that: an estimate, not an exact count.)

This will get worse once btrfs supports data with different raid levels
on the same filesystem, because you'll have different amounts of
"available" space depending on which raid type the data in question is
stored with.

--
Calvin Walton
calvin.wal...@kepstin.ca
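Such an estimation tool is not hard to sketch for the data-chunk part of the problem. The following is a back-of-the-envelope greedy simulation, not anything btrfs itself does: it ignores metadata chunks and allocator details, and simply assumes each RAID-10 chunk stripes over the even number (at least four) of devices with the most free space, with half the raw space going to mirror copies.

```python
def raid10_estimate(sizes_gib):
    """Rough usable-data estimate (GiB) for a btrfs RAID-10 array,
    simulating chunk allocation 1 GiB at a time."""
    free = [int(s) for s in sizes_gib]
    usable = 0.0
    while True:
        free.sort(reverse=True)
        n = sum(1 for f in free if f >= 1)
        if n < 4:
            break                 # RAID-10 needs >= 4 devices with space
        n -= n % 2                # stripe over an even number of devices
        for i in range(n):
            free[i] -= 1          # 1 GiB of raw space from each device
        usable += n / 2           # half the raw allocation holds data
    return usable

# Hypothetical mixed-size disks, in GiB:
print(raid10_estimate([100, 100, 100, 100]))  # -> 200.0 (sum / 2)
print(raid10_estimate([150, 100, 100, 14]))   # -> 28.0
```

The second example shows why df-style numbers mislead here: once the 14 GiB disk is full, no further chunk can find four devices with free space, so most of the large disks is unusable under this model.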