[PATCH] Btrfs: make error return negative in btrfs_sync_file()
It appears the error return should be negative.

Signed-off-by: Roel Kluin <roel.kl...@gmail.com>
---
But I fail to see how ret can be positive, unless maybe when we already
did a BUG()?

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index c020335..9d08096 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1133,7 +1133,7 @@ int btrfs_sync_file(struct file *file, struct dentry *dentry, int datasync)
 	}
 	mutex_lock(&dentry->d_inode->i_mutex);
 out:
-	return ret > 0 ? EIO : ret;
+	return ret > 0 ? -EIO : ret;
 }

 static const struct vm_operations_struct btrfs_file_vm_ops = {
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: btrfs kernel oops and hot storage removing
Hello Maksim,

Maksim 'max_posedon' Melnikau wrote (ao):
> I'm running btrfs on my sheevaplug with storage attached via usb. I use
> a multi-device configuration for testing (using different partitions to
> emulate this). I caught a kernel oops on hot-removing the storage
> (without umount/etc). The first time, one device decided to reboot
> itself; the second time, I manually turned it off and on. Basically I
> don't expect btrfs to work correctly in such a situation, but as far as
> I know it is designed to work correctly in a raid configuration, so
> maybe somebody expects better behavior even in such a situation.

You pull the entire raid array when removing the USB storage, as the
Sheevaplug has only one USB port. That will not work, and it is not the
fault of btrfs :-)

With kind regards,
Sander
--
Humilis IT Services and Solutions
http://www.humilis.net
[GIT PULL] Btrfs updates for 2.6.33
Hello everyone,

The btrfs-unstable master branch has some updates:

git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable.git master

It will pull into either 2.6.32 or 2.6.33-git. These are bug fixes,
mostly around the btrfs multi-device code and replacing failed drives.
It also includes a fix for the orphan cleanup fix in the last pull; this
version is much better.

Outside of fixes, this adds mount -o compress-force, which won't back
off compressing files when part of the file doesn't compress well.

Aneesh Kumar K.V (1) commits (+9/-4):
    Btrfs: Use correct values when updating inode i_size on fallocate

Chris Mason (1) commits (+11/-2):
    Btrfs: Add mount -o compress-force

Josef Bacik (4) commits (+15/-10):
    Btrfs: do not mark the chunk as readonly if in degraded mode (+5/-0)
    Btrfs: check total number of devices when removing missing (+2/-2)
    Btrfs: check return value of open_bdev_exclusive properly (+2/-2)
    Btrfs: run orphan cleanup on default fs root (+6/-6)

Miao Xie (1) commits (+0/-14):
    Btrfs: remove tree_search() in extent_map.c

Yang Hongyang (1) commits (+1/-0):
    Btrfs: fix a memory leak in btrfs_init_acl

Total: (8) commits

 fs/btrfs/acl.c        |    1 +
 fs/btrfs/ctree.h      |    1 +
 fs/btrfs/disk-io.c    |    6 ++
 fs/btrfs/extent_map.c |   14 --
 fs/btrfs/inode.c      |   22 +++---
 fs/btrfs/super.c      |    9 -
 fs/btrfs/volumes.c    |   13 +
 7 files changed, 36 insertions(+), 30 deletions(-)
Re: RAID-10 arrays built with btrfs md report 2x difference in available size?
> noticing from above ... size 931.51GB used 2.03GB ... 'used' more than
> the 'size'? more confused ...

For me, it looks as if 2.03GB is way smaller than 931.51GB (2 < 931),
no? Everything seems to be fine here.

And regarding your original mail: it seems that df is still lying about
the size of the btrfs fs, check
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg00758.html
Re: RAID-10 arrays built with btrfs md report 2x difference in available size?
> For me, it looks as if 2.03GB is way smaller than 931.51GB (2 < 931),
> no? Everything seems to be fine here.

gagh! i saw TB, not GB. 8-/

> And regarding your original mail: it seems that df is still lying
> about the size of the btrfs fs, check
> http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg00758.html

it is, and reading - "df is lying. The total bytes in the FS include all
4 drives. I need to fix up the math for the total available space." - it
looks like it's under control. thx!
Re: RAID-10 arrays built with btrfs md report 2x difference in available size?
> it is, and reading - "df is lying. The total bytes in the FS include
> all 4 drives. I need to fix up the math for the total available
> space." - it looks like it's under control. thx!

I think so too -- I have six 1TB drives in RAID-10 btrfs and it shows
that I have 5.5TB of free space... how can that be?

# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sde1        66G  3.8G   59G   7% /
/dev/sda        5.5T   28K  5.5T   1% /mnt/btrfs
how to check data and metadata type
Just a short question: how can I check the data and metadata modes of a
multi-device btrfs filesystem? btrfs-show is of no help, and df is still
not showing its correct size either (using the latest btrfs kernel
module and btrfs tools).
Re: RAID-10 arrays built with btrfs md report 2x difference in available size?
RK wrote:
> I think so too -- I have six 1TB drives in RAID-10 btrfs and it shows
> that I have 5.5TB of free space... how can that be?
>
> # df -h
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sde1        66G  3.8G   59G   7% /
> /dev/sda        5.5T   28K  5.5T   1% /mnt/btrfs

As has been discussed multiple times on the list, btrfs reports RAW
storage, so 6 x 1TB is 6TB. And the use rate will be double for each
block written (i.e. 2 blocks used) for raid10 (or raid1). And yes, it is
not what you expect, but it is the only method that can remain accurate
under the mixed raid modes possible on a per-file basis in btrfs.

jim
panic during rebalance, and now upon mount
Hi folks,

During a very lengthy btrfs-vol -b (3.5 days in), btrfs BUGged out. Upon
rebooting and trying to mount that fs, the exact same bug (with the
exact same call trace) happens. I moved up to 2.6.33-rc6 from the
gentoo-maintained 2.6.32-r2 to see what would happen, and it appears to
panic at the equivalent line of the same source file as before.

Let me know if I can do anything to assist. I won't do anything to the
disks for the next few days in case some forensics will be useful.

[  154.899692] device label bk0 devid 14 transid 34 /dev/mapper/btrn
[  154.958264] btrfs: use compression
[  202.394048] ------------[ cut here ]------------
[  202.394136] kernel BUG at fs/btrfs/extent-tree.c:5377!
[  202.394220] invalid opcode: [#1] SMP
[  202.394372] last sysfs file: /sys/devices/virtual/block/md1/md/metadata_version
[  202.394500] CPU 5
[  202.394655] Pid: 5838, comm: btrfs-relocate- Tainted: G W 2.6.33-rc6 #1 P55M-GD45 (MS-7588) /MS-7588
[  202.394787] RIP: 0010:[8129e5ad] [8129e5ad] walk_up_proc+0x37d/0x3c0
[  202.394955] RSP: 0018:880139729ca0 EFLAGS: 00010282
[  202.395039] RAX: 0218 RBX: 88013c460300 RCX: 880139728000
[  202.395127] RDX: 8800 RSI: fff8 RDI: 880138ac08e0
[  202.395214] RBP: 880139729d00 R08: 0008 R09: 0001
[  202.395301] R10: 0001 R11: 0001 R12: 880138ab8880
[  202.395389] R13: R14: 88013f72f880 R15: 88013b646800
[  202.395476] FS: () GS:88002834() knlGS:
[  202.395606] CS: 0010 DS: ES: CR0: 8005003b
[  202.395691] CR2: 00425f40 CR3: 018d3000 CR4: 06e0
[  202.395778] DR0: DR1: DR2:
[  202.395865] DR3: DR6: 0ff0 DR7: 0400
[  202.395953] Process btrfs-relocate- (pid: 5838, threadinfo 880139728000, task 88013f0e28f0)
[  202.396083] Stack:
[  202.396162]  880139729cf0 0002 88013f72f880 0206
[  202.397142]  0 880139729d30 880138ac08e0
[  202.397444]  0 88013c460300 88013f72f880 880139728000
[  202.397856] Call Trace:
[  202.397937]  [8129e72f] walk_up_tree+0x13f/0x1c0
[  202.398023]  [8129f99c] btrfs_drop_snapshot+0x21c/0x600
[  202.398110]  [812a9dd0] ? __btrfs_end_transaction+0x100/0x170
[  202.398198]  [812e7d7d] merge_func+0x7d/0xc0
[  202.398284]  [812d25aa] worker_loop+0x17a/0x540
[  202.398379]  [812d2430] ? worker_loop+0x0/0x540
[  202.398487]  [812d2430] ? worker_loop+0x0/0x540
[  202.398611]  [81095936] kthread+0x96/0xa0
[  202.398697]  [81034bd4] kernel_thread_helper+0x4/0x10
[  202.398784]  [816ac869] ? restore_args+0x0/0x30
[  202.398869]  [810958a0] ? kthread+0x0/0xa0
[  202.398953]  [81034bd0] ? kernel_thread_helper+0x0/0x10
[  202.399039] Code: 6d db b6 6d 48 c1 f8 03 48 0f af c2 48 ba 00 00 00 00 00 88 ff ff 48 c1 e0 0c 48 8b 44 10 58 ff 49 1c 48 39 c6 0f 84 ab fd ff ff <0f> 0b eb fe 0f 1f 80 00 00 00 00 47 8b 4c ae 60 45 85 c9 0f 85
[  202.401551] RIP [8129e5ad] walk_up_proc+0x37d/0x3c0
[  202.401671] RSP 880139729ca0
[  202.401796] ---[ end trace 4c085bcc2bd215f6 ]---

--
Troy