Re: btrfs balance on single device
On Wed, Dec 18, 2013 at 11:29 AM, Leonidas Spyropoulos <artafi...@gmail.com> wrote:

On Wed, Dec 18, 2013 at 11:05:29AM +0000, Hugo Mills wrote:

On Wed, Dec 18, 2013 at 10:44:43AM +0000, Leonidas Spyropoulos wrote:

I'm using the same subject as it might be relevant, feel free to change it.

I'm trying to do some maintenance on a system running a btrfs file system on root (/). I started a balance on the '/' partition and it failed with the information below:

$ sudo btrfs balance start /
[sudo] password for inglor:
ERROR: error during balancing '/' - No space left on device
There may be more info in syslog - try dmesg | tail
$ dmesg | tail
[93827.115887] btrfs: found 29461 extents
[93827.481849] btrfs: relocating block group 29855055872 flags 1
[93841.646011] btrfs: found 33171 extents
[93851.421207] btrfs: found 33171 extents
[93851.782054] btrfs: relocating block group 28781314048 flags 1
[93866.815342] btrfs: found 52535 extents
[93877.159354] btrfs: found 52534 extents
[93877.356805] btrfs: relocating block group 28747759616 flags 34
[93880.287185] btrfs: found 1 extents
[93880.608798] btrfs: 1 enospc errors during balance

You don't specify your kernel version, but if it's older than 3.11 or so, you should probably upgrade -- 3.10 and earlier had occasional bugs where the block reserve system never kept enough blocks free to add a new metadata chunk when it was needed, which led to exactly this kind of symptom.

You are right, apologies. It is an up-to-date Arch Linux box with this kernel:

$ uname -a
Linux tiamat 3.12.5-1-ARCH #1 SMP PREEMPT Thu Dec 12 12:57:31 CET 2013 x86_64 GNU/Linux

Alternatively -- and this is a bit of a long shot, given that the error seems to have happened while relocating your system chunk (which argues against this particular diagnosis) -- do you have a large file on that filesystem (larger than 1 GiB)?
Unlikely, since the btrfs file system in question is '/' excluding the /opt and /media directories (these are other partitions):

$ sudo find / -type f -size +1048576k -and -not -path /media* -print
/proc/kcore
find: `/proc/27221/task/27221/fd/5': No such file or directory
find: `/proc/27221/task/27221/fdinfo/5': No such file or directory
find: `/proc/27221/fd/5': No such file or directory
find: `/proc/27221/fdinfo/5': No such file or directory
find: `/run/user/1000/gvfs': Permission denied
inglor@tiamat ~$

If so, I would recommend switching to a 3.12 kernel and running a defrag on the file. There's a known and now-fixed bug where you can get ENOSPC while balancing if a file has an extent larger than 1 GiB in size. (The bug being that there's an extent over 1 GiB in size in the first place.)

I might try the defrag option anyway and restart the balance operation, to see if it helps.

Some progress on this. I managed to do a balance on data only. The problem seems to happen when doing a metadata balance:

$ sudo btrfs balance start -m /
[sudo] password for inglor:
ERROR: error during balancing '/' - No space left on device
There may be more info in syslog - try dmesg | tail
$ dmesg | tail
[171492.384314] systemd-journald[183]: Deleted empty journal /var/log/journal/64cfb6f6c9d1625e7fa463c20475/user-120@8b61bc353813451babcaa25dfc82c64e--.journal (2924544 bytes).
[171492.384375] systemd-journald[183]: Vacuuming done, freed 2924544 bytes
[172242.011051] btrfs: relocating block group 109781712896 flags 36
[172242.075298] btrfs: relocating block group 109748158464 flags 34
[172242.286016] btrfs: found 1 extents
[172242.419286] btrfs: 1 enospc errors during balance

Is there a way to recreate the metadata? (I'm guessing the answer is balance..)

Thanks,
Leonidas

Hugo.

--
=== Hugo Mills: hugo@...
carfax.org.uk | darksatanic.net | lug.org.uk ===
PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
--- I'd make a joke about UDP, but I don't know if anyone's actually listening... ---

--
Caution: breathing may be hazardous to your health.

#include <stdio.h>
int main(){printf("%s", "\x4c\x65\x6f\x6e\x69\x64\x61\x73");}

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] btrfs: ioctls would need unique id
On Thu, Dec 19, 2013 at 12:06:32PM +0800, Anand Jain wrote:

BTRFS_IOC_SET_FEATURES and BTRFS_IOC_GET_SUPPORTED_FEATURES conflict with BTRFS_IOC_GET_FEATURES.

Signed-off-by: Anand Jain <anand.j...@oracle.com>
---
 include/uapi/linux/btrfs.h | 4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h
index 7d7f776..0fe736e 100644
--- a/include/uapi/linux/btrfs.h
+++ b/include/uapi/linux/btrfs.h
@@ -634,9 +634,9 @@ struct btrfs_ioctl_fslist_args {
 				struct btrfs_ioctl_fslist_args)
 #define BTRFS_IOC_GET_FEATURES _IOR(BTRFS_IOCTL_MAGIC, 57, \
 				struct btrfs_ioctl_feature_flags)
-#define BTRFS_IOC_SET_FEATURES _IOW(BTRFS_IOCTL_MAGIC, 57, \
+#define BTRFS_IOC_SET_FEATURES _IOW(BTRFS_IOCTL_MAGIC, 58, \
 				struct btrfs_ioctl_feature_flags[2])
-#define BTRFS_IOC_GET_SUPPORTED_FEATURES _IOR(BTRFS_IOCTL_MAGIC, 57, \
+#define BTRFS_IOC_GET_SUPPORTED_FEATURES _IOR(BTRFS_IOCTL_MAGIC, 59, \
 				struct btrfs_ioctl_feature_flags[3])

The ioctls are distinct as they are, because they have different-sized parameters. Changing these numbers is changing the public interface, which is a big no-no.

Hugo.

--
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
--- I think that everything darkling says is actually a joke. It's just that we haven't worked out most of them yet. ---

signature.asc
Description: Digital signature
[PATCH v6] btrfs-progs: fs show should handle if subvol(s) mounted
As of now, without this patch, the user sees one fsinfo entry per btrfs mount path, which means multiple entries if more than one subvol of the same fsid is mounted. This patch handles that case nicely.

Signed-off-by: Anand Jain <anand.j...@oracle.com>
Signed-off-by: David Sterba <dste...@suse.cz>
---
v6: This patch depends on the new kernel ioctl BTRFS_IOC_GET_FSLIST. That means configurations running an old kernel with new btrfs-progs would end up with ENOTTY, and would then fail to show mounted btrfs filesystems. As Chris' patch 5aff090a3951e7d787b32bb5c49adfec65091385 introduced a cool way to get around the issue without kernel dependencies, v6 will check for an ENOTTY error and, if found, fall back to Chris' logic. However, it will still print the ENOTTY error so that the user will upgrade the kernel.
v5: fix up missed mem free, thanks David
v4: rebase on integration-20131114
v3: accepts Josef's suggestions
v2: accepts Zach's suggestions

 cmds-filesystem.c | 83 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 utils.c           | 60 ++++++++++++++++++++++++++++++++++++++
 utils.h           |  1 +
 3 files changed, 143 insertions(+), 1 deletions(-)

diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index 46c6eaa..c50a65f 100644
--- a/cmds-filesystem.c
+++ b/cmds-filesystem.c
@@ -397,6 +397,29 @@ static int print_one_fs(struct btrfs_ioctl_fs_info_args *fs_info,
 	return 0;
 }
 
+static void handle_print(char *mnt, char *label)
+{
+	int fd;
+	struct btrfs_ioctl_fs_info_args fs_info_arg;
+	struct btrfs_ioctl_dev_info_args *dev_info_arg = NULL;
+	struct btrfs_ioctl_space_args *space_info_arg;
+
+	if (get_fs_info(mnt, &fs_info_arg, &dev_info_arg)) {
+		fprintf(stdout, "ERROR: get_fs_info failed\n");
+		return;
+	}
+
+	fd = open(mnt, O_RDONLY);
+	if (fd != -1 && !get_df(fd, &space_info_arg)) {
+		print_one_fs(&fs_info_arg, dev_info_arg,
+				space_info_arg, label, mnt);
+		kfree(space_info_arg);
+	}
+	if (fd != -1)
+		close(fd);
+	kfree(dev_info_arg);
+}
+
 /* This function checks if the given input parameter is
  * an uuid or a path
  * return -1: some error in the given input
@@ -429,6 +452,56 @@ static int check_arg_type(char *input)
 	return BTRFS_ARG_UNKNOWN;
 }
 
+static int btrfs_scan_kernel_v2(void *search)
+{
+	int ret = 0;
+	char label[BTRFS_LABEL_SIZE];
+	char mnt[BTRFS_PATH_NAME_MAX + 1];
+	struct btrfs_ioctl_fslist *fslist;
+	struct btrfs_ioctl_fslist *fslist_saved;
+	u64 cnt_fs;
+	int cnt_mnt;
+	__u8 *fsid;
+	__u64 flags;
+	int found = 0;
+
+	ret = get_fslist(&fslist, &cnt_fs);
+	if (ret)
+		return ret;
+	fslist_saved = fslist;
+	while (cnt_fs--) {
+		fsid = fslist->fsid;
+		flags = fslist->flags;
+		fslist++;
+		if (!(flags & BTRFS_FS_MOUNTED))
+			continue;
+		memset(mnt, 0, BTRFS_PATH_NAME_MAX + 1);
+		memset(label, 0, sizeof(label));
+		ret = fsid_to_mntpt(fsid, mnt, &cnt_mnt);
+		if (ret)
+			break;
+
+		if (get_label_mounted(mnt, label)) {
+			ret = 1;
+			break;
+		}
+
+		if (search && !match_search_item_kernel(fsid,
+					mnt, label, search))
+			continue;
+
+		handle_print(mnt, label);
+		if (search) {
+			found = 1;
+			break;
+		}
+	}
+	kfree(fslist_saved);
+	if (search && !found)
+		return 1;
+	return ret;
+}
+
 static int btrfs_scan_kernel(void *search)
 {
 	int ret = 0, fd;
@@ -605,6 +678,12 @@ static int cmd_show(int argc, char **argv)
 			goto devs_only;
 		}
 	}
+	} else if (type == BTRFS_ARG_MNTPOINT) {
+		char label[BTRFS_LABEL_SIZE];
+		if (get_label_mounted(search, label))
+			return 1;
+		handle_print(search, label);
+		return 0;
 	}
 }
@@ -612,7 +691,9 @@
 		goto devs_only;
 
 	/* show mounted btrfs */
-	ret = btrfs_scan_kernel(search);
+	ret = btrfs_scan_kernel_v2(search);
+	if (ret == -ENOTTY)
+		ret = btrfs_scan_kernel(search);
 	if (search && !ret)
 		return 0;

diff --git a/utils.c b/utils.c
index 136ec0e..d505184 100644
--- a/utils.c
+++ b/utils.c
@@ -2202,3 +2202,63 @@ out:
 	close(fd);
 	return ret;
 }
+
+/* This finds the mount point for a given fsid,
+ * subvols of the same
Unmountable Array After Drive Failure During Device Deletion
I'm using btrfs in data and metadata RAID10 on drives (not on md or any other fanciness). I was removing a drive (btrfs dev del), and during that operation a different drive in the array failed. Having not had this happen before, I shut down the machine immediately due to the extremely loud piezo buzzer on the drive controller card. I attempted to do so cleanly, but the buzzer cut through my patience and after 4 minutes I cut the power. Afterwards, I located and removed the failed drive from the system, and then got back to Linux.

The array no longer mounts (failed to read the system array on sdc), with nearly identical messages when attempted with -o recovery and -o recovery,ro. btrfsck asserts and core dumps, as usual. The drive that was being removed is devid 9 in the array, and is /dev/sdm1 in the btrfs fi show seen below. Kernel 3.12.4-1-ARCH, btrfs-progs v0.20-rc1-358-g194aa4a-dirty (Arch Linux build).

Can I recover the array?

== dmesg during failure ==
...
sd 0:2:3:0: [sdd] Unhandled error code
sd 0:2:3:0: [sdd] Result: hostbyte=0x04 driverbyte=0x00
sd 0:2:3:0: [sdd] CDB: cdb[0]=0x2a: 2a 00 26 89 5b 00 00 00 80 00
end_request: I/O error, dev sdd, sector 646535936
btrfs_dev_stat_print_on_error: 7791 callbacks suppressed
btrfs: bdev /dev/sdd errs: wr 315858, rd 230194, flush 0, corrupt 0, gen 0
sd 0:2:3:0: [sdd] Unhandled error code
sd 0:2:3:0: [sdd] Result: hostbyte=0x04 driverbyte=0x00
sd 0:2:3:0: [sdd] CDB: cdb[0]=0x2a: 2a 00 26 89 5b 80 00 00 80 00
end_request: I/O error, dev sdd, sector 646536064
...
== dmesg after new boot, mounting attempt ==
btrfs: device label lake devid 11 transid 4893967 /dev/sda
btrfs: disk space caching is enabled
btrfs: failed to read the system array on sdc
btrfs: open_ctree failed

== dmesg after new boot, mounting attempt with -o recovery,ro ==
btrfs: device label lake devid 11 transid 4893967 /dev/sda
btrfs: enabling auto recovery
btrfs: disk space caching is enabled
btrfs: failed to read the system array on sdc
btrfs: open_ctree failed

== btrfsck ==
deep# btrfsck /dev/sda
warning, device 14 is missing
warning devid 14 not found already
parent transid verify failed on 87601116364800 wanted 4893969 found 4893913
parent transid verify failed on 87601116364800 wanted 4893969 found 4893913
parent transid verify failed on 87601116381184 wanted 4893969 found 4893913
parent transid verify failed on 87601116381184 wanted 4893969 found 4893913
parent transid verify failed on 87601115320320 wanted 4893969 found 4893913
parent transid verify failed on 87601115320320 wanted 4893969 found 4893913
parent transid verify failed on 87601117097984 wanted 4893969 found 4892460
parent transid verify failed on 87601117097984 wanted 4893969 found 4892460
Ignoring transid failure
Checking filesystem on /dev/sda
UUID: d5e17c49-d980-4bde-bd96-3c8bc95ea077
checking extents
parent transid verify failed on 87601117159424 wanted 4893969 found 4893913
parent transid verify failed on 87601117159424 wanted 4893969 found 4893913
parent transid verify failed on 87601116368896 wanted 4893969 found 4893913
parent transid verify failed on 87601116368896 wanted 4893969 found 4893913
parent transid verify failed on 87601117163520 wanted 4893969 found 4893913
parent transid verify failed on 87601117163520 wanted 4893969 found 4893913
parent transid verify failed on 87601117638656 wanted 4893969 found 4893913
parent transid verify failed on 87601117638656 wanted 4893969 found 4893913
Ignoring transid failure
parent transid verify failed on 87601117171712 wanted 4893969 found 4893913
parent transid verify failed on 87601117171712 wanted 4893969 found 4893913
parent transid verify failed on 87601117175808 wanted 4893969 found 4893913
parent transid verify failed on 87601117175808 wanted 4893969 found 4893913
parent transid verify failed on 87601117188096 wanted 4893969 found 4893913
parent transid verify failed on 87601117188096 wanted 4893969 found 4893913
parent transid verify failed on 87601116807168 wanted 4893969 found 4893913
parent transid verify failed on 87601116807168 wanted 4893969 found 4893913
Ignoring transid failure
parent transid verify failed on 87601117642752 wanted 4893969 found 4893913
parent transid verify failed on 87601117642752 wanted 4893969 found 4893913
Ignoring transid failure
parent transid verify failed on 87601117650944 wanted 4893969 found 4893913
parent transid verify failed on 87601117650944 wanted 4893969 found 4893913
Ignoring transid failure
Couldn't map the block 5764607523034234880
btrfsck: volumes.c:1019: btrfs_num_copies: Assertion `!(!ce)' failed.
zsh: abort (core dumped)  btrfsck /dev/sda

== btrfs fi show ==
Label: 'lake'  uuid: d5e17c49-d980-4bde-bd96-3c8bc95ea077
	Total devices 10 FS bytes used 7.43TB
	devid    9 size 1.82TB used 1.61TB path /dev/sdm1
	devid   12 size 1.82TB used 1.47TB path /dev/sdb
	devid   16 size 1.82TB used 1.47TB path /dev/sde
	devid   13 size 1.82TB used 1.47TB path /dev/sdc
	devid   11
kernel BUG at fs/btrfs/relocation.c:1062
Got this while running balance. Some other operations were running in parallel (rsync etc.).

[152727.584641] [ cut here ]
[152727.584723] kernel BUG at fs/btrfs/relocation.c:1062!
[152727.584802] invalid opcode: [#1] SMP
[152727.584943] Modules linked in: veth ipt_MASQUERADE iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack ip_tables x_tables cpufreq_ondemand cpufreq_conservative cpufreq_powersave cpufreq_stats bridge stp llc ipv6 btrfs xor raid6_pq zlib_deflate loop video i2c_i801 i2c_core button acpi_cpufreq pcspkr ehci_pci ehci_hcd lpc_ich mfd_core ext4 crc16 jbd2 mbcache raid1 sg sd_mod r8169 mii ahci libahci libata scsi_mod
[152727.586767] CPU: 3 PID: 1540 Comm: btrfs Tainted: G W 3.13.0-rc4 #1
[152727.586888] Hardware name: System manufacturer System Product Name/P8H77-M PRO, BIOS 1101 02/04/2013
[152727.587011] task: 8805bedf9710 ti: 8805a9d9a000 task.ti: 8805a9d9a000
[152727.587132] RIP: 0010:[a02ab31f] [a02ab31f] build_backref_tree+0xb03/0xe7a [btrfs]
[152727.587300] RSP: 0018:8805a9d9b898 EFLAGS: 00010246
[152727.587379] RAX: 8805a9d9b900 RBX: 8805a9d9b920 RCX: 88055a72fa10
[152727.587499] RDX: 88055a72ff50 RSI: 88055a72ff80 RDI: 8807e8878600
[152727.587619] RBP: 8805a9d9b998 R08: 8807e8878b80 R09: 1000
[152727.587739] R10: 1000 R11: 1600 R12:
[152727.587859] R13: 88055a72ff90 R14: 8805a9d9b930 R15: 88028e648000
[152727.587980] FS: 7fca66eca840() GS:88081fac() knlGS:
[152727.588101] CS: 0010 DS: ES: CR0: 80050033
[152727.588180] CR2: 7fac0ba8 CR3: 0007006d9000 CR4: 001407e0
[152727.588300] Stack:
[152727.588374] 8800611992c0 8806c99dd500 8807e8878600
[152727.588623] 88055a72fde0 8806c99dd240 88028e648124 88028e648124
[152727.588871] 88028e648120 88028e648020 8807ece02c60 8807ece02a20
[152727.589120] Call Trace:
[152727.589205] [a02ac13a] relocate_tree_blocks+0x1c0/0x544 [btrfs]
[152727.589292] [a024f2a9] ? btrfs_release_path+0x6b/0x8a [btrfs]
[152727.589382] [a02ad399] relocate_block_group+0x23f/0x4c7 [btrfs]
[152727.589470] [a02ad76f] btrfs_relocate_block_group+0x14e/0x28d [btrfs]
[152727.589598] [a028c8d2] btrfs_relocate_chunk.isra.65+0x58/0x60e [btrfs]
[152727.589727] [a029a1ff] ? btrfs_set_lock_blocking_rw+0x89/0xb2 [btrfs]
[152727.589853] [a024f1b7] ? btrfs_set_path_blocking+0x23/0x54 [btrfs]
[152727.589979] [a0253a1b] ? btrfs_search_slot+0x72f/0x789 [btrfs]
[152727.590068] [a0288be1] ? free_extent_buffer+0x6f/0x7c [btrfs]
[152727.590157] [a028f709] btrfs_balance+0x9fe/0xbe0 [btrfs]
[152727.590245] [a02954c9] btrfs_ioctl_balance+0x220/0x29f [btrfs]
[152727.590334] [a0298a66] btrfs_ioctl+0xfce/0x2128 [btrfs]
[152727.590417] [810c98c1] ? handle_mm_fault+0x24f/0x965
[152727.590498] [810ccc11] ? __vm_enough_memory+0x26/0x13d
[152727.590581] [8110219e] do_vfs_ioctl+0x3f7/0x441
[152727.590661] [8110223a] SyS_ioctl+0x52/0x80
[152727.590741] [8138f222] system_call_fastpath+0x16/0x1b
[152727.590821] Code: 60 71 fd 49 89 40 50 49 89 40 58 49 8b 40 58 4d 89 68 58 49 83 c0 50 4d 89 45 00 49 89 45 08 4c 89 28 e9 aa 00 00 00 a8 10 75 02 0f 0b 83 e0 01 41 39 c4 74 02 0f 0b 45 85 e4 75 32 49 8b 70 18
[152727.593073] RIP [a02ab31f] build_backref_tree+0xb03/0xe7a [btrfs]
[152727.593194] RSP 8805a9d9b898
[152727.593290] ---[ end trace b67170e8ece9f591 ]---

--
Tomasz Chmielewski
http://wpkg.org
Re: kernel BUG at fs/btrfs/relocation.c:1062
Hello Tomasz,

This seems to be a known bug that has been fixed by Josef. What is your kernel version?

Thanks,
Wang

On 12/19/2013 08:09 PM, Tomasz Chmielewski wrote:
> Got this while running balance. Some other operations were running in parallel (rsync etc.).
> [152727.584723] kernel BUG at fs/btrfs/relocation.c:1062!
> [...]
Re: kernel BUG at fs/btrfs/relocation.c:1062
It was 3.13.0-rc4.

On Thu, 19 Dec 2013 20:14:17 +0800, Wang Shilong <wangsl.f...@cn.fujitsu.com> wrote:
> Hello Tomasz,
> This seems to be a known bug that has been fixed by Josef. What is your kernel version?
> Thanks,
> Wang
> [...]
Re: [PATCH] btrfs: Cleanup the unused btrfs_check_super_valid.
On Wed, Dec 18, 2013 at 11:14:28AM +0800, Qu Wenruo wrote:

Since David's commit 1104a8855, nothing actually checks the super block any more, so the btrfs_check_super_valid function can be removed if no one else needs it.

Signed-off-by: Qu Wenruo <quwen...@cn.fujitsu.com>
Cc: David Sterba <dste...@suse.cz>
---
-static int btrfs_check_super_valid(struct btrfs_fs_info *fs_info,
-				   int read_only)
-{
-	/*
-	 * Placeholder for checks

The comment should motivate to add more tests, not to remove the supporting code. Please keep it.

-	 */
-	return 0;
-}
Re: kernel BUG at fs/btrfs/relocation.c:1062
On 12/19/2013 08:30 PM, Tomasz Chmielewski wrote:
> It was 3.13.0-rc4.

I took a look at line 1062; this should be a new bug!!!

Thanks,
Wang
Re: kernel BUG at fs/btrfs/relocation.c:1062
If it matters, I had to hard reboot after that bug; the balance continued after the system booted again and I got this a while later (filesystem was remounted read only): [ 1781.321219] btrfs: found 232188 extents [ 1781.994796] btrfs: relocating block group 3443586498560 flags 20 [ 2603.422490] btrfs: found 203955 extents [ 2606.188826] btrfs: relocating block group 3051670732800 flags 20 [ 2806.720510] BTRFS debug (device sdb5): run_one_delayed_ref returned -28 [ 2806.720513] [ cut here ] [ 2806.720530] WARNING: CPU: 1 PID: 2359 at fs/btrfs/super.c:254 __btrfs_abort_transaction+0x4d/0xff [btrfs]() [ 2806.720544] btrfs: Transaction aborted (error -28) [ 2806.720544] Modules linked in: veth ipt_MASQUERADE iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack ip_tables x_tables cpufreq_ondemand cpufreq_conservative cpufreq_powersave cpufreq_stats bridge stp llc ipv6 btrfs xor raid6_pq zlib_deflate loop pcspkr lpc_ich mfd_core i2c_i801 i2c_core button acpi_cpufreq ehci_pci video ehci_hcd ext4 crc16 jbd2 mbcache raid1 sg sd_mod ahci libahci libata scsi_mod r8169 mii [ 2806.720626] CPU: 1 PID: 2359 Comm: btrfs-transacti Not tainted 3.13.0-rc4 #1 [ 2806.720636] Hardware name: System manufacturer System Product Name/P8H77-M PRO, BIOS 1101 02/04/2013 [ 2806.720650] 0009 8807ed73dc48 8138a37d 0006 [ 2806.720705] 8807ed73dc98 8807ed73dc88 810370a9 8807ed73dd80 [ 2806.720797] a0227524 ffe4 8807f3389000 8807efad4f00 [ 2806.720889] Call Trace: [ 2806.720935] [8138a37d] dump_stack+0x46/0x58 [ 2806.720986] [810370a9] warn_slowpath_common+0x77/0x91 [ 2806.721005] BTRFS error (device sdb5) in __btrfs_free_extent:5783: errno=-28 No space left [ 2806.721006] BTRFS info (device sdb5): forced readonly [ 2806.721007] BTRFS debug (device sdb5): run_one_delayed_ref returned -28 [ 2806.721008] BTRFS error (device sdb5) in btrfs_run_delayed_refs:2730: errno=-28 No space left [ 2806.721276] [a0227524] ? 
__btrfs_abort_transaction+0x4d/0xff [btrfs] [ 2806.721372] [81037157] warn_slowpath_fmt+0x41/0x43 [ 2806.721426] [a0227524] __btrfs_abort_transaction+0x4d/0xff [btrfs] [ 2806.721482] [a023c6ed] btrfs_run_delayed_refs+0x253/0x46f [btrfs] [ 2806.721538] [a0249aef] btrfs_commit_transaction+0x70/0x7df [btrfs] [ 2806.721593] [a0248345] transaction_kthread+0xef/0x1c2 [btrfs] [ 2806.721646] [a0248256] ? open_ctree+0x1ac7/0x1ac7 [btrfs] [ 2806.721697] [8104ee9a] kthread+0xcd/0xd5 [ 2806.721744] [8104edcd] ? kthread_freezable_should_stop+0x43/0x43 [ 2806.721794] [8138f17c] ret_from_fork+0x7c/0xb0 [ 2806.721844] [8104edcd] ? kthread_freezable_should_stop+0x43/0x43 [ 2806.721893] ---[ end trace 5d2cc0a807b9d02a ]--- [ 2806.721942] BTRFS error (device sdb5) in btrfs_run_delayed_refs:2730: errno=-28 No space left -- Tomasz Chmielewski http://wpkg.org On Thu, 19 Dec 2013 20:49:04 +0800 Wang Shilong wangsl.f...@cn.fujitsu.com wrote: On 12/19/2013 08:30 PM, Tomasz Chmielewski wrote: It was 3.13.0-rc4. I take a look at line 1062, this should be a new bug!!! Thanks, Wang On Thu, 19 Dec 2013 20:14:17 +0800 Wang Shilong wangsl.f...@cn.fujitsu.com wrote: Hello Tomasz, This seems a known bug that has been fixed by josef, what is your kernel version? Thanks, Wang On 12/19/2013 08:09 PM, Tomasz Chmielewski wrote: Got this while running balance. Some other operations were running in parallel (rsync etc.). [152727.584641] [ cut here ] [152727.584723] kernel BUG at fs/btrfs/relocation.c:1062! 
[152727.584802] invalid opcode: [#1] SMP [152727.584943] Modules linked in: veth ipt_MASQUERADE iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack ip_tables x_tables cpufreq_ondemand cpufreq_conservative cpufreq_powersave cpufreq_stats bridge stp llc ipv6 btrfs xor raid6_pq zlib_deflate loop video i2c_i801 i2c_core button acpi_cpufreq pcspkr ehci_pci ehci_hcd lpc_ich mfd_core ext4 crc16 jbd2 mbcache raid1 sg sd_mod r8169 mii ahci libahci libata scsi_mod [152727.586767] CPU: 3 PID: 1540 Comm: btrfs Tainted: GW 3.13.0-rc4 #1 [152727.586888] Hardware name: System manufacturer System Product Name/P8H77-M PRO, BIOS 1101 02/04/2013 [152727.587011] task: 8805bedf9710 ti: 8805a9d9a000 task.ti: 8805a9d9a000 [152727.587132] RIP: 0010:[a02ab31f] [a02ab31f] build_backref_tree+0xb03/0xe7a [btrfs] [152727.587300] RSP: 0018:8805a9d9b898 EFLAGS: 00010246 [152727.587379] RAX: 8805a9d9b900 RBX: 8805a9d9b920 RCX: 88055a72fa10 [152727.587499] RDX: 88055a72ff50 RSI: 88055a72ff80 RDI: 8807e8878600 [152727.587619] RBP: 8805a9d9b998 R08: 8807e8878b80 R09: 1000 [152727.587739] R10: 1000 R11:
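An aside for readers decoding the "relocating block group ... flags N" lines in these traces (flags 1, 20, 34, 36 above): the value is a bitmask of chunk type and replication profile. A minimal Python sketch, with bit values taken from the btrfs on-disk format constants; verify them against the ctree.h of the kernel that produced the log:

```python
# Decode the "flags" field from "btrfs: relocating block group ... flags N"
# dmesg lines. Bit values assumed from the btrfs on-disk format headers
# (BTRFS_BLOCK_GROUP_*); check against your kernel's ctree.h.
BLOCK_GROUP_FLAGS = {
    0x01: "DATA",
    0x02: "SYSTEM",
    0x04: "METADATA",
    0x08: "RAID0",
    0x10: "RAID1",
    0x20: "DUP",
    0x40: "RAID10",
}

def decode_flags(value):
    names = [n for bit, n in sorted(BLOCK_GROUP_FLAGS.items()) if value & bit]
    return "|".join(names) or "UNKNOWN"

# Values seen in the logs in these threads:
print(decode_flags(1))    # DATA
print(decode_flags(20))   # METADATA|RAID1
print(decode_flags(34))   # SYSTEM|DUP
print(decode_flags(36))   # METADATA|DUP
```

Decoding flags 34 as SYSTEM|DUP is consistent with Hugo's reading that the enospc hit while relocating the system chunk.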
Re: Btrfs stable updates for 3.12
On Wed, Dec 18, 2013 at 12:51:54PM -0800, Greg KH wrote: On Wed, Dec 18, 2013 at 04:14:02PM +0100, David Sterba wrote: Hi, please queue the following patches to 3.12 stable. They fix a few crashes or lockups that were reported by users. The patch stop using vfs_read in send may seem big for stable, but without it the send/receive ioctl hits the global open file limit sooner or later, depending on the ram size. Subjects: Btrfs: do a full search everytime in btrfs_search_old_slot Btrfs: reset intwrite on transaction abort Btrfs: fix memory leak of chunks' extent map Btrfs: fix hole check in log_one_extent [bug 1] Btrfs: fix incorrect inode acl reset Btrfs: stop using vfs_read in send Btrfs: take ordered root lock when removing ordered operations inode Btrfs: do not run snapshot-aware defragment on error Btrfs: fix a crash when running balance and defrag concurrently Btrfs: fix lockdep error in async commit Commits: d4b4087c43cc00a196c5be57fac41f41309f1d56 e0228285a8cad70e4b7b4833cc650e36ecd8de89 7d3d1744f8a7d62e4875bd69cc2192a939813880 ed9e8af88e2551aaa6bf51d8063a2493e2d71597 8185554d3eb09d23a805456b6fa98dcbb34aa518 ed2590953bd06b892f0411fc94e19175d32f197a 93858769172c4e3678917810e9d5de360eb991cc 6f519564d7d978c00351d9ab6abac3deeac31621 48ec47364b6d493f0a9cdc116977bf3f34e5c3ec b1a06a4b574996692b72b742bf6e6aa0c711a948 all apply cleanly on top of 3.12.5. all now applied, along with 4 of these that seem to be applicable to 3.10-stable. Yes, the 4 from stable-queue/3.10 are ok for 3.10, and for 3.11 if that matters. Thanks, david
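Purely as an illustrative pre-flight check (not part of any btrfs or stable-tree tooling): before sending a list like the one above, it is easy to verify that every ID is a full 40-character SHA-1, guarding against truncated copy/paste:

```python
# Sanity-check that each commit ID in a stable request is a full
# 40-hex SHA-1. The list is the one from David's mail above.
import re

commits = """
d4b4087c43cc00a196c5be57fac41f41309f1d56
e0228285a8cad70e4b7b4833cc650e36ecd8de89
7d3d1744f8a7d62e4875bd69cc2192a939813880
ed9e8af88e2551aaa6bf51d8063a2493e2d71597
8185554d3eb09d23a805456b6fa98dcbb34aa518
ed2590953bd06b892f0411fc94e19175d32f197a
93858769172c4e3678917810e9d5de360eb991cc
6f519564d7d978c00351d9ab6abac3deeac31621
48ec47364b6d493f0a9cdc116977bf3f34e5c3ec
b1a06a4b574996692b72b742bf6e6aa0c711a948
""".split()

sha = re.compile(r"^[0-9a-f]{40}$")
bad = [c for c in commits if not sha.match(c)]
print("all OK" if not bad else f"malformed: {bad}")   # all OK
```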
Re: [PATCH 1/3] Btrfs: introduce lock_ref/unlock_ref
On 12/18/2013 11:01 PM, Dave Chinner wrote: On Wed, Dec 18, 2013 at 04:07:27PM -0500, Josef Bacik wrote: qgroups need to have a consistent view of the references for a particular extent record. Currently they do this through sequence numbers on delayed refs, but this is no longer acceptable. So instead introduce lock_ref/unlock_ref. This will provide the qgroup code with a consistent view of the reference while it does its accounting calculations without interfering with the delayed ref code. Thanks, Signed-off-by: Josef Bacik jba...@fb.com --- fs/btrfs/ctree.h | 11 ++ fs/btrfs/delayed-ref.c | 2 + fs/btrfs/delayed-ref.h | 1 + fs/btrfs/extent-tree.c | 102 +++-- 4 files changed, 113 insertions(+), 3 deletions(-) diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index a924274..8b3fd61 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -1273,6 +1273,9 @@ struct btrfs_block_group_cache { /* For delayed block group creation */ struct list_head new_bg_list; + + /* For locking reference modifications */ + struct extent_io_tree ref_lock; }; /* delayed seq elem */ @@ -3319,6 +3322,14 @@ int btrfs_init_space_info(struct btrfs_fs_info *fs_info); int btrfs_delayed_refs_qgroup_accounting(struct btrfs_trans_handle *trans, struct btrfs_fs_info *fs_info); int __get_raid_index(u64 flags); +int lock_ref(struct btrfs_fs_info *fs_info, u64 root_objectid, u64 bytenr, +u64 num_bytes, int for_cow, +struct btrfs_block_group_cache **block_group, +struct extent_state **cached_state); +int unlock_ref(struct btrfs_fs_info *fs_info, u64 root_objectid, u64 bytenr, + u64 num_bytes, int for_cow, + struct btrfs_block_group_cache *block_group, + struct extent_state **cached_state); Please namespace these - they are far too similar to the generic struct lockref name and manipulation functions Yup will do, thanks, Josef
Re: Btrfs stable updates for 3.12
On Thu, Dec 19, 2013 at 02:08:51PM +0100, David Sterba wrote: [...] Yes, the 4 from stable-queue/3.10 are ok for 3.10, and for 3.11 if that matters. Thanks, I'm queuing for 3.11 the same commits Greg has applied to the 3.10. Cheers, -- Luis
Re: [PATCH v4 00/18] Replace btrfs_workers with kernel workqueue based btrfs_workqueue
I got a panic with btrfs/012 in the worker stuff. I'm bisecting it down to figure out which patch introduced it, but I'm afraid it may just be one of the replace blah with btrfs_workqueue patches and not be super helpful. You may want to run it in a loop or something and see if you can trigger it in the meantime, and I'll respond whenever my bisect finishes. Thanks, Josef On 12/17/2013 04:31 AM, Qu Wenruo wrote: Add a new btrfs_workqueue_struct which uses the kernel workqueue to implement most of the original btrfs_workers, to replace btrfs_workers. With this patchset, the redundant workqueue code is replaced with kernel workqueue infrastructure, which not only reduces the code size but also the effort to maintain it. The result from sysbench shows minor improvement on the following server: CPU: two-way Xeon X5660 RAM: 4G HDD: SAS HDD, 150G total, 100G partition for btrfs test Test result on default mount option: https://docs.google.com/spreadsheet/ccc?key=0AhpkL3ehzX3pdENjajJTWFg5d1BWbExnYWFpMTJxeUEusp=sharing Test result on -o compress mount option: https://docs.google.com/spreadsheet/ccc?key=0AhpkL3ehzX3pdHdTTEJ6OW96SXJFaDR5enB1SzMzc0Eusp=sharing Changelog: v1-v2: - Fix some workqueue flags. v2-v3: - Add the thresholding mechanism to simulate the old behavior - Convert all the btrfs_workers to btrfs_workqueue_struct. - Fix some potential deadlocks when executed in an IRQ handler. v3-v4: - Change the ordered workqueue implementation to fix the performance drop in 32K multi-thread random write. - Change the high priority workqueue implementation to get an independent high-priority workqueue without the starvation problem. - Simplify the btrfs_alloc_workqueue parameters. - Coding style cleanup. - Remove the redundant _struct suffix. Qu Wenruo (18): btrfs: Cleanup the unused struct async_sched.
btrfs: Added btrfs_workqueue_struct implemented ordered execution based on kernel workqueue btrfs: Add high priority workqueue support for btrfs_workqueue_struct btrfs: Add threshold workqueue based on kernel workqueue btrfs: Replace fs_info->workers with btrfs_workqueue. btrfs: Replace fs_info->delalloc_workers with btrfs_workqueue btrfs: Replace fs_info->submit_workers with btrfs_workqueue. btrfs: Replace fs_info->flush_workers with btrfs_workqueue. btrfs: Replace fs_info->endio_* workqueue with btrfs_workqueue. btrfs: Replace fs_info->rmw_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info->cache_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info->readahead_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info->fixup_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info->delayed_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info->qgroup_rescan_worker workqueue with btrfs_workqueue. btrfs: Replace fs_info->scrub_* workqueue with btrfs_workqueue. btrfs: Cleanup the old btrfs_worker. btrfs: Cleanup the _struct suffix in btrfs_workequeue fs/btrfs/async-thread.c | 821 --- fs/btrfs/async-thread.h | 117 ++- fs/btrfs/ctree.h | 39 ++- fs/btrfs/delayed-inode.c | 6 +- fs/btrfs/disk-io.c | 212 +--- fs/btrfs/extent-tree.c | 4 +- fs/btrfs/inode.c | 38 +-- fs/btrfs/ordered-data.c | 11 +- fs/btrfs/qgroup.c| 15 +- fs/btrfs/raid56.c| 21 +- fs/btrfs/reada.c | 4 +- fs/btrfs/scrub.c | 70 ++-- fs/btrfs/super.c | 36 +-- fs/btrfs/volumes.c | 16 +- 14 files changed, 430 insertions(+), 980 deletions(-)
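The "ordered execution" idea in the second patch of the series can be illustrated outside the kernel. Below is a conceptual analogue in Python, not the btrfs code (which is C and built on the kernel workqueue): work items run concurrently on a pool, but each item's completion step fires strictly in submission order. All names here are illustrative.

```python
# Conceptual analogue of an "ordered workqueue": the unordered part of
# each work item may run and finish in any order, but the ordered part
# always runs in submission order.
import random
import threading
import time
from concurrent.futures import ThreadPoolExecutor

class OrderedQueue:
    def __init__(self, workers=4):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._lock = threading.Lock()
        self._pending = []        # (seq, result) of finished items
        self._next_seq = 0        # next sequence number to hand out
        self._next_done = 0       # next seq allowed to run its ordered step
        self.ordered_log = []     # results, collected in submission order

    def submit(self, fn, *args):
        with self._lock:
            seq = self._next_seq
            self._next_seq += 1
        self._pool.submit(self._run, seq, fn, args)

    def _run(self, seq, fn, args):
        result = fn(*args)        # unordered part: may complete out of order
        with self._lock:
            self._pending.append((seq, result))
            self._pending.sort(key=lambda x: x[0])
            # drain every finished item whose turn has come
            while self._pending and self._pending[0][0] == self._next_done:
                _, r = self._pending.pop(0)
                self.ordered_log.append(r)    # ordered part
                self._next_done += 1

    def join(self):
        self._pool.shutdown(wait=True)

q = OrderedQueue()
for i in range(8):
    q.submit(lambda i=i: (time.sleep(random.random() * 0.05), i)[1])
q.join()
print(q.ordered_log)   # always [0, 1, 2, 3, 4, 5, 6, 7]
```

The same shape appears in the patchset: checksum work can be parallel, but the bio completion side must be handed back in order.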
Re: [PATCH] Btrfs-progs: receive: fix the case that we can not find subvolume
On Tue, Dec 17, 2013 at 10:40:41AM -0500, Michael Welsh Duggan wrote: David Sterba dste...@suse.cz writes: On Tue, Dec 17, 2013 at 05:13:49PM +0800, Wang Shilong wrote: If we change our default subvolume, btrfs receive will fail to find the subvolume. To fix this problem, I have two ideas: 1. make the btrfs snapshot ioctl support passing the source subvolume's objectid 2. when we want to use an internal subvolume path, mount the filesystem somewhere else with subvolume 5 as its default subvolume. 3. Tell the user to mount the toplevel subvol by himself and run receive again Ugh. I hope that would be considered a short-term hack waiting for a better solution, perhaps requiring a kernel upgrade. From a user's perspective there is no reason this should be necessary, and requiring this would be extraordinarily surprising. Why is btrfs unable to find my snapshot? It's right there! Moreover, this used to work just fine in previous versions of btrfs-progs. It is a short-term fix, one that we can apply immediately without breaking things. A long-term fix is #1, but this would need more work and testing. david
Re: [PATCH] btrfs: ioctls would need unique id
On Thu, Dec 19, 2013 at 08:19:08AM +, Hugo Mills wrote: On Thu, Dec 19, 2013 at 12:06:32PM +0800, Anand Jain wrote: BTRFS_IOC_SET_FEATURES and BTRFS_IOC_GET_SUPPORTED_FEATURES conflict with BTRFS_IOC_GET_FEATURES Signed-off-by: Anand Jain anand.j...@oracle.com --- include/uapi/linux/btrfs.h |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h index 7d7f776..0fe736e 100644 --- a/include/uapi/linux/btrfs.h +++ b/include/uapi/linux/btrfs.h @@ -634,9 +634,9 @@ struct btrfs_ioctl_fslist_args { struct btrfs_ioctl_fslist_args) #define BTRFS_IOC_GET_FEATURES _IOR(BTRFS_IOCTL_MAGIC, 57, \ struct btrfs_ioctl_feature_flags) -#define BTRFS_IOC_SET_FEATURES _IOW(BTRFS_IOCTL_MAGIC, 57, \ +#define BTRFS_IOC_SET_FEATURES _IOW(BTRFS_IOCTL_MAGIC, 58, \ struct btrfs_ioctl_feature_flags[2]) -#define BTRFS_IOC_GET_SUPPORTED_FEATURES _IOR(BTRFS_IOCTL_MAGIC, 57, \ +#define BTRFS_IOC_GET_SUPPORTED_FEATURES _IOR(BTRFS_IOCTL_MAGIC, 59, \ struct btrfs_ioctl_feature_flags[3]) The ioctls are distinct as they are, as they have different-sized parameters. We're already using the trick to distinguish the ioctls by argument size, so it's not strictly necessary to make the main numbers different. Changing these numbers is changing the public interface, which is a big no-no. The related feature hasn't been merged yet so it's still time to change them. I'd say to keep it as it is now. david
Re: Unmountable Array After Drive Failure During Device Deletion
Chris Kastorff posted on Thu, 19 Dec 2013 01:26:57 -0800 as excerpted: I'm using btrfs in data and metadata RAID10 on drives (not on md or any other fanciness.) I was removing a drive (btrfs dev del) and during that operation, a different drive in the array failed. Having not had this happen before, I shut down the machine immediately due to the extremely loud piezo buzzer on the drive controller card. I attempted to do so cleanly, but the buzzer cut through my patience and after 4 minutes I cut the power. Afterwards, I located and removed the failed drive from the system, and then got back to linux. The array no longer mounts (failed to read the system array on sdc), with nearly identical messages when attempted with -o recovery and -o recovery,ro. This may be a stupid question, but you're missing a drive so the filesystem will be degraded, but you didn't mention that in your mount options, so... Did you try mounting with -o degraded (possibly with recovery, etc, also, but just try -o degraded plus any normal options first)? -- Duncan - List replies preferred. No HTML msgs. Every nonfree program has a lord, a master -- and if you use the program, he is your master. Richard Stallman
Re: btrfs on bcache
Any update on this? I have here exactly the same issue. Kernel 3.12.5-1-ARCH, backing device 500 GB IDE, cache 24 GB SSD = /dev/bcache0 On /dev/bcache I also have 2 subvolumes, / and /home. I get lots of messages in dmesg: (...) [ 22.282469] BTRFS info (device bcache0): csum failed ino 56193 off 212992 csum 519977505 expected csum 3166125439 [ 22.282656] incomplete page read in btrfs with offset 1024 and length 3072 [ 23.370872] incomplete page read in btrfs with offset 1024 and length 3072 [ 23.370890] BTRFS info (device bcache0): csum failed ino 57765 off 106496 csum 3553846164 expected csum 1299185721 [ 23.505238] incomplete page read in btrfs with offset 2560 and length 1536 [ 23.505256] BTRFS info (device bcache0): csum failed ino 75922 off 172032 csum 1883678196 expected csum 1337496676 [ 23.508535] incomplete page read in btrfs with offset 2560 and length 1536 [ 23.508547] BTRFS info (device bcache0): csum failed ino 74368 off 237568 csum 2863587994 expected csum 2693116460 [ 25.683059] incomplete page read in btrfs with offset 2560 and length 1536 [ 25.683078] BTRFS info (device bcache0): csum failed ino 123709 off 57344 csum 1528117893 expected csum 2239543273 [ 25.684339] incomplete page read in btrfs with offset 1024 and length 3072 [ 26.622384] incomplete page read in btrfs with offset 1024 and length 3072 [ 26.906718] incomplete page read in btrfs with offset 2560 and length 1536 [ 27.823247] incomplete page read in btrfs with offset 1024 and length 3072 [ 27.823265] btrfs_readpage_end_io_hook: 2 callbacks suppressed [ 27.823271] BTRFS info (device bcache0): csum failed ino 34587 off 16384 csum 1180114025 expected csum 474262911 [ 28.490066] incomplete page read in btrfs with offset 2560 and length 1536 [ 28.490085] BTRFS info (device bcache0): csum failed ino 65817 off 327680 csum 3065880108 expected csum 2663659117 [ 29.413824] incomplete page read in btrfs with offset 1024 and length 3072 [ 41.913857] incomplete page read in btrfs with offset 2560 
and length 1536 [ 55.761753] incomplete page read in btrfs with offset 1024 and length 3072 [ 55.761771] BTRFS info (device bcache0): csum failed ino 72835 off 81920 csum 1511792656 expected csum 3733709121 [ 69.636498] incomplete page read in btrfs with offset 2560 and length 1536 (...) should I be worried? thanks, Fabio Pfeifer 2013/12/18 eb e...@gmx.ch: I've recently setup a system (Kernel 3.12.5-1-ARCH) which is layered as follows: /dev/sdb3 - cache0 (80 GB Intel SSD) /dev/sdc1 - backing device (2 TB WD HDD) sdb3+sdc1 = /dev/bcache0 On /dev/bcache0, there's a btrfs filesystem with 2 subvolumes, mounted as / and /home. What's been bothering me are the following entries in my kernel log: [13811.845540] incomplete page write in btrfs with offset 1536 and length 2560 [13870.326639] incomplete page write in btrfs with offset 3072 and length 1024 The offset/length values are always either 1536/2560 or 3072/1024, they sum up nicely to 4K. There are 607 of those in there as I am writing this, the machine has been up 18 hours and been under no particular I/O strain (it's a desktop). Trying to fix this, I unattached the cache (still using /dev/bcache0, but without /dev/sdb3 attached), causing these errors to disappear. As soon as I re-attached /dev/sdb3 they started again, so I am fairly sure it's an unfavorable interaction between bcache and btrfs. Is this something I should be worried about (they're only emitted with KERN_INFO?) or just an alignment problem? The underlying HDD is using 4K-Sectors, while the block_size of bcache seems to be 512, could that be the issue here? I've also encountered incomplete reads and a few csum errors, but I have not been able to trigger these regularly. I have a feeling that the error is more likely o be on the bcache end (I've mailed to that list as well), however any insight into the matter would be much appreciated. 
Thanks, - eb
Re: btrfs on bcache
Forgot to mention: bcache is in writeback mode 2013/12/19 Fábio Pfeifer fmpfei...@gmail.com: [...]
Re: Unmountable Array After Drive Failure During Device Deletion
I'm using btrfs in data and metadata RAID10 on drives (not on md or any other fanciness.) I was removing a drive (btrfs dev del) and during that operation, a different drive in the array failed. Having not had this happen before, I shut down the machine immediately due to the extremely loud piezo buzzer on the drive controller card. I attempted to do so cleanly, but the buzzer cut through my patience and after 4 minutes I cut the power. Afterwards, I located and removed the failed drive from the system, and then got back to linux. The array no longer mounts (failed to read the system array on sdc), with nearly identical messages when attempted with -o recovery and -o recovery,ro. This may be a stupid question, but you're missing a drive so the filesystem will be degraded, but you didn't mention that in your mount options, so... Did you try mounting with -o degraded (possibly with recovery, etc, also, but just try -o degraded plus any normal options first)? I did not try degraded because I didn't remember that there were two different options for handling broken btrfs volumes. 
mount -o degraded,ro yields: btrfs: device label lake devid 11 transid 4893967 /dev/sda btrfs: allowing degraded mounts btrfs: disk space caching is enabled parent transid verify failed on 87601116364800 wanted 4893969 found 4893913 btrfs read error corrected: ino 1 off 87601116364800 (dev /dev/sdf sector 62986400) parent transid verify failed on 87601116381184 wanted 4893969 found 4893913 btrfs read error corrected: ino 1 off 87601116381184 (dev /dev/sdf sector 62986432) parent transid verify failed on 87601115320320 wanted 4893969 found 4893913 btrfs read error corrected: ino 1 off 87601115320320 (dev /dev/sdf sector 62985896) parent transid verify failed on 87601116368896 wanted 4893969 found 4893913 btrfs read error corrected: ino 1 off 87601116368896 (dev /dev/sdf sector 62986408) parent transid verify failed on 87601116377088 wanted 4893969 found 4893913 btrfs read error corrected: ino 1 off 87601116377088 (dev /dev/sdf sector 62986424) btrfs: bdev (null) errs: wr 344288, rd 230234, flush 0, corrupt 0, gen 0 btrfs: bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 4, gen 0 btrfs: bdev /dev/sdg errs: wr 0, rd 0, flush 0, corrupt 4, gen 0 parent transid verify failed on 87601117097984 wanted 4893969 found 4892460 Failed to read block groups: -5 btrfs: open_ctree failed mount -o degraded,recovery,ro yields: btrfs: device label lake devid 11 transid 4893967 /dev/sda btrfs: allowing degraded mounts btrfs: enabling auto recovery btrfs: disk space caching is enabled parent transid verify failed on 87601116798976 wanted 4893969 found 4893913 btrfs read error corrected: ino 1 off 87601116798976 (dev /dev/sdg sector 113318256) parent transid verify failed on 87601119379456 wanted 4893969 found 4893913 btrfs read error corrected: ino 1 off 87601119379456 (dev /dev/sdg sector 113319456) parent transid verify failed on 87601116774400 wanted 4893969 found 4893913 btrfs read error corrected: ino 1 off 87601116774400 (dev /dev/sdg sector 113318208) parent transid verify 
failed on 87601119391744 wanted 4893969 found 4893913 btrfs read error corrected: ino 1 off 87601119391744 (dev /dev/sdg sector 113319480) parent transid verify failed on 87601116778496 wanted 4893969 found 4893913 btrfs read error corrected: ino 1 off 87601116778496 (dev /dev/sdg sector 113318216) parent transid verify failed on 87601116786688 wanted 4893969 found 4893849 btrfs read error corrected: ino 1 off 87601116786688 (dev /dev/sdg sector 113318232) btrfs: bdev (null) errs: wr 344288, rd 230234, flush 0, corrupt 0, gen 0 btrfs: bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 4, gen 0 btrfs: bdev /dev/sdg errs: wr 0, rd 0, flush 0, corrupt 4, gen 0 parent transid verify failed on 8760515136 wanted 4893968 found 4893913 btrfs read error corrected: ino 1 off 8760515136 (dev /dev/sdg sector 113315616) parent transid verify failed on 8760523328 wanted 4893968 found 4893913 btrfs read error corrected: ino 1 off 8760523328 (dev /dev/sdg sector 113315632) parent transid verify failed on 8760535616 wanted 4893968 found 4893913 btrfs read error corrected: ino 1 off 8760535616 (dev /dev/sdg sector 113315656) parent transid verify failed on 8760556096 wanted 4893968 found 4893913 btrfs read error corrected: ino 1 off 8760556096 (dev /dev/sdg sector 113315696) Failed to read block groups: -5 btrfs: open_ctree failed
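For anyone triaging similar output: the "wanted W found F" pair is the generation number the parent block expected versus what was actually on disk, so W - F tells you how many transactions stale each copy is. A quick sketch over lines copied from this thread:

```python
# Quantify the generation gap in "parent transid verify failed" lines.
# The log lines below are pasted from the mount attempts above.
import re

log = """\
parent transid verify failed on 87601116364800 wanted 4893969 found 4893913
parent transid verify failed on 87601117097984 wanted 4893969 found 4892460
parent transid verify failed on 8760515136 wanted 4893968 found 4893913
"""

gaps = [int(w) - int(f)
        for w, f in re.findall(r"wanted (\d+) found (\d+)", log)]
print(gaps)   # [56, 1509, 55]
```

The small gaps (55-56 generations) are consistent with writes lost around the interrupted balance; the 1509-generation outlier points at a much staler copy.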
Re: Unmountable Array After Drive Failure During Device Deletion
[...] I should also mention that the corrupt 4 errs on /dev/sdm1 and /dev/sdg are there from an earlier btrfs extent corruption bug, and do not exist on the filesystem anymore (a scrub hours before the device deletion completed with 0 errors.)
Re: btrfs on bcache
On Wed, 2013-12-18 at 18:17 +0100, eb wrote: I've recently set up a system (Kernel 3.12.5-1-ARCH) which is layered as follows: /dev/sdb3 - cache0 (80 GB Intel SSD) /dev/sdc1 - backing device (2 TB WD HDD) sdb3+sdc1 = /dev/bcache0 On /dev/bcache0, there's a btrfs filesystem with 2 subvolumes, mounted as / and /home. What's been bothering me are the following entries in my kernel log: [13811.845540] incomplete page write in btrfs with offset 1536 and length 2560 [13870.326639] incomplete page write in btrfs with offset 3072 and length 1024 The offset/length values are always either 1536/2560 or 3072/1024; they sum up nicely to 4K. There are 607 of those in there as I am writing this; the machine has been up 18 hours and been under no particular I/O strain (it's a desktop). Btrfs shouldn't be setting the offset on the bios. Are you able to add a WARN_ON to the message that prints this so we can see the stack trace? Could you please cc the bcache and btrfs lists together? -chris
Re: [PATCH v2] Btrfs: fix tree mod logging
On Thu, Dec 19, 2013 at 7:37 AM, Ahmet Inan ai...@mathematik.uni-freiburg.de wrote: Thanks a lot Filipe! Have been testing this patch now for 5 days and it fixed this annoying problem, present since 3.11.0, on 3x NFS servers here. This is also a candidate for backporting, as it fixes crashes. For the information of others here: your previous patch, Btrfs: return immediately if tree log mod is not necessary, is also needed to make it apply cleanly. Thank you Ahmet for testing it and reporting back :) Ahmet -- Filipe David Manana, Reasonable men adapt themselves to the world. Unreasonable men adapt the world to themselves. That's why all progress depends on unreasonable men.
Re: Unmountable Array After Drive Failure During Device Deletion
On Dec 19, 2013, at 2:26 AM, Chris Kastorff encryp...@gmail.com wrote: btrfs-progs v0.20-rc1-358-g194aa4a-dirty Most of what you're using is in the kernel so this is not urgent but if it gets to needing btrfs check/repair, I'd upgrade to v3.12 progs: https://www.archlinux.org/packages/testing/x86_64/btrfs-progs/ sd 0:2:3:0: [sdd] Unhandled error code sd 0:2:3:0: [sdd] Result: hostbyte=0x04 driverbyte=0x00 sd 0:2:3:0: [sdd] CDB: cdb[0]=0x2a: 2a 00 26 89 5b 00 00 00 80 00 end_request: I/O error, dev sdd, sector 646535936 btrfs_dev_stat_print_on_error: 7791 callbacks suppressed btrfs: bdev /dev/sdd errs: wr 315858, rd 230194, flush 0, corrupt 0, gen 0 sd 0:2:3:0: [sdd] Unhandled error code sd 0:2:3:0: [sdd] Result: hostbyte=0x04 driverbyte=0x00 sd 0:2:3:0: [sdd] CDB: cdb[0]=0x2a: 2a 00 26 89 5b 80 00 00 80 00 end_request: I/O error, dev sdd, sector 646536064 These are hardware errors. And you have missing devices, or at least a message of missing devices. So if a device went bad, and a new one was added without deleting the missing one, then the new device only has new data. Data hasn't been recovered and replicated to the replacement. So it's possible, with a missing device that's not removed and a 2nd device failure, to lose some data. btrfs read error corrected: ino 1 off 87601116364800 (dev /dev/sdf sector 62986400) btrfs read error corrected: ino 1 off 87601116798976 (dev /dev/sdg sector 113318256) I'm not sure what constitutes a btrfs read error; maybe the device it originally requested data from didn't have it where it was expected, but it was able to find it on these devices. If the drive itself has a problem reading a sector and ECC can't correct it, it reports the read error to libata. So kernel messages report this with a line that starts with the word exception, then a line with cmd that shows what command and LBAs were issued to the drive, and then a res line that should contain an error mask with the actual error - bus error, media error.
Very often you don't see these and instead see link reset messages, which means the drive is hanging doing something (probably attempting ECC) but then the Linux SCSI layer hits its 30-second timeout on the (hung) queued command and resets the drive instead of waiting any longer. And that's a problem also because it prevents bad sectors from being fixed by Btrfs. So they just get worse to the point where it can't do anything about the situation. So I think you need to post a full dmesg somewhere rather than snippets. And I'd also like to see the result from smartctl -x for the above three drives, sdd, sdf, and sdg. And we need to know what this missing drive message is about, if you've done a drive replacement, and exactly what commands you used to do that and how long ago. Chris Murphy
Re: Unmountable Array After Drive Failure During Device Deletion
On 12/19/2013 02:21 PM, Chris Murphy wrote: On Dec 19, 2013, at 2:26 AM, Chris Kastorff encryp...@gmail.com wrote: btrfs-progs v0.20-rc1-358-g194aa4a-dirty Most of what you're using is in the kernel so this is not urgent but if it gets to needing btrfs check/repair, I'd upgrade to v3.12 progs: https://www.archlinux.org/packages/testing/x86_64/btrfs-progs/ Adding the testing repository is a bad idea for this machine; turning off the testing repository is extremely error prone. Instead, I am now using the btrfs tools from git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git's master (specifically 8cae184), which reports itself as: deep# ./btrfs version Btrfs v3.12 sd 0:2:3:0: [sdd] Unhandled error code sd 0:2:3:0: [sdd] Result: hostbyte=0x04 driverbyte=0x00 sd 0:2:3:0: [sdd] CDB: cdb[0]=0x2a: 2a 00 26 89 5b 00 00 00 80 00 end_request: I/O error, dev sdd, sector 646535936 btrfs_dev_stat_print_on_error: 7791 callbacks suppressed btrfs: bdev /dev/sdd errs: wr 315858, rd 230194, flush 0, corrupt 0, gen 0 sd 0:2:3:0: [sdd] Unhandled error code sd 0:2:3:0: [sdd] Result: hostbyte=0x04 driverbyte=0x00 sd 0:2:3:0: [sdd] CDB: cdb[0]=0x2a: 2a 00 26 89 5b 80 00 00 80 00 end_request: I/O error, dev sdd, sector 646536064 These are hardware errors. And you have missing devices, or at least a message of missing devices. So if a device went bad, and a new one added without deleting the missing one, then the new device only has new data. Data hasn't been recovered and replicated to the replacement. So it's possible with a missing device that's not removed, and a 2nd device failure, to lose some data. This is not what happened, as I explained earlier; I shall explain again, with more verbosity: - Array is good. All drives are accounted for, btrfs scrub runs cleanly. btrfs fi show shows no missing drives and reasonable allocations. - I start btrfs dev del to remove devid 9. 
It chugs along with no errors, until: - Another drive in the array (NOT THE ONE I RAN DEV DEL ON) fails, and all reads and writes to it fail, causing the SCSI errors above. - I attempt clean shutdown. It takes too long because my drive controller card is buzzing loudly and the neighbors are sensitive to noise, so: - I power down the machine uncleanly. - I remove the failed drive, NOT the one I ran dev del on. - I reboot, attempt to mount with various options, all of which cause the kernel to yell at me and the mount command returns failure. From what I understand, at all points there should be at least two copies of every extent during a dev del when all chunks are allocated RAID10 (and they are, according to btrfs fi df ran before on the mounted fs). Because of this, I expect to be able to use the chunks from the (not successfully removed) devid=9, as I have done many many times before due to other btrfs bugs that needed unclean shutdowns during dev del. Under the assumption devid=9 is good, if slightly out of date on transid (which ALL data says is true), I should be able to completely recover all data, because data that was not modified during the deletion resides on devid=9, and data that was modified should be redundantly (RAID10) stored on the remaining drives, and thus should work given this case of a single drive failure. Is this not the case? Does btrfs not maintain redundancy during device removal? btrfs read error corrected: ino 1 off 87601116364800 (dev /dev/sdf sector 62986400) btrfs read error corrected: ino 1 off 87601116798976 (dev /dev/sdg sector 113318256) I'm not sure what constitutes a btrfs read error, maybe the device it originally requested data from didn't have it where it was expected but was able to find it on these devices. If the drive itself has a problem reading a sector and ECC can't correct it, it reports the read error to libata.
So kernel messages report this with a line that starts with the word exception and then a line with cmd that shows what command and LBAs where issued to the drive, and then a res line that should contain an error mask with the actual error - bus error, media error. Very often you don't see these and instead see link reset messages, which means the drive is hanging doing something (probably attempting ECC) but then the linux SCSI layer hits its 30 second time out on the (hanged) queued command and resets the drive instead of waiting any longer. And that's a problem also because it prevents bad sectors from being fixed by Btrfs. So they just get worse to the point where then it can't do anything about the situation. There was a single drive immediately failing all its writes and reads because that's how the controller card was configured. No ECC failures, no timeouts. I have hit those issues on other arrays, but the drive controller I'm using here correctly and immediately returned errors on requests when the drive failed. I am no stranger to SCSI error messages on both shitty drive interfaces (which behave as you
Re: [PATCH v4 00/18] Replace btrfs_workers with kernel workqueue based btrfs_workqueue
Thanks for reporting. That's interesting, I'll look into it to figure out what's happening. Qu On Thu, 19 Dec 2013 10:27:22 -0500, Josef Bacik wrote: I got a panic with btrfs/012 in the worker stuff. I'm bisecting it down to figure out which patch introduced it but I'm afraid it may just be one of the replace blah with btrfs_workqueue patches and not be super helpful. You may want to run it in a loop or something and see if you can trigger it in the meantime and I'll respond whenever my bisect finishes. Thanks, Josef On 12/17/2013 04:31 AM, Qu Wenruo wrote: Add a new btrfs_workqueue_struct which use kernel workqueue to implement most of the original btrfs_workers, to replace btrfs_workers. With this patchset, redundant workqueue codes are replaced with kernel workqueue infrastructure, which not only reduces the code size but also the effort to maintain it. The result from sysbench shows minor improvement on the following server: CPU: two-way Xeon X5660 RAM: 4G HDD: SAS HDD, 150G total, 100G partition for btrfs test Test result on default mount option: https://docs.google.com/spreadsheet/ccc?key=0AhpkL3ehzX3pdENjajJTWFg5d1BWbExnYWFpMTJxeUEusp=sharing Test result on -o compress mount option: https://docs.google.com/spreadsheet/ccc?key=0AhpkL3ehzX3pdHdTTEJ6OW96SXJFaDR5enB1SzMzc0Eusp=sharing Changelog: v1-v2: - Fix some workqueue flags. v2-v3: - Add the thresholding mechanism to simulate the old behavior - Convert all the btrfs_workers to btrfs_workrqueue_struct. - Fix some potential deadlock when executed in IRQ handler. v3-v4: - Change the ordered workqueue implement to fix the performance drop in 32K multi thread random write. - Change the high priority workqueue implement to get an independent high workqueue without starving problem. - Simplify the btrfs_alloc_workqueue parameters. - Coding style cleanup. - Remove the redundant _struct suffix. Qu Wenruo (18): btrfs: Cleanup the unused struct async_sched. 
btrfs: Added btrfs_workqueue_struct implemented ordered execution based on kernel workqueue btrfs: Add high priority workqueue support for btrfs_workqueue_struct btrfs: Add threshold workqueue based on kernel workqueue btrfs: Replace fs_info->workers with btrfs_workqueue. btrfs: Replace fs_info->delalloc_workers with btrfs_workqueue btrfs: Replace fs_info->submit_workers with btrfs_workqueue. btrfs: Replace fs_info->flush_workers with btrfs_workqueue. btrfs: Replace fs_info->endio_* workqueue with btrfs_workqueue. btrfs: Replace fs_info->rmw_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info->cache_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info->readahead_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info->fixup_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info->delayed_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info->qgroup_rescan_worker workqueue with btrfs_workqueue. btrfs: Replace fs_info->scrub_* workqueue with btrfs_workqueue. btrfs: Cleanup the old btrfs_worker. btrfs: Cleanup the _struct suffix in btrfs_workequeue fs/btrfs/async-thread.c | 821 --- fs/btrfs/async-thread.h | 117 ++- fs/btrfs/ctree.h | 39 ++- fs/btrfs/delayed-inode.c | 6 +- fs/btrfs/disk-io.c | 212 +--- fs/btrfs/extent-tree.c | 4 +- fs/btrfs/inode.c | 38 +-- fs/btrfs/ordered-data.c | 11 +- fs/btrfs/qgroup.c| 15 +- fs/btrfs/raid56.c| 21 +- fs/btrfs/reada.c | 4 +- fs/btrfs/scrub.c | 70 ++-- fs/btrfs/super.c | 36 +-- fs/btrfs/volumes.c | 16 +- 14 files changed, 430 insertions(+), 980 deletions(-) -- - Qu Wenruo Development Dept.I Nanjing Fujitsu Nanda Software Tech. Co., Ltd.(FNST) No. 6 Wenzhu Road, Nanjing, 210012, China TEL: +86+25-86630566-8526 COINS: 7998-8526 FAX: +86+25-83317685 MAIL: quwen...@cn.fujitsu.com
Re: [PATCH] Btrfs-progs: receive: fix the case that we can not find subvolume
Michael Welsh Duggan m...@md5i.com writes: Wang Shilong wangsl.f...@cn.fujitsu.com writes: On 12/18/2013 12:06 PM, Michael Welsh Duggan wrote: Wang Shilong wangsl.f...@cn.fujitsu.com writes: It seems that you use older kernel version but use the latest btrfs-progs, new btrfs-progs use uuid tree to search but this tree did not exist yet. Can you try to upgrade your kernel? What version is necessary? (I am currently on 3.11.10.) 3.12 is ok, btw, can you run for 3.11.10 Looks like I'll be rebooting to a new kernel when I get home tonight. Running with a 3.12 kernel does solve this problem for me. Thank you for your help. -- Michael Welsh Duggan (m...@md5i.com)
Re: [PATCH v4 00/18] Replace btrfs_workers with kernel workqueue based btrfs_workqueue
I'm sorry but I failed to reproduce the problem. Btrfs/012 in xfstests has been run for several hours but nothing happened. Would you please give me some more details about the environment or the panic backtrace? Thanks. Qu On Thu, 19 Dec 2013 10:27:22 -0500, Josef Bacik wrote: I got a panic with btrfs/012 in the worker stuff. I'm bisecting it down to figure out which patch introduced it but I'm afraid it may just be one of the replace blah with btrfs_workqueue patches and not be super helpful. You may want to run it in a loop or something and see if you can trigger it in the meantime and I'll respond whenever my bisect finishes. Thanks, Josef On 12/17/2013 04:31 AM, Qu Wenruo wrote: Add a new btrfs_workqueue_struct which use kernel workqueue to implement most of the original btrfs_workers, to replace btrfs_workers. With this patchset, redundant workqueue codes are replaced with kernel workqueue infrastructure, which not only reduces the code size but also the effort to maintain it. The result from sysbench shows minor improvement on the following server: CPU: two-way Xeon X5660 RAM: 4G HDD: SAS HDD, 150G total, 100G partition for btrfs test Test result on default mount option: https://docs.google.com/spreadsheet/ccc?key=0AhpkL3ehzX3pdENjajJTWFg5d1BWbExnYWFpMTJxeUEusp=sharing Test result on -o compress mount option: https://docs.google.com/spreadsheet/ccc?key=0AhpkL3ehzX3pdHdTTEJ6OW96SXJFaDR5enB1SzMzc0Eusp=sharing Changelog: v1->v2: - Fix some workqueue flags. v2->v3: - Add the thresholding mechanism to simulate the old behavior - Convert all the btrfs_workers to btrfs_workrqueue_struct. - Fix some potential deadlock when executed in IRQ handler. v3->v4: - Change the ordered workqueue implement to fix the performance drop in 32K multi thread random write. - Change the high priority workqueue implement to get an independent high workqueue without starving problem. - Simplify the btrfs_alloc_workqueue parameters. - Coding style cleanup.
- Remove the redundant _struct suffix. Qu Wenruo (18): btrfs: Cleanup the unused struct async_sched. btrfs: Added btrfs_workqueue_struct implemented ordered execution based on kernel workqueue btrfs: Add high priority workqueue support for btrfs_workqueue_struct btrfs: Add threshold workqueue based on kernel workqueue btrfs: Replace fs_info-workers with btrfs_workqueue. btrfs: Replace fs_info-delalloc_workers with btrfs_workqueue btrfs: Replace fs_info-submit_workers with btrfs_workqueue. btrfs: Replace fs_info-flush_workers with btrfs_workqueue. btrfs: Replace fs_info-endio_* workqueue with btrfs_workqueue. btrfs: Replace fs_info-rmw_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info-cache_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info-readahead_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info-fixup_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info-delayed_workers workqueue with btrfs_workqueue. btrfs: Replace fs_info-qgroup_rescan_worker workqueue with btrfs_workqueue. btrfs: Replace fs_info-scrub_* workqueue with btrfs_workqueue. btrfs: Cleanup the old btrfs_worker. btrfs: Cleanup the _struct suffix in btrfs_workequeue fs/btrfs/async-thread.c | 821 --- fs/btrfs/async-thread.h | 117 ++- fs/btrfs/ctree.h | 39 ++- fs/btrfs/delayed-inode.c | 6 +- fs/btrfs/disk-io.c | 212 +--- fs/btrfs/extent-tree.c | 4 +- fs/btrfs/inode.c | 38 +-- fs/btrfs/ordered-data.c | 11 +- fs/btrfs/qgroup.c| 15 +- fs/btrfs/raid56.c| 21 +- fs/btrfs/reada.c | 4 +- fs/btrfs/scrub.c | 70 ++-- fs/btrfs/super.c | 36 +-- fs/btrfs/volumes.c | 16 +- 14 files changed, 430 insertions(+), 980 deletions(-) -- - Qu Wenruo Development Dept.I Nanjing Fujitsu Nanda Software Tech. Co., Ltd.(FNST) No. 
6 Wenzhu Road, Nanjing, 210012, China TEL: +86+25-86630566-8526 COINS: 7998-8526 FAX: +86+25-83317685 MAIL: quwen...@cn.fujitsu.com
Re: Unmountable Array After Drive Failure During Device Deletion
On Dec 19, 2013, at 5:06 PM, Chris Kastorff encryp...@gmail.com wrote: On 12/19/2013 02:21 PM, Chris Murphy wrote: On Dec 19, 2013, at 2:26 AM, Chris Kastorff encryp...@gmail.com wrote: btrfs-progs v0.20-rc1-358-g194aa4a-dirty Most of what you're using is in the kernel so this is not urgent but if it gets to needing btrfs check/repair, I'd upgrade to v3.12 progs: https://www.archlinux.org/packages/testing/x86_64/btrfs-progs/ Adding the testing repository is a bad idea for this machine; turning off the testing repository is extremely error prone. Instead, I am now using the btrfs tools from git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git's master (specifically 8cae184), which reports itself as: deep# ./btrfs version Btrfs v3.12 Good. As I thought about it again, you're using user space tools to add, remove, replace devices also, and that code has changed too, so better to use current. - Array is good. All drives are accounted for, btrfs scrub runs cleanly. btrfs fi show shows no missing drives and reasonable allocations. - I start btrfs dev del to remove devid 9. It chugs along with no errors, until: - Another drive in the array (NOT THE ONE I RAN DEV DEL ON) fails, and all reads and writes to it fail, causing the SCSI errors above. - I attempt clean shutdown. It takes too long because my drive controller card is buzzing loudly and the neighbors are sensitive to noise, so: - I power down the machine uncleanly. - I remove the failed drive, NOT the one I ran dev del on. - I reboot, attempt to mount with various options, all of which cause the kernel to yell at me and the mount command returns failure. devid 9 is device delete in-progress, and while that's occurring devid 15 fails completely. Is that correct? Because previously you reported, in part this: devid 15 size 1.82TB used 1.47TB path /dev/sdd *** Some devices missing And this: sd 0:2:3:0: [sdd] Unhandled error code That's why I was confused.
It looks like dead/missing device is one devid, and then devid 15 /dev/sdd is also having hardware problems - because all of this was posted at the same time. But I take it they're different boots and the /dev/sdd's are actually two different devids. So devid 9 was deleted and then devid 14 failed. Right? Lovely when /dev/sdX changes between boots. From what I understand, at all points there should be at least two copies of every extent during a dev del when all chunks are allocated RAID10 (and they are, according to btrfs fi df ran before on the mounted fs). Because of this, I expect to be able to use the chunks from the (not successfully removed) devid=9, as I have done many many times before due to other btrfs bugs that needed unclean shutdowns during dev del. I haven't looked at the code or read anything this specific on the state of the file system during a device delete. But my expectation is that there are 1-2 chunks available for writes. And 2-3 chunks available for reads. Some writes must be only one copy because a chunk hasn't yet been replicated elsewhere, and presumably the device being deleted is not subject to writes as the transid also implies. Whereas devid 9 is one set of chunks for reading, those chunks have pre-existing copies elsewhere in the file system so that's two copies. And there's a replication in progress of the soon to be removed chunks. So that's up to three copies. Problem is that for sure you've lost some chunks due to the failed/missing device. Normal raid10, it's unambiguous whether we've lost two mirrored sets. With Btrfs that's not clear as chunks are distributed. So it's possible that there are some chunks that don't exist at all for writes, and only 1 for reads. It may be no chunks are in common between devid 9 and the dead one. It may be only a couple of data or metadata chunks are in common. 
Under the assumption devid=9 is good, if a slightly out of date on transid (which ALL data says is true), I should be able to completely recover all data, because data that was not modified during the deletion resides on devid=9, and data that was modified should be redundantly (RAID10) stored on the remaining drives, and thus should work given this case of a single drive failure. Is this not the case? Does btrfs not maintain redundancy during device removal? Good questions. I'm not certain. But the speculation seems reasonable, not accounting for the missing device. That's what makes this different. btrfs read error corrected: ino 1 off 87601116364800 (dev /dev/sdf sector 62986400) btrfs read error corrected: ino 1 off 87601116798976 (dev /dev/sdg sector 113318256) I'm not sure what constitutes a btrfs read error, maybe the device it originally requested data from didn't have it where it was expected but was able to find it on these devices. If the drive itself has a problem reading a sector and ECC can't correct it, it reports the read
[PATCH] fs/btrfs: Integer overflow in btrfs_ioctl_resize()
The local variable 'new_size' comes from userspace. If a large number was passed, there would be an integer overflow in the following line: new_size = old_size + new_size; Signed-off-by: Wenliang Fan fanwle...@gmail.com --- fs/btrfs/ioctl.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c index 21da576..92f7707 100644 --- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -1466,6 +1466,10 @@ static noinline int btrfs_ioctl_resize(struct file *file, } new_size = old_size - new_size; } else if (mod > 0) { + if (new_size > ULLONG_MAX - old_size) { + ret = -EINVAL; + goto out_free; + } new_size = old_size + new_size; } -- 1.8.5.rc1.28.g7061504