Re: how long should btrfs device delete missing ... take?

2014-09-12 Thread Duncan
Chris Murphy posted on Thu, 11 Sep 2014 20:10:26 -0600 as excerpted: Sure. But what's the next step? Given that 260+ snapshots might mean well over 350GB of data, depending on how deduplicated the fs is, it would still probably be faster to rsync this to a pile of drives in linear/concat+XFS

Re: how long should btrfs device delete missing ... take?

2014-09-12 Thread Chris Murphy
On Sep 11, 2014, at 11:19 PM, Russell Coker russ...@coker.com.au wrote: It would be nice if a file system mounted ro counted as an ro snapshot for btrfs send. When a file system is so messed up that it can't be mounted rw, it should be regarded as ro for all operations. Yes, it's come up before,

Re: [PATCH v2] btrfs-progs: deal with conflict options for btrfs fi show

2014-09-12 Thread Gui Hecheng
On Fri, 2014-09-12 at 14:56 +0900, Satoru Takeuchi wrote: Hi Gui, (2014/09/12 10:15), Gui Hecheng wrote: For btrfs fi show, -d|--all-devices and -m|--mounted will overwrite each other, so if both are specified, let the user know that they should not be used at the same time. Signed-off-by:
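
The patch boils down to rejecting a conflicting pair of command-line switches. As a rough illustration only (the option handling below is a hypothetical user-space sketch, not the actual btrfs-progs source), mutually exclusive flags can be caught right after parsing:

/* Hypothetical sketch of mutually exclusive options; option names mirror
 * `btrfs fi show`, but this is not the real btrfs-progs parsing code. */
#include <stdio.h>
#include <stdlib.h>
#include <getopt.h>

int main(int argc, char **argv)
{
	int all_devices = 0, mounted = 0, c;
	static const struct option opts[] = {
		{ "all-devices", no_argument, NULL, 'd' },
		{ "mounted",     no_argument, NULL, 'm' },
		{ NULL, 0, NULL, 0 }
	};

	while ((c = getopt_long(argc, argv, "dm", opts, NULL)) != -1) {
		switch (c) {
		case 'd': all_devices = 1; break;
		case 'm': mounted = 1; break;
		default: exit(EXIT_FAILURE);
		}
	}

	/* The two scan modes overwrite each other, so refuse the combination. */
	if (all_devices && mounted) {
		fprintf(stderr, "-d and -m can not be used at the same time\n");
		exit(EXIT_FAILURE);
	}

	printf("scan mode: %s\n",
	       mounted ? "mounted" : all_devices ? "all devices" : "default");
	return 0;
}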

RAID1 failure and recovery

2014-09-12 Thread shane-kernel
Hi, I am testing BTRFS in a simple RAID1 environment. Default mount options are used, and data and metadata are mirrored between sda2 and sdb2. I have a few questions and a potential bug report. I don't normally have console access to the server, so when the server boots with 1 of 2 disks, the mount will

[PATCH v4 00/11] Implement the data repair function for direct read

2014-09-12 Thread Miao Xie
This patchset implements the data repair function for direct read; it is implemented like buffered read: 1. When we find the data is not right, we try to read the data from the other mirror. 2. When the io on the mirror ends, we insert the endio work into the dedicated btrfs workqueue,
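
The flow in steps 1 and 2 can be pictured as a retry loop over the remaining mirrors. Below is a minimal user-space model of that idea, assuming a fixed two-copy layout; the read_mirror() and data_is_valid() helpers are invented stand-ins, not the kernel patchset's code:

/* Simplified model of "try the other mirror": skip the copy that already
 * failed, re-read from another copy, and verify it before returning it. */
#include <stdio.h>
#include <string.h>

#define NUM_MIRRORS 2

/* Pretend mirror 1 is corrupted and mirror 2 holds good data. */
static const char *mirrors[NUM_MIRRORS] = { "bad-data", "good-data" };

static int read_mirror(int mirror, char *buf, size_t len)
{
	snprintf(buf, len, "%s", mirrors[mirror - 1]);
	return 0;
}

static int data_is_valid(const char *buf)
{
	/* Stand-in for the checksum verification against the csum tree. */
	return strcmp(buf, "good-data") == 0;
}

int main(void)
{
	char buf[32];
	int failed_mirror = 1;

	for (int mirror = 1; mirror <= NUM_MIRRORS; mirror++) {
		if (mirror == failed_mirror)
			continue;	/* skip the copy that already failed */
		read_mirror(mirror, buf, sizeof(buf));
		if (data_is_valid(buf)) {
			printf("repaired from mirror %d: %s\n", mirror, buf);
			return 0;
		}
	}
	fprintf(stderr, "all mirrors bad, returning an error to the caller\n");
	return 1;
}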

[PATCH v4 03/11] Btrfs: do file data check by sub-bio's self

2014-09-12 Thread Miao Xie
Direct IO splits the original bio into several sub-bios because of the limit of the raid stripe, and the filesystem waits for all sub-bios and then runs the final end io process. But it was very hard to implement the data repair when a dio read failure happened, because at the final end io function, we

[PATCH v4 07/11] Btrfs: modify repair_io_failure and make it suit direct io

2014-09-12 Thread Miao Xie
The original code of repair_io_failure was only used for buffered read; because it got some filesystem data from the page structure, which is safe for a page in the page cache. But when we do a direct read, the pages in the bio are not in the page cache, that is, there is no filesystem data in the page

[PATCH v4 05/11] Btrfs: Cleanup unused variant and argument of IO failure handlers

2014-09-12 Thread Miao Xie
Signed-off-by: Miao Xie mi...@cn.fujitsu.com --- Changelog v1 - v4: - None --- fs/btrfs/extent_io.c | 26 ++ 1 file changed, 10 insertions(+), 16 deletions(-) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index f8dda46..154cb8e 100644 --- a/fs/btrfs/extent_io.c

[PATCH v4 02/11] Btrfs: cleanup similar code of the buffered data data check and dio read data check

2014-09-12 Thread Miao Xie
Signed-off-by: Miao Xie mi...@cn.fujitsu.com --- Changelog v1 - v4: - None --- fs/btrfs/inode.c | 102 +-- 1 file changed, 47 insertions(+), 55 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index af304e1..e8139c6 100644 ---

[PATCH v4 04/11] Btrfs: fix missing error handler if submitting re-read bio fails

2014-09-12 Thread Miao Xie
We forgot to free the failure record and the bio after submitting the re-read bio failed; fix it. Signed-off-by: Miao Xie mi...@cn.fujitsu.com --- Changelog v1 - v4: - None --- fs/btrfs/extent_io.c | 5 + 1 file changed, 5 insertions(+) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index

[PATCH v4 10/11] Btrfs: implement repair function when direct read fails

2014-09-12 Thread Miao Xie
This patch implements the data repair function for when direct read fails. The details of the implementation are: - When we find the data is not right, we try to read the data from the other mirror. - When the io on the mirror ends, we insert the endio work into the dedicated btrfs workqueue, not

[PATCH v4 01/11] Btrfs: load checksum data once when submitting a direct read io

2014-09-12 Thread Miao Xie
The current code would load the checksum data several times when we split a whole direct read io because of the limit of the raid stripe; it made us search the csum tree several times. In fact, it just wasted time and made contention on the csum tree root more serious. This patch
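
The gain is simply doing one lookup for the whole read and then indexing into the result per split. A rough user-space sketch of that shape, with a csum_store array standing in for the csum tree (all names here are hypothetical, not the kernel's):

/* Load all checksums for the read once, then each split (sub-io) indexes
 * into the preloaded array instead of searching the csum store again. */
#include <stdio.h>
#include <stdint.h>

#define TOTAL_BLOCKS 8

static uint32_t csum_store[TOTAL_BLOCKS];	/* stands in for the csum tree */

static void load_all_csums(uint32_t *dst, int nr)
{
	for (int i = 0; i < nr; i++)
		dst[i] = csum_store[i];	/* one "tree search" for the whole io */
}

int main(void)
{
	uint32_t csums[TOTAL_BLOCKS];
	load_all_csums(csums, TOTAL_BLOCKS);

	/* Each split just offsets into the preloaded array. */
	int split_start = 4, split_blocks = 4;
	for (int i = 0; i < split_blocks; i++)
		printf("block %d expects csum 0x%08x\n",
		       split_start + i, csums[split_start + i]);
	return 0;
}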

[PATCH v4 11/11] Btrfs: cleanup the read failure record after write or when the inode is freeing

2014-09-12 Thread Miao Xie
After the data is written successfully, we should clean up the read failure record in that range because: - If we set data COW for the file, the range that the failure record pointed to is mapped to a new place, so it is invalid. - If we set no data COW for the file, and if there is no error
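
In other words, once a range has been rewritten, any failure record that covers it is stale and can be dropped. A toy user-space model of that cleanup, with an invented failure_record list in place of the kernel's io failure tracking:

/* Walk a list of read-failure records and free any that fall entirely
 * inside the range that was just written. Structures are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

struct failure_record {
	unsigned long start, len;
	struct failure_record *next;
};

static struct failure_record *records;

static void clean_failure_records(unsigned long start, unsigned long len)
{
	struct failure_record **p = &records;

	while (*p) {
		struct failure_record *rec = *p;
		if (rec->start >= start && rec->start + rec->len <= start + len) {
			*p = rec->next;	/* unlink and free: the range was rewritten */
			free(rec);
		} else {
			p = &rec->next;
		}
	}
}

int main(void)
{
	struct failure_record *rec = malloc(sizeof(*rec));
	rec->start = 8192; rec->len = 4096; rec->next = NULL;
	records = rec;

	clean_failure_records(8192, 4096);	/* data written successfully here */
	printf("records left: %s\n", records ? "some" : "none");
	return 0;
}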

[PATCH v4 06/11] Btrfs: split bio_readpage_error into several functions

2014-09-12 Thread Miao Xie
The data repair function of direct read will be implemented later, and some code in bio_readpage_error will be reused, so split bio_readpage_error into several functions that can be reused by the direct read repair later. Signed-off-by: Miao Xie mi...@cn.fujitsu.com --- Changelog v1 - v4: - None ---

[PATCH v4 09/11] Btrfs: Set real mirror number for read operation on RAID0/5/6

2014-09-12 Thread Miao Xie
We need the real mirror number for RAID0/5/6 when reading data, otherwise if a read error happens we would pass 0 as the number of the mirror on which the io error happened. That is wrong and would cause the filesystem to read the data from the corrupted mirror again. Signed-off-by: Miao Xie mi...@cn.fujitsu.com
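
The point is that a failed-mirror number of 0 gives the retry path nothing to exclude, so it may pick the corrupted copy again. A tiny illustration of that, with a purely made-up pick_retry_mirror() helper:

/* If the reported failed mirror is 0 ("unknown"), the retry cannot avoid the
 * bad copy; with the real number it picks a different copy. Illustrative only. */
#include <stdio.h>

#define NUM_COPIES 2

static int pick_retry_mirror(int failed_mirror)
{
	for (int m = 1; m <= NUM_COPIES; m++)
		if (m != failed_mirror)
			return m;	/* first copy that is not the known-bad one */
	return 1;
}

int main(void)
{
	/* failed=0: mirror 1 (the bad one) can be re-read by mistake. */
	printf("failed=0 -> retry mirror %d\n", pick_retry_mirror(0));
	/* failed=1: the retry goes to the other copy. */
	printf("failed=1 -> retry mirror %d\n", pick_retry_mirror(1));
	return 0;
}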

Re: RAID1 failure and recovery

2014-09-12 Thread Hugo Mills
On Fri, Sep 12, 2014 at 01:57:37AM -0700, shane-ker...@csy.ca wrote: Hi, I am testing BTRFS in a simple RAID1 environment. Default mount options are used, and data and metadata are mirrored between sda2 and sdb2. I have a few questions and a potential bug report. I don't normally have console access

Re: RAID1 failure and recovery

2014-09-12 Thread Duncan
shane-kernel posted on Fri, 12 Sep 2014 01:57:37 -0700 as excerpted: [Last question first as it's easy to answer...] Finally for those using this sort of setup in production, is running btrfs on top of mdraid the way to go at this point? While the latest kernel and btrfs-tools have removed

breathe life into degraded raid10 (no space left on specific device)

2014-09-12 Thread Mate Gabri
Dear List, I tried to remove a device from a 12-disk RAID10 array, but it failed with a "no space left" error and the system crashed. After a reset I could only mount the array in degraded mode because the device was marked as missing. I've tried a replace command, but it said that it does not support

Re: fs corruption report

2014-09-12 Thread Marc Dietrich
Hello Gui, On Thursday, 4 September 2014 at 11:50:14, Marc Dietrich wrote: On Thursday, 4 September 2014 at 11:00:55, Gui Hecheng wrote: Hi Zooko, Marc, Firstly, thanks for your backtrace info, Marc. Sorry to reply late, since I'm offline these days. For the restore problem, I'm

Re: [PATCH v4 00/11] Implement the data repair function for direct read

2014-09-12 Thread Chris Mason
On 09/12/2014 06:43 AM, Miao Xie wrote: This patchset implements the data repair function for direct read; it is implemented like buffered read: 1. When we find the data is not right, we try to read the data from the other mirror. 2. When the io on the mirror ends, we insert the

[GIT PULL] Btrfs for rc5

2014-09-12 Thread Chris Mason
Hi Linus, My for-linus branch has some fixes for the next rc: git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git for-linus Filipe is doing a careful pass through fsync problems, and these are the fixes so far. I'll have one more for rc6 that we're still testing. My big

[PATCH] Btrfs: remove empty block groups automatically

2014-09-12 Thread Josef Bacik
One problem that has plagued us is that a user will use up all of his space with data, remove a bunch of that data, and then try to create a bunch of small files and run out of space. This happens because all the chunks were allocated for data since the metadata requirements were so low. But now
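
The proposed remedy is to notice block groups that have become completely empty and hand their space back to the unallocated pool, so later metadata chunks can be created there. A simplified user-space model of that scan, with invented structures rather than the kernel's block group cache:

/* Scan block groups; any group with zero used bytes is "removed" and its
 * length returned to the unallocated pool. Structures are illustrative only. */
#include <stdio.h>

struct block_group {
	const char *type;	/* "data" or "metadata" */
	unsigned long length;	/* chunk size */
	unsigned long used;	/* bytes currently referenced */
};

int main(void)
{
	struct block_group groups[] = {
		{ "data", 1024UL * 1024 * 1024, 0 },	/* emptied by deletions */
		{ "data", 1024UL * 1024 * 1024, 400UL * 1024 },
		{ "metadata", 256UL * 1024 * 1024, 200UL * 1024 },
	};
	unsigned long unallocated = 0;

	for (unsigned i = 0; i < sizeof(groups) / sizeof(groups[0]); i++) {
		if (groups[i].used == 0) {
			unallocated += groups[i].length;	/* reclaim the empty chunk */
			groups[i].length = 0;
			printf("reclaimed an empty %s block group\n", groups[i].type);
		}
	}
	printf("unallocated space available again: %lu bytes\n", unallocated);
	return 0;
}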

Re: [PATCH] Btrfs: remove empty block groups automatically

2014-09-12 Thread Chris Mason
On 09/12/2014 03:18 PM, Josef Bacik wrote: One problem that has plagued us is that a user will use up all of his space with data, remove a bunch of that data, and then try to create a bunch of small files and run out of space. This happens because all the chunks were allocated for data

Re: Btrfs: device_list_add() should not update list when mounted breaks subvol mount

2014-09-12 Thread xavier.gn...@gmail.com
Hi, On standard Ubuntu 14.04 with an encrypted (cryptsetup) /home as a btrfs subvolume we have the following results: 3.17-rc2: OK. 3.17-rc3 and 3.17-rc4: /home fails to mount on boot. If one tries mount -a then the system says that the partition is already mounted according to mtab.

[bug] subvol doesn't belong to btrfs mount point

2014-09-12 Thread Chris Murphy
Summary: When a btrfs subvolume is mounted with -o subvol, and a nested ro subvol/snapshot is created, btrfs send returns with an error. If the top level (id 5) is mounted instead, the send command succeeds. 3.17.0-0.rc4.git0.1.fc22.i686 Btrfs v3.16 This may also be happening on x86_64, and

Re: Btrfs: device_list_add() should not update list when mounted breaks subvol mount

2014-09-12 Thread Anand Jain
Hi Xavier, Thanks for the report. I got this reproduced: it's a very corner case; it depends on the device path given in the subsequent subvol mounts. The fix appears to be outside of this patch at this moment, and I am digging into whether we need to normalize the device path before using it