Chris Murphy posted on Thu, 11 Sep 2014 20:10:26 -0600 as excerpted:
Sure. But what's the next step? Given that 260+ snapshots might mean well
more than 350GB of data, depending on how deduplicated the fs is, it would
still probably be faster to rsync this to a pile of drives in
linear/concat+XFS
On Sep 11, 2014, at 11:19 PM, Russell Coker russ...@coker.com.au wrote:
It would be nice if a filesystem mounted ro counted as a read-only snapshot
for btrfs send.
When a filesystem is so messed up that it can't be mounted rw, it should be
regarded as ro for all operations.
Yes, it's come up before,
On Fri, 2014-09-12 at 14:56 +0900, Satoru Takeuchi wrote:
Hi Gui,
(2014/09/12 10:15), Gui Hecheng wrote:
For btrfs fi show, -d|--all-devices and -m|--mounted override each other,
so if both are specified, let the user know that they should not be used
at the same time.
Signed-off-by:
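A minimal, self-contained sketch of that kind of mutual-exclusion check (the option names come from the message above; everything else is illustrative and not the actual btrfs-progs code):

#include <stdio.h>
#include <stdlib.h>
#include <getopt.h>

int main(int argc, char **argv)
{
	int all_devices = 0, mounted = 0;
	static const struct option opts[] = {
		{ "all-devices", no_argument, NULL, 'd' },
		{ "mounted",     no_argument, NULL, 'm' },
		{ NULL, 0, NULL, 0 }
	};
	int c;

	while ((c = getopt_long(argc, argv, "dm", opts, NULL)) != -1) {
		switch (c) {
		case 'd':
			all_devices = 1;
			break;
		case 'm':
			mounted = 1;
			break;
		default:
			exit(1);
		}
	}

	/* The two options select conflicting device-scan modes, so refuse
	 * to continue when both are given instead of silently letting one
	 * overwrite the other. */
	if (all_devices && mounted) {
		fprintf(stderr, "-d and -m cannot be used at the same time\n");
		exit(1);
	}

	printf("scan mode: %s\n",
	       all_devices ? "all-devices" : mounted ? "mounted" : "default");
	return 0;
}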
Hi,
I am testing BTRFS in a simple RAID1 environment: default mount options;
data and metadata are mirrored between sda2 and sdb2. I have a few questions
and a potential bug report. I don't normally have console access to the server,
so when the server boots with only 1 of 2 disks, the mount will
This patchset implements the data repair function for direct read; it
is implemented like buffered read:
1. When we find the data is not right, we try to read the data from the other
mirror.
2. When the io on the mirror ends, we will insert the endio work into the
dedicated btrfs workqueue,
Direct IO splits the original bio into several sub-bios because of the limit of
the raid stripe, and the filesystem will wait for all sub-bios and then run the
final end io process.
But it was very hard to implement the data repair when a dio read failure happens,
because at the final end io function, we
The original code of repair_io_failure was only used for buffered read,
because it got some filesystem data from the page structure; that is safe for
a page in the page cache. But when we do a direct read, the pages in the bio
are not in the page cache, that is, there is no filesystem data in the page
Signed-off-by: Miao Xie mi...@cn.fujitsu.com
---
Changelog v1 - v4:
- None
---
fs/btrfs/extent_io.c | 26 ++
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index f8dda46..154cb8e 100644
--- a/fs/btrfs/extent_io.c
Signed-off-by: Miao Xie mi...@cn.fujitsu.com
---
Changelog v1 - v4:
- None
---
fs/btrfs/inode.c | 102 +--
1 file changed, 47 insertions(+), 55 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index af304e1..e8139c6 100644
---
We forgot to free the failure record and the bio when submitting the re-read bio
failed; fix it.
Signed-off-by: Miao Xie mi...@cn.fujitsu.com
---
Changelog v1 - v4:
- None
---
fs/btrfs/extent_io.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index
This patch implements the data repair function when direct read fails.
The detail of the implementation is:
- When we find the data is not right, we try to read the data from the other
mirror.
- When the io on the mirror ends, we will insert the endio work into the
dedicated btrfs workqueue, not
The current code would load checksum data several times when we split
a whole direct read io because of the limit of the raid stripe; it would
make us search the csum tree several times. In fact, this just wasted time
and made contention on the csum tree root more serious. This patch
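A toy userspace model of the optimization being described here (block counts and names are made up for illustration; this is not the btrfs code): the checksums for the whole direct-read range are loaded once, and each stripe-limited sub-request simply indexes into that array instead of searching the csum tree again.

#include <stdio.h>
#include <stdint.h>

#define NBLOCKS       8   /* blocks covered by the whole direct read */
#define STRIPE_BLOCKS 2   /* pretend the raid stripe limit makes 2-block sub-requests */

static int csum_lookups;  /* how often the pretend "csum tree" is searched */

/* Stand-in for the expensive csum tree search: one call covers a whole range. */
static void load_csums(uint32_t *dst, int first_block, int nblocks)
{
	csum_lookups++;
	for (int i = 0; i < nblocks; i++)
		dst[first_block + i] = 0xC0DE0000u + (uint32_t)(first_block + i);
}

int main(void)
{
	uint32_t csums[NBLOCKS];

	/* One lookup for the whole read... */
	load_csums(csums, 0, NBLOCKS);

	/* ...then every sub-request just uses its slice of the array. */
	for (int first = 0; first < NBLOCKS; first += STRIPE_BLOCKS)
		for (int i = first; i < first + STRIPE_BLOCKS; i++)
			printf("sub-request checks block %d against csum %#x\n",
			       i, (unsigned)csums[i]);

	printf("csum tree searched %d time(s) instead of %d\n",
	       csum_lookups, NBLOCKS / STRIPE_BLOCKS);
	return 0;
}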
After the data is written successfully, we should clean up the read failure record
in that range, because:
- If we set data COW for the file, the range that the failure record pointed to is
mapped to a new place, so it is invalid.
- If we set no data COW for the file, and if there is no error
The data repair function for direct read will be implemented later, and some code
in bio_readpage_error will be reused, so split bio_readpage_error into
several functions that can be reused by the direct read repair later.
Signed-off-by: Miao Xie mi...@cn.fujitsu.com
---
Changelog v1 - v4:
- None
---
We need the real mirror number for RAID0/5/6 when reading data; otherwise, if a
read error happens, we would pass 0 as the number of the mirror on which the io
error happened. That is wrong and would cause the filesystem to read the data from
the corrupted mirror again.
Signed-off-by: Miao Xie mi...@cn.fujitsu.com
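A toy userspace model of why the real mirror number matters (illustrative only, not the btrfs code): the repair path must know which mirror already produced the bad data so the retry can skip it; if 0 were passed instead, the skip check would never match and the corrupted copy could be read again.

#include <stdio.h>

#define NUM_MIRRORS 2
#define BAD_MIRROR  1   /* pretend mirror 1 holds the corrupted copy */

/* Pretend read: 0 on good data, -1 on checksum mismatch. */
static int read_from_mirror(int mirror)
{
	return mirror == BAD_MIRROR ? -1 : 0;
}

int main(void)
{
	/* The failed read must report the real mirror number; if this were
	 * wrongly 0, the "skip the failed mirror" check below would never
	 * trigger and the corrupted mirror could be retried first. */
	int failed_mirror = BAD_MIRROR;

	for (int m = 1; m <= NUM_MIRRORS; m++) {
		if (m == failed_mirror)
			continue;
		if (read_from_mirror(m) == 0) {
			printf("repair read succeeded on mirror %d\n", m);
			return 0;
		}
	}
	printf("all mirrors failed\n");
	return 1;
}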
On Fri, Sep 12, 2014 at 01:57:37AM -0700, shane-ker...@csy.ca wrote:
Hi,
I am testing BTRFS in a simple RAID1 environment: default mount
options; data and metadata are mirrored between sda2 and sdb2. I
have a few questions and a potential bug report. I don't normally
have console access
shane-kernel posted on Fri, 12 Sep 2014 01:57:37 -0700 as excerpted:
[Last question first as it's easy to answer...]
Finally, for those using this sort of setup in production, is running
btrfs on top of mdraid the way to go at this point?
While the latest kernel and btrfs-tools have removed
Dear List,
I tried to remove a device from a 12-disk RAID10 array, but it failed with no
space left and the system crashed. After a reset I could only mount the array
in degraded mode because the device was marked as missing. I've tried a replace
command, but it said that it does not support
Hello Gui,
On Thursday, 4 September 2014, 11:50:14, Marc Dietrich wrote:
On Thursday, 4 September 2014, 11:00:55, Gui Hecheng wrote:
Hi Zooko, Marc,
Firstly, thanks for your backtrace info, Marc.
Sorry for the late reply; I've been offline these days.
For the restore problem, I'm
On 09/12/2014 06:43 AM, Miao Xie wrote:
This patchset implements the data repair function for direct read; it
is implemented like buffered read:
1. When we find the data is not right, we try to read the data from the other
mirror.
2. When the io on the mirror ends, we will insert the
Hi Linus,
My for-linus branch has some fixes for the next rc:
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git for-linus
Filipe is doing a careful pass through fsync problems, and these are the
fixes so far. I'll have one more for rc6 that we're still testing.
My big
One problem that has plagued us is that a user will use up all of his space with
data, remove a bunch of that data, and then try to create a bunch of small files
and run out of space. This happens because all the chunks were allocated for
data since the metadata requirements were so low. But now
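A toy simulation of the pattern described above (numbers and names are made up; this is not btrfs code): once every chunk of raw space has been handed out for data, deleting file contents frees space inside those data chunks but returns nothing to the unallocated pool, so a later metadata chunk allocation still fails.

#include <stdio.h>

#define TOTAL_CHUNKS 10          /* raw device space, counted in chunks */

int main(void)
{
	int meta_chunks = 1;                     /* small metadata footprint */
	int data_chunks = 0;
	int unallocated = TOTAL_CHUNKS - meta_chunks;

	/* Phase 1: the user fills the fs with data, so every newly
	 * allocated chunk becomes a data chunk. */
	while (unallocated > 0) {
		data_chunks++;
		unallocated--;
	}
	printf("after filling: data=%d meta=%d unallocated=%d\n",
	       data_chunks, meta_chunks, unallocated);

	/* Phase 2: the user deletes most of that data. The data chunks are
	 * now mostly empty, but they stay allocated as data chunks, so the
	 * unallocated pool is still empty. */

	/* Phase 3: creating lots of small files needs a new metadata chunk,
	 * which can only come from unallocated space. */
	if (unallocated == 0)
		printf("cannot allocate a metadata chunk: ENOSPC even though "
		       "the data chunks have plenty of free space\n");
	return 0;
}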
On 09/12/2014 03:18 PM, Josef Bacik wrote:
One problem that has plagued us is that a user will use up all of his space with
data, remove a bunch of that data, and then try to create a bunch of small files
and run out of space. This happens because all the chunks were allocated for data
Hi,
On standard Ubuntu 14.04 with an encrypted (cryptsetup) /home as a btrfs
subvolume we have the following results:
3.17-rc2: OK.
3.17-rc3 and 3.17-rc4: /home fails to mount on boot. If one tries mount
-a then the system says that the partition is already mounted according
to mtab.
Summary: When a btrfs subvolume is mounted with -o subvol, and a nested ro
subvol/snapshot is created, btrfs send returns with an error. If the top level
(id 5) is mounted instead, the send command succeeds.
3.17.0-0.rc4.git0.1.fc22.i686
Btrfs v3.16
This may also be happening on x86_64, and
Hi Xavier,
Thanks for the report.
I got this reproduced: it's a very corner case. It depends on the
device path given in the subsequent subvol mounts; the fix appears
to be outside of this patch at this moment, and I am digging into
whether we need to normalize the device path before using it