On Thu, Jul 4, 2013 at 2:30 AM, Miao Xie mi...@cn.fujitsu.com wrote:
On Wed, 3 Jul 2013 15:17:02 +0100, Filipe David Manana wrote:
On Wed, Jul 3, 2013 at 2:25 PM, Miao Xie mi...@cn.fujitsu.com wrote:
+++ b/disk-io.c
@@ -1270,12 +1270,13 @@ static int close_all_devices(struct btrfs_fs_info
Instead of aborting with a BUG_ON() statement, return a
negated errno code. Also updated mkfs and convert tools
to print a nicer error message when make_btrfs() returns
an error.
Signed-off-by: Filipe David Borba Manana fdman...@gmail.com
---
btrfs-convert.c | 3 ++-
mkfs.c          | 2 +-
1) add missing write checks for mkfs
2) add kstrdup() return value check
3) remove unused code
4) make_btrfs() return error code on write failure
5) check for errors in btrfs_add_block_group()
V2: added patches 4 and 5.
Filipe David Borba Manana (5):
Btrfs-progs: add missing write check for
This function was not checking if the calls to set_extent_bits()
and set_state_private() actually succeeded or not.
Signed-off-by: Filipe David Borba Manana fdman...@gmail.com
---
extent-tree.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/extent-tree.c
Hi All,
I want to resurrect an old problem. Currently stat(2) returns a different
device than the other places where the device is reported (/proc/pid/maps,
/proc/pid/fdinfo/, unix-diag). stat(2) reports a device which is absent
in /proc/pid/mountinfo.
# cat /proc/self/mountinfo | grep mnt
40 32 0:32 /
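The mismatch can be observed by comparing the st_dev that stat(2) reports with the major:minor column of mountinfo (illustrative commands; /mnt stands in for the real mount point, and note the two outputs use different formats, hex vs. decimal major:minor):

```shell
# Device number as seen by stat(2) (st_dev, printed in hex).
stat -c '%D' /mnt

# Device number the kernel shows for the same mount point
# (major:minor, third field of mountinfo).
awk '$5 == "/mnt" {print $3}' /proc/self/mountinfo
```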
Hi!
I'm using the following command to create a btrfs filesystem image with
predefined content:
mkfs.btrfs -b 300M -r /tmp/source-root-dir rootfs.btrfs
This command creates the image and puts everything under the root subvolume (/).
But I want to put everything under a /my_rootfs subvolume.
How can I accomplish
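One possible approach, sketched under the assumption that root privileges and a scratch mount point are available (mkfs.btrfs of this vintage cannot populate a non-root subvolume directly, so the content is copied in after mounting):

```shell
# Create an empty 300M image and make a filesystem on it.
truncate -s 300M rootfs.btrfs
mkfs.btrfs rootfs.btrfs

# Mount it, create the subvolume, and copy the content in.
mount -o loop rootfs.btrfs /mnt
btrfs subvolume create /mnt/my_rootfs
cp -a /tmp/source-root-dir/. /mnt/my_rootfs/

# Optionally make /my_rootfs the default subvolume for future mounts.
btrfs subvolume set-default "$(btrfs subvolume list /mnt |
        awk '/my_rootfs$/ {print $2}')" /mnt
umount /mnt
```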
The filesystem was corrupted after we did a device replace.
Steps to reproduce:
# mkfs.btrfs -f -m single -d raid10 device0..device3
# mount device0 mnt
# btrfs replace start -rfB 1 device4 mnt
# umount mnt
# btrfsck device4
The reason is that we changed the write offset by mistake. When we
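For reference, the reproducer above can be run without four physical disks by backing each device with a loop device (a hedged sketch; assumes root, free loop devices, and space for sparse files in /tmp):

```shell
# Set up five sparse backing files and loop devices.
for i in 0 1 2 3 4; do
        truncate -s 2G "/tmp/disk$i"
        losetup "/dev/loop$i" "/tmp/disk$i"
done

mkfs.btrfs -f -m single -d raid10 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
mkdir -p /mnt/test
mount /dev/loop0 /mnt/test

# devid 1 is the first device given to mkfs; -f overwrites the target,
# -B waits for the replace to finish, -r prefers other mirrors when reading.
btrfs replace start -rfB 1 /dev/loop4 /mnt/test

umount /mnt/test
btrfsck /dev/loop4
```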
On Thu, 4 Jul 2013 18:37:21 +0800, Miao Xie wrote:
The filesystem was corrupted after we did a device replace.
Steps to reproduce:
# mkfs.btrfs -f -m single -d raid10 device0..device3
# mount device0 mnt
# btrfs replace start -rfB 1 device4 mnt
# umount mnt
# btrfsck device4
The
Miao Xie reported the following issue:
The filesystem was corrupted after we did a device replace.
Steps to reproduce:
# mkfs.btrfs -f -m single -d raid10 device0..device3
# mount device0 mnt
# btrfs replace start -rfB 1 device4 mnt
# umount mnt
# btrfsck device4
The reason for the issue
Hi David,
I believe this patch has the following problem:
On Tue, Mar 12, 2013 at 5:13 PM, David Sterba dste...@suse.cz wrote:
Each time, pick one dead root from the list and let the caller know
whether it needs to continue. This should improve responsiveness during
umount and balance which at
If we did a tree search with the goal to find a metadata item
but the search failed with return value 1, we attempt to see
if in the same leaf there's a corresponding extent item, and if
there's one, just use it instead of doing another tree search
for this extent item. The check in the leaf was
On Thu, Jul 4, 2013 at 4:48 PM, Filipe David Borba Manana
fdman...@gmail.com wrote:
If we did a tree search with the goal to find a metadata item
but the search failed with return value 1, we attempt to see
if in the same leaf there's a corresponding extent item, and if
there's one, just use
On Thu, Jul 04, 2013 at 06:29:23PM +0300, Alex Lyakas wrote:
@@ -7363,6 +7365,12 @@ int btrfs_drop_snapshot(struct btrfs_root *root,
wc->reada_count = BTRFS_NODEPTRS_PER_BLOCK(root);
while (1) {
+ if (!for_reloc && btrfs_fs_closing(root->fs_info)) {
+
Niels de Carpentier niels at decarpentier.com writes:
I'd like to see how they do that. The fact is you are still going to get
random seeks since you have to binary search the blocks in an entire row,
since there is no way you can read a several-thousand-block row into
memory to search
Hi David,
On Thu, Jul 4, 2013 at 8:03 PM, David Sterba dste...@suse.cz wrote:
On Thu, Jul 04, 2013 at 06:29:23PM +0300, Alex Lyakas wrote:
@@ -7363,6 +7365,12 @@ int btrfs_drop_snapshot(struct btrfs_root *root,
wc->reada_count = BTRFS_NODEPTRS_PER_BLOCK(root);
while (1) {
Some code still uses cpu_to_lexx instead of the
BTRFS_SETGET_STACK_FUNCS helpers declared in ctree.h.
Also added BTRFS_SETGET_STACK_FUNCS for btrfs_header, btrfs_timespec
and other structures.
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
Reviewed-by: Miao Xie miao...@cn.fujitsu.com
---
On Thu, Jul 04, 2013 at 10:52:39PM +0300, Alex Lyakas wrote:
Hi David,
On Thu, Jul 4, 2013 at 8:03 PM, David Sterba dste...@suse.cz wrote:
On Thu, Jul 04, 2013 at 06:29:23PM +0300, Alex Lyakas wrote:
@@ -7363,6 +7365,12 @@ int btrfs_drop_snapshot(struct btrfs_root *root,
Hi,
I encountered the following BUG_ON. (I think -28 (ENOSPC) was maybe
returned from btrfs_orphan_reserve_metadata.)
When this happened, I was running my stress test. But I cannot reproduce
this problem yet, though the test was executed again several times.
- Tsutomu
[ 4823.473913] btrfs: found
On Tuesday, July 02, 2013 09:19:07 PM Shridhar Daithankar wrote:
On Tuesday, July 02, 2013 01:00:29 PM Duncan wrote:
But I'd still expect there to be some better performance steady state
after a few mounts gets the basic filesystem defragged. Tho if the
filesystem is heavily fragmented[2],