On Sat, Sep 21, 2013 at 1:20 PM, Ahmet Inan
ai...@mathematik.uni-freiburg.de wrote:
You will want the patch I just sent,
Btrfs: create the uuid tree on remount rw
and that should fix the snapshot problems. Thanks,
Thanks Josef - you can close this bug:
Free btrfs_path structure before returning due to failure of calls to
btrfs_search_slot() or btrfs_next_leaf() functions.
Signed-off-by: chandan chan...@linux.vnet.ibm.com
---
cmds-check.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/cmds-check.c b/cmds-check.c
Miao Xie miaox at cn.fujitsu.com writes:
The patchset enhanced btrfs qgroup show command.
Firstly, we restructure show_qgroups, make it easy to add new features.
And then we add '-p', '-c', '-l', and '-e' options to print the parent
qgroup id, child qgroup id, max referenced size and max
On 09/19/2013 11:19 AM, Saul Wold wrote:
Hi there,
I am attempting to build a rootfs image from an existing rootfs
directory tree. I am using the 0.20 @ 194aa4a of Chris's git repo.
A couple of problems I saw: the target image file needed to exist,
although I think I can patch that
Hi Linus,
Please pull my for-linus branch:
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git for-linus
These are mostly bug fixes and two small performance fixes. The most
important of the bunch are Josef's fix for a snapshotting regression and
Mark's update to fix compile
Currently the fs sync function (super.c:btrfs_sync_fs()) doesn't
wait for delayed work to finish before returning success to the
caller. This change fixes this, ensuring that there's no data loss
if a power failure happens right after fs sync returns success to
the caller and before the next
In inode.c:btrfs_orphan_add() if we failed to insert the orphan
item, we would return without decrementing the orphan count that
we just incremented before attempting the insertion, leaving the
orphan inode count wrong.
In inode.c:btrfs_orphan_del(), we were decrementing the inode
orphan count if
Hello list,
I'm still busy trying to figure out how to get VMs running without
corruption (see
http://www.spinics.net/lists/linux-btrfs/msg27300.html) and in the
process of this, I tried to remove the @vmware subvolume I've made.
This seems to trigger some bug, as dmesg is showing an OOPS (2 I
Not sure if it's anything interesting - I had the following entry in
dmesg a few days ago, on a server with 32 GB RAM. The system is still working
fine.
[1878432.675210] btrfs-qgroup-re: page allocation failure: order:5,
mode:0x104050
[1878432.675319] CPU: 5 PID: 22251 Comm: btrfs-qgroup-re Not
On 09/22/2013 11:14 PM, Dusty Mabe wrote:
Miao Xie miaox at cn.fujitsu.com writes:
The patchset enhanced btrfs qgroup show command.
Firstly, we restructure show_qgroups, make it easy to add new features.
And then we add '-p', '-c', '-l', and '-e' options to print the parent
qgroup id, child
On Sun, 22 Sep 2013 21:55:53 +0100, Filipe David Borba Manana wrote:
Currently the fs sync function (super.c:btrfs_sync_fs()) doesn't
wait for delayed work to finish before returning success to the
caller. This change fixes this, ensuring that there's no data loss
if a power failure
Hi Wang,
Thank you! There is one other thing I have noticed while playing
around with quota and qgroups. If I delete subvolumes I can manage to
get some of the qgroup information to be reported as a negative
number. If you are interested check out my steps at