When iputting an inode, we may leave its delayed node behind if it still has
delayed items that have not been dealt with. So when the inode is read again,
we must look up the corresponding delayed node and use the information in it to
initialize the inode. Otherwise we will get inconsistent inode information, it
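The lookup-on-read idea can be sketched in plain userspace C; every structure and function name below is a simplified stand-in, not the real btrfs code:

```c
/* Simplified userspace model of the idea above: a delayed node that
 * survives iput is looked up again when the inode is read, and its
 * pending metadata wins over the stale on-disk copy. All names are
 * illustrative stand-ins, not the real btrfs structures. */
#include <assert.h>
#include <stddef.h>

struct delayed_node {
    unsigned long ino;
    int pending_items;      /* delayed items not yet written out */
    unsigned long i_size;   /* newest metadata lives here */
};

struct inode {
    unsigned long ino;
    unsigned long i_size;
};

#define MAX_NODES 16
static struct delayed_node *nodes[MAX_NODES];

static struct delayed_node *lookup_delayed_node(unsigned long ino)
{
    for (int i = 0; i < MAX_NODES; i++)
        if (nodes[i] && nodes[i]->ino == ino)
            return nodes[i];
    return NULL;
}

/* On read, prefer the delayed node's metadata over the on-disk value. */
static void init_inode(struct inode *inode, unsigned long disk_size)
{
    struct delayed_node *dn = lookup_delayed_node(inode->ino);

    inode->i_size = (dn && dn->pending_items) ? dn->i_size : disk_size;
}
```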
On 21.06.2011 17:12, Jan Schmidt wrote:
On 21.06.2011 16:37, David Sterba wrote:
[...]
Something is going wrong here:
Example:
ipath buffer and scratch are 32K each, i.e. the overly sized
ref_name_len will fit there:
[ 8766.928232] btrfs: ino2name: 266 p1/
[ 8767.440964] i2p: [4]
Hi!
I scanned for relevant topics in the last two years but, except for putting
a swap file on compress=lzo this March, I didn't find anything.
Does compression make sense on SSD? Or more specifically:
1) In what chunk sizes does BTRFS compress? How much data is affected when
a byte is changed?
On Tue, 21 Jun 2011 10:15:30 +0900, Tsutomu Itoh wrote:
[SNIP]
Bad news.
I changed my test environment to 'btrfs-unstable + for-linus', and I encountered
the following panic without inode_cache (about 4 hours after the test begins).
btrfs: relocating block group 49161437184 flags 9
btrfs: found
Hi,
On Sun, Jun 19, 2011 at 06:53:28PM +0800, Daniel J Blueman wrote:
I hit this BTRFS oops [1] in 3.0-rc3, clearly due to filesystem corruption.
If lookup_extent_backref fails, path->nodes[0] could reasonably be
null, so look before leaping [2].
I think the check should be placed into
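A minimal look-before-leap sketch of the check being discussed; the types and the simulated failure mode are stubs, and only the function and field names follow the mail:

```c
/* Minimal look-before-leap sketch: check both the return value and
 * path->nodes[0] before dereferencing it. Types and the failure mode
 * are stubs; only the function and field names follow the mail. */
#include <assert.h>
#include <stddef.h>

struct extent_buffer { int dummy; };
struct btrfs_path { struct extent_buffer *nodes[8]; };

static struct extent_buffer stub_eb;

static int lookup_extent_backref(struct btrfs_path *path, int simulate_fail)
{
    if (simulate_fail) {
        path->nodes[0] = NULL;  /* on failure the slot may be left null */
        return -1;
    }
    path->nodes[0] = &stub_eb;
    return 0;
}

static int process_backref(struct btrfs_path *path, int simulate_fail)
{
    int ret = lookup_extent_backref(path, simulate_fail);

    if (ret < 0 || !path->nodes[0])
        return -1;              /* bail out instead of oopsing */
    return path->nodes[0]->dummy;
}
```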
On Wed, Jun 22, 2011 at 08:13:42PM +0200, Jan Kara wrote:
No problem. Just we have to somehow coordinate with Christoph... Either
he can avoid touching ext4 and merge his patch set after you merge my patch
or he can take my patch instead of his ext4 change. Since my patch touches
only
One of the casts in ioctl.c loses the __user annotation; cast so it is
correctly maintained.
Signed-off-by: Daniel J Blueman daniel.blue...@gmail.com
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index a3c4751..79c32d8 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -2708,7 +2708,7 @@
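The hunk itself is truncated here, but the kind of fix described can be sketched as follows. Note that `__user` is a sparse annotation and only has teeth under `__CHECKER__`; the struct name below is made up, not the one the hunk actually touches:

```c
/* Illustrative sketch of keeping the __user annotation in a cast.
 * __user expands to nothing in a normal build, but under sparse
 * (__CHECKER__) it marks the pointer as user memory so direct
 * dereferences get flagged. The struct name below is made up, not
 * the one the truncated hunk actually touches. */
#include <assert.h>
#include <stddef.h>

#ifdef __CHECKER__
#define __user __attribute__((noderef, address_space(1)))
#else
#define __user
#endif

struct example_ioctl_args {
    unsigned long long value;
};

static int handle_ioctl(void *arg)
{
    /* Wrong:  (struct example_ioctl_args *)arg        -- drops __user.
     * Right:  keep the annotation through the cast so sparse can still
     *         verify every access goes through copy_from_user() etc. */
    struct example_ioctl_args __user *uarg =
        (struct example_ioctl_args __user *)arg;

    return uarg == NULL ? -1 : 0;
}
```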
On Thu, Jun 23, 2011 at 11:01:01PM +0800, Daniel J Blueman wrote:
On 23 June 2011 18:31, David Sterba d...@jikos.cz wrote:
(how does one follow up an email in git send-email with the message id?)
git-send-email --in-reply-to=identifier
(if it does not ask for it) and paste the identifier from the mail
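Putting that together, a hedged shell sketch: the .eml file and patch name are hypothetical, and the last line only prints the command instead of actually sending anything.

```shell
# Hedged sketch: extract the Message-Id from a saved copy of the mail
# you are replying to and hand it to git send-email. The .eml file and
# patch name are hypothetical; the last line only prints the command
# instead of actually sending anything.
printf 'Subject: test\nMessage-Id: <1234.abc@example.org>\n\nbody\n' > saved-mail.eml
msgid=$(sed -n 's/^[Mm]essage-[Ii][Dd]: *//p' saved-mail.eml)
echo git send-email --in-reply-to="$msgid" 0001-example.patch
```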
I'm game for trying it; I've been waiting a good bit to recover, and I have spare
drives to save the data to. Just post the git link for it if it's different from
Chris Mason's git tree.
Hi!
Short summary: I suspect that rsync'ing files to a newly created BTRFS
partition with a subvolume *and* enabled space_cache triggers the error
mentioned in the subject line of this mail. I reported this also as:
Bug 38112 - btrfs: failed to load free space cache for block group on
Apologies if this has been asked already...
I am testing btrfs across multiple devices to see if removing them is
working smoothly yet (smallish loopback devices, a few 2GB and one 5GB),
and I always seem to hit a point where I can no longer remove a device.
I've created the filesystem as RAID
On Wed, Jun 22, 2011 at 09:45:20PM -0400, Chris Mason wrote:
Excerpts from Andrej Podzimek's message of 2011-06-22 18:42:28 -0400:
Could I try your hack, pretty please? If there's any chance it could either
resolve this problem
On Thu, Jun 23, 2011 at 07:37:12PM +0200, Martin Steigerwald wrote:
Hi!
Short summary: I suspect that rsync'ing files to a newly created BTRFS
partition with a subvolume *and* enabled space_cache triggers the error
mentioned in the subject line of this mail. I reported this also as:
Bug
A user reported this bug again where we have more bitmaps than we are supposed
to. This is because we failed to load the free space cache, but don't update
the ctl->total_bitmaps counter when we remove entries from the tree. This patch
fixes this problem and we should be good to go again.
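A toy model of the counter drift and the fix; the structure is a simplified stand-in for the btrfs one:

```c
/* Toy model of the bug: every bitmap entry added bumps
 * ctl->total_bitmaps, so every removal must decrement it, or the
 * counter ends up higher than the number of bitmaps that exist.
 * The structure is a simplified stand-in for the btrfs one. */
#include <assert.h>

struct free_space_ctl {
    int entries;        /* entries currently in the tree */
    int total_bitmaps;  /* must track how many of them are bitmaps */
};

static void add_bitmap(struct free_space_ctl *ctl)
{
    ctl->entries++;
    ctl->total_bitmaps++;
}

static void remove_bitmap(struct free_space_ctl *ctl)
{
    ctl->entries--;
    ctl->total_bitmaps--;   /* the decrement the patch adds */
}
```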
On Thu, Jun 23, 2011 at 03:51:37PM -0400, Josef Bacik wrote:
A user reported this bug again where we have more bitmaps than we are supposed
to. This is because we failed to load the free space cache, but don't update
the ctl->total_bitmaps counter when we remove entries from the tree. This
Martin Steigerwald Martin at lichtvoll.de writes:
Hi!
Short summary: I suspect that rsync'ing files to a newly created BTRFS
partition with a subvolume *and* enabled space_cache triggers the error
mentioned in the subject line of this mail. I reported this also as:
Bug 38112 -
The recently posted EVM/IMA-appraisal patches added a new hook,
evm_inode_post_init_security(), to calculate the security.evm extended
attribute (xattr) and an additional call to set_xattr().
security_inode_init_security(lsm_xattr)
set_xattr(lsm_xattr)
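The hook ordering above can be modeled with stand-in functions; only the two hook names come from the mail, and the bodies and xattr names are illustrative:

```c
/* Stand-in model of the call sequence above: the LSM xattr is set
 * first, then the new EVM hook computes security.evm over it and
 * issues a second set_xattr(). Only the two hook names come from the
 * mail; the bodies and xattr names here are illustrative. */
#include <assert.h>
#include <string.h>

static char stored[4][32];
static int nstored;

static void set_xattr(const char *name)
{
    strncpy(stored[nstored++], name, sizeof stored[0] - 1);
}

static void security_inode_init_security(void)
{
    set_xattr("security.selinux");  /* the lsm_xattr */
}

static void evm_inode_post_init_security(void)
{
    set_xattr("security.evm");      /* HMAC covering the lsm_xattr */
}

static void inode_init_security(void)
{
    security_inode_init_security();
    evm_inode_post_init_security();
}
```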
Well, still no cigar; it didn't even change the error output. Thank you though
for at least trying to help. Here is the error info with your patch applied to
fs/btrfs/disk-io.c:
mount -o ro /dev/sdf1 (same for c1, d1, etc.) /btrfs (dmesg output)
[ 1647.330104] btrfs: open_ctree failed
[ 1683.328038]
On 06/23/2011 05:11 PM, Daniel Witzel wrote:
Well, still no cigar; it didn't even change the error output. Thank you though
for at least trying to help. Here is the error info with your patch applied to
fs/btrfs/disk-io.c:
mount -o ro /dev/sdf1 (same for c1, d1, etc.) /btrfs (dmesg output)
[
It was pointed out by 'make versioncheck' that some includes of
linux/version.h were not needed in fs/ (fs/btrfs/ctree.h and
fs/omfs/file.c).
This patch removes them.
Signed-off-by: Jesper Juhl j...@chaosbits.net
---
fs/btrfs/ctree.h | 1 -
fs/omfs/file.c   | 1 -
2 files changed, 0
Could I try your hack, pretty please? If there's any chance it could either
resolve this problem
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg10683.html ,
or at least restore the data from the filesystem, then I'd like to give it a
go. Waiting for the new btrfsck is