Maciej Piechotka uzytkownik2 at gmail.com writes:
btrfsck: root-tree.c:46: btrfs_find_last_root: Assertion
`!(path->slots[0] == 0)' failed.
Maciej, I usually see such assertion-failure messages while playing around
with different kernels and different versions of btrfs-progs. Please make sure
On 08/06/2011 10:16 PM, Andrew Lutomirski wrote:
I've always gotten space cache generation warnings, but some time
after 3.0 they started going nuts. I get:
space cache generation (14667727114112179905) does not match inode (154185)
and other similar messages (with a huge number and a
On Mon, Aug 8, 2011 at 8:14 AM, Josef Bacik jo...@redhat.com wrote:
On 08/06/2011 10:16 PM, Andrew Lutomirski wrote:
I've always gotten space cache generation warnings, but some time
after 3.0 they started going nuts. I get:
space cache generation (14667727114112179905) does not match inode
On 08/08/2011 08:17 AM, Andrew Lutomirski wrote:
On Mon, Aug 8, 2011 at 8:14 AM, Josef Bacik jo...@redhat.com wrote:
On 08/06/2011 10:16 PM, Andrew Lutomirski wrote:
I've always gotten space cache generation warnings, but some time
after 3.0 they started going nuts. I get:
space cache
A user reported getting spammed by this message when moving to 3.0. Since we
switched to the normal checksumming infrastructure, all old free space caches
will be wrong and need to be regenerated, so people are likely to see this
message a lot. Ratelimit it so it doesn't fill up their logs and
On 08/06/2011 04:35 AM, Liu Bo wrote:
When btrfs recovers from a crash, it may hit the oops below:
------------[ cut here ]------------
kernel BUG at fs/btrfs/inode.c:4580!
[...]
RIP: 0010:[<a03df251>] [<a03df251>] btrfs_add_link+0x161/0x1c0 [btrfs]
[...]
Call Trace:
xfstests exposed a problem with preallocation: when it fallocates a range that
already has an extent, we don't set the new i_size properly because we see that
we already have an extent. This isn't right; we should update i_size even if the
space already exists. With this patch we now pass xfstests
Unfortunately it isn't enough to just exit here - the kzalloc() happens in a
loop and the allocated items are added to a linked list whose head is passed
in from the caller.
To fix the BUG_ON() and also provide the semantic that the list passed in is
only modified on success, I create
The priority and refill_used flags are not used anymore, and neither is the
usage counter, so just remove them from btrfs_block_rsv.
Signed-off-by: Josef Bacik jo...@redhat.com
---
fs/btrfs/ctree.h       |  3 ---
fs/btrfs/extent-tree.c | 23 ++-
fs/btrfs/relocation.c  |
We will try to reserve metadata bytes in btrfs_block_rsv_check(), and if we
cannot because we have a transaction open it will return -EAGAIN, so we do not
need to try to commit the transaction again.
Signed-off-by: Josef Bacik jo...@redhat.com
---
fs/btrfs/extent-tree.c | 29
Currently we're starting and stopping a transaction for no real reason, so kill
that and just reserve enough space as if we can truncate all in one transaction.
Also use btrfs_block_rsv_check() for our reserve to minimize the amount of space
we may have to allocate for our slack space. Thanks,
Hi Christian,
Are you still seeing this slowness?
sage
On Wed, 27 Jul 2011, Christian Brunner wrote:
2011/7/25 Chris Mason chris.ma...@oracle.com:
Excerpts from Christian Brunner's message of 2011-07-25 03:54:47 -0400:
Hi,
we are running a ceph cluster with btrfs as its base
On Aug 7, 2011, Alexandre Oliva ol...@lsd.ic.unicamp.br wrote:
tl;dr version: 3.0 produces “bio too big” dmesg entries and silently
corrupts data in “meta-raid1/data-single” configurations on disks with
different max_hw_sectors, where 2.6.38 worked fine.
FWIW, I just got the same problem
insert_ptr() always returns zero, so all the extra error handling can go
away. This makes it trivial to also make copy_for_split() a void function,
as its only return value came from insert_ptr(). Finally, this all makes the
BUG_ON(ret) in split_leaf() meaningless, so I removed that.
Signed-off-by: Mark
On Aug 7, 2011, Alexandre Oliva ol...@lsd.ic.unicamp.br wrote:
2. Removing a partition from the filesystem (say, the external disk)
didn't relocate “single” block groups as such to other disks, as
expected.
/me reads some code and resets expectations about RAID0 in btrfs ;-)
On Wed, Aug 3, 2011 at 3:50 PM, Hugo Mills h...@carfax.org.uk wrote:
Try the instructions on the wiki at [1]. (And please feed back
and/or fix any issues you have with the instructions -- they're still
quite new and probably have awkward corners).
[1]
On Aug 7, 2011, Alexandre Oliva ol...@lsd.ic.unicamp.br wrote:
in very much the same way that it appears to be impossible to go
back from RAID1 to DUP metadata once you temporarily add a second disk,
and any metadata block group happens to be allocated before you remove
it (why couldn't it
Hi, Mark,
(2011/08/06 1:48), Mark Fasheh wrote:
Right now in create_snapshot(), we'll BUG() if btrfs_lookup_dentry() returns
a NULL inode (negative dentry). Getting a negative dentry here probably
isn't ever expected to happen; however, two things lead me to believe that we
should trap this