On Fri, Nov 18, 2011 at 03:52:00PM +1100, Chris Samuel wrote:
On 18/11/11 08:04, Mike Fleetwood wrote:
It seems overly harsh to fail a resize of a btrfs file system to the
same size when a shrink or grow would succeed. User app GParted trips
over this error. Allow it by bypassing the shrink or grow operation.
I can't get test 254 to pass; below is the output:
254 3s ... - output mismatch (see 254.out.bad)
...
ID 256 top level 5 path snap
-ID 257 top level 5 path subvol
+ID 258 top level 5 path subvol
When space cache is enabled (and now mkfs.btrfs always enables it),
there will be some space cache inodes in
When I ran the xfstests, I found the test tasks were blocked on meta-data
reservation.
By debugging, I found the reason for this bug:
start transaction
        |
        v
reserve meta-data space
        |
        v
flush delayed allocation -> iput inode -> evict inode
        ^
Hi,
Here is what I'm planning for GFS2:
Add sync of metadata after fallocate for O_SYNC files to ensure that we
meet expectations for everything being on disk in this case.
Unfortunately, the offset and len parameters are modified during the
course of the fallocate function, so I've had to add
Okay, I installed 3.1.1 and continue to oops when trying to delete a
directory. Please let me know if you would like any additional
information. I'm going to rebuild from a backup later today.
Cheers, Tim
On Wed, Nov 16, 2011 at 6:00 PM, David Sterba d...@jikos.cz wrote:
Hi,
On Wed, Nov 16,
On 6/1/2011 7:20 PM, Hugo Mills wrote:
Over the last few weeks, I've been playing with a foolish idea,
mostly triggered by a cluster of people being confused by btrfs's free
space reporting (df vs btrfs fi df vs btrfs fi show). I also wanted an
excuse, and some code, to mess around in the
We've been hitting panics when running xfstest 13 in a loop for long periods of
time. This problem has in fact always existed, so we've been hitting it
randomly for a while. Basically, a thread comes into the allocator and reads
the space cache off of disk
It seems overly harsh to fail a resize of a btrfs file system to the
same size when a shrink or grow would succeed. User app GParted trips
over this error. Allow it by bypassing the shrink or grow operation.
Signed-off-by: Mike Fleetwood mike.fleetw...@googlemail.com
---
v2: Fix FS shrink
Hello Josef,
I have two new dmesg logs (ceph osd 0 and 1). Neither filesystem was
responding anymore. Please let me know if you need more information or
another run. Both were made with the 3.1.1 kernel and your patches
applied (the patches with the extra warning messages).
Paste:
OSD.0:
Al pointed out that if we fail to start a worker for whatever reason (ENOMEM
basically), we could leak our count for num_start_workers, and so we'd think we
had more workers than we actually do. This could cause us to shrink workers
when we shouldn't or not start workers when we should. So check
On Tue, Nov 15, 2011 at 09:19:12PM +0100, Stefan Kleijkers wrote:
Hello Josef,
You can find the complete dmesg paste on: http://pastebin.com/R4dFfSdQ
But I doubt it will add more information.
Sorry I forgot about you :). Here is a new debug patch; it will print something
out right
On Fri, Nov 18, 2011 at 02:38:54PM -0500, Josef Bacik wrote:
Al pointed out that if we fail to start a worker for whatever reason (ENOMEM
basically), we could leak our count for num_start_workers, and so we'd think we
had more workers than we actually do. This could cause us to shrink
I'm running Ceph OSDs on btrfs and have managed to corrupt several of
them so that on mount I get an error:
root@cephstore6356:~# mount /dev/sde1 /mnt/osd.2/
2011 Nov 18 10:44:52 cephstore6356 [68494.771472] btrfs: could not do
orphan cleanup -116
mount: Stale NFS file handle
Attempting to mount
On Fri, Nov 18, 2011 at 08:20:56PM +, Al Viro wrote:
On Fri, Nov 18, 2011 at 02:38:54PM -0500, Josef Bacik wrote:
Al pointed out that if we fail to start a worker for whatever reason (ENOMEM
basically), we could leak our count for num_start_workers, and so we'd think we
had more
On Sat, Nov 19, 2011 at 01:37:39AM +, Al Viro wrote:
On Fri, Nov 18, 2011 at 08:20:56PM +, Al Viro wrote:
On Fri, Nov 18, 2011 at 02:38:54PM -0500, Josef Bacik wrote:
Al pointed out that if we fail to start a worker for whatever reason (ENOMEM
basically), we could leak our