On Mon, Feb 17, 2014 at 08:42:23AM +0100, Dan van der Ster wrote:
Did you already try this? [1]:
btrfs fi balance start -dusage=5 /mnt/nas3
Cheers, dan
[1]
On Mon, Feb 17, 2014 at 03:20:58AM +, Duncan wrote:
Chris Murphy posted on Sun, 16 Feb 2014 12:54:44 -0700 as excerpted:
Also, 10 hours to balance two disks at 2.3TB seems like a long time. I'm
not sure if that's expected.
FWIW, I think you may not realize how big 2.3 TiB is, and/or
Add close_ctree()s before the returns on errors after open_ctree()
Also merge the err returns into the goto + single return pattern.
Signed-off-by: Gui Hecheng guihc.f...@cn.fujitsu.com
---
changelog:
v1->v2: merge err returns into goto + single return pattern
---
cmds-check.c | 32
Am Dienstag, 11. Februar 2014, 15:50:12 schrieb Dave:
On Tue, Feb 11, 2014 at 10:36 AM, Martin Steigerwald
mar...@lichtvoll.de wrote:
Today I started getting those on 3.14-rc. One core was displayed as 100%
system CPU. I rebooted because the system didn't respond consistently to
user input
On 02/17/2014 05:35 AM, Martin Steigerwald wrote:
Am Dienstag, 11. Februar 2014, 15:50:12 schrieb Dave:
On Tue, Feb 11, 2014 at 10:36 AM, Martin Steigerwald
mar...@lichtvoll.de wrote:
Today I started getting those on 3.14-rc. One core was displayed as 100%
system CPU. I rebooted because the
Am Montag, 17. Februar 2014, 08:06:50 schrieb Chris Mason:
On 02/17/2014 05:35 AM, Martin Steigerwald wrote:
Am Dienstag, 11. Februar 2014, 15:50:12 schrieb Dave:
On Tue, Feb 11, 2014 at 10:36 AM, Martin Steigerwald
mar...@lichtvoll.de wrote:
Today I started getting those on 3.14-rc.
On Mon, Feb 10, 2014 at 01:41:23PM -0500, Josef Bacik wrote:
On 02/10/2014 01:36 PM, cwillu wrote:
IMO, used should definitely include metadata, especially given that we
inline small files.
I can convince myself both that this implies that we should roll it
into b_avail, and that we
On 2/16/14, 9:02 AM, Anand Jain wrote:
Hello,
I wonder if there is any known way to get the mount point directory name
within the btrfs kernel?
For what reason?
Remember that a single block device can be mounted in multiple places (or
bind-mounted, etc), so there is not even
On 02/15/2014 11:23 PM, Chris Murphy wrote:
On Feb 14, 2014, at 11:34 AM, Hugo Mills h...@carfax.org.uk wrote:
On Fri, Feb 14, 2014 at 07:27:57PM +0100, Goffredo Baroncelli wrote:
On 02/14/2014 07:11 PM, Roman Mamedov wrote:
On Fri, 14 Feb 2014 18:57:03 +0100
Goffredo Baroncelli
On Thu, Feb 13, 2014 at 08:18:10PM +0100, Goffredo Baroncelli wrote:
This is the 4th attempt at my patches related to showing how the data
are stored in a btrfs filesystem. I rebased all the patches on the v3.13
btrfs-progs.
FYI, I've added this series as-is into the -next part of the
Hello,
Saw this while fuzzing the kernel with Trinity.
Tommi
[ 396.136048] =========================================================
[ 396.136048] [ INFO: possible irq lock inversion dependency detected ]
[ 396.136048] 3.14.0-rc3 #1 Not tainted
[ 396.136048]
Tests the noatime, relatime, strictatime and nodiratime mount options.
There is an extra check for Btrfs to ensure that the access time is
never updated on read-only subvolumes. (Regression test for bug fixed
with commit 93fd63c2f001ca6797c6b15b696a484b165b4800)
Signed-off-by: Koen De Wit
Thanks for the review, Eric!
Comments inline.
On 02/13/2014 05:42 PM, Eric Sandeen wrote:
On 2/13/14, 9:23 AM, Koen De Wit wrote:
Tests the noatime, relatime, strictatime and nodiratime mount options.
There is an extra check for Btrfs to ensure that the access time is
never updated on
On 2/17/14, 2:25 PM, Koen De Wit wrote:
Tests the noatime, relatime, strictatime and nodiratime mount options.
There is an extra check for Btrfs to ensure that the access time is
never updated on read-only subvolumes. (Regression test for bug fixed
with commit
On Feb 17, 2014, at 1:09 PM, Tommi Rantala tt.rant...@gmail.com wrote:
Hello,
Saw this while fuzzing the kernel with Trinity.
Tommi
[ 396.136048] =========================================================
[ 396.136048] [ INFO: possible irq lock inversion dependency detected ]
[
For what reason?
Remember that a single block device can be mounted in multiple places
(or bind-mounted, etc), so there is not even necessarily a single
answer to that question.
-Eric
Yes indeed. (The attempt is: should we be able to maintain all
the mount points as a list
On Thu, Feb 13, 2014 at 11:18:57AM +0800, Wang Shilong wrote:
The test flow is to run fsstress after triggering a quota rescan.
The rule is simple: we just remove all files and directories,
sync the filesystem, and see if the qgroup's ref and excl equal the nodesize.
Signed-off-by: Wang Shilong
On 02/18/2014 02:46 PM, Dave Chinner wrote:
On Thu, Feb 13, 2014 at 11:18:57AM +0800, Wang Shilong wrote:
The test flow is to run fsstress after triggering a quota rescan.
The rule is simple: we just remove all files and directories,
sync the filesystem, and see if the qgroup's ref and excl equal the nodesize.