On Fri, Feb 13, 2015 at 12:19 AM, Roel Niesen
wrote:
> Hello,
>
> Sometimes my system is hanging for a few seconds.
> When I start top, I see this :
>
> %cpu: 80.7 command: btrfs-transacti
>
> Is it normal for btrfs-transaction to take such high CPU?
Approximately how many subvolumes and snapshots are on the filesystem?
Hello,
Sometimes my system is hanging for a few seconds.
When I start top, I see this :
%cpu: 80.7 command: btrfs-transacti
Is it normal for btrfs-transaction to take such high CPU?
uname -a:
Linux sanos1 3.13.11-ckt13 #1 SMP Tue Feb 3 12:06:18 CET 2015 x86_64 x86_64
x86_64 GNU/Linux
btrf
On Fri, Feb 13, 2015 at 10:20:25AM +0900, Tomasz Chmielewski wrote:
> FYI, still seeing this with 3.19:
I also got this warning (it can be reproduced by running
xfstests/btrfs/057 in a loop) and tried to fix it, but I failed.
I think you also have several snapshots, and this warning may occur
after
Hello guys,
>
> On Thu, Feb 12, 2015 at 05:33:41AM +0100, Kai Krakow wrote:
>> Duncan <1i5t5.dun...@cox.net> schrieb:
>>
>>> P. Remek posted on Tue, 10 Feb 2015 18:44:33 +0100 as excerpted:
>>>
In the test, I use --direct=1 parameter for fio which basically does
O_DIRECT on target fil
I'm going to amend what I wrote earlier. The problem with the seed
device method, it won't let you change the leafsize. And that means
you'll need to go with a new volume with mkfs, and migrate data with
btrfs send receive instead.
And to clarify, you don't need to thin out subvolumes to start out
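A migration along those lines might look like the sketch below (the device names, mount points, and snapshot paths are all assumptions for illustration; btrfs send operates on read-only snapshots):

```shell
# Assumed devices/paths -- adjust to your setup.
# 1) Create the new filesystem with the desired 16K node size
mkfs.btrfs -n 16384 /dev/sdb1
mkdir -p /mnt/old /mnt/new
mount /dev/sda1 /mnt/old
mount /dev/sdb1 /mnt/new

# 2) btrfs send needs a read-only snapshot as its source
btrfs subvolume snapshot -r /mnt/old/home /mnt/old/home_ro

# 3) Stream the snapshot into the new filesystem
btrfs send /mnt/old/home_ro | btrfs receive /mnt/new
```

Repeat the snapshot/send step per subvolume; incremental sends (`-p <parent>`) can shorten later passes.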
On Thu, Feb 12, 2015 at 05:33:41AM +0100, Kai Krakow wrote:
> Duncan <1i5t5.dun...@cox.net> schrieb:
>
> > P. Remek posted on Tue, 10 Feb 2015 18:44:33 +0100 as excerpted:
> >
> >> In the test, I use --direct=1 parameter for fio which basically does
> >> O_DIRECT on target file. The O_DIRECT shou
Swâmi Petaramesh posted on Thu, 12 Feb 2015 14:26:09 +0100 as excerpted:
> I have a BTRFS RAID-1 FS made from 2x 2TB SATA mechanical drives.
>
> It was created a while ago, with the defaults of that time, i.e. 4K leaf sizes.
>
> It also contains *lots* of subvols and snapshots.
>
> It has become very s
On Thu, Feb 12, 2015 at 11:16:51AM -0500, Josef Bacik wrote:
> On our gluster boxes we stream large tar balls of backups onto our fses. With
> 160gb of ram this means we get really large contiguous ranges of dirty data,
> but
> the way our ENOSPC stuff works is that as long as it's contiguous we
Original Message
Subject: Re: [PATCH v3 00/10] Enhance btrfs-find-root and open_ctree()
to provide better chance on damaged btrfs.
From: David Sterba
To: Qu Wenruo
Date: 2015-02-12 21:16
On Thu, Feb 12, 2015 at 09:36:01AM +0800, Qu Wenruo wrote:
Subject: Re: [PATCH v3 00/
FYI, still seeing this with 3.19:
[196992.429463] ------------[ cut here ]------------
[196992.429526] WARNING: CPU: 1 PID: 26328 at fs/btrfs/qgroup.c:1414
btrfs_delayed_qgroup_accounting+0x9f3/0xa0d [btrfs]()
[196992.429617] Modules linked in: xt_nat xt_tcpudp ipt_MASQUERADE
nf_nat_masquerade_
Ed Tomlinson schrieb:
> On Tuesday, February 10, 2015 2:17:43 AM EST, Kai Krakow wrote:
>> Tobias Holst schrieb:
>>
>>> and "btrfs scrub status /[device]" gives me the following output:
"scrub status for [UUID]
scrub started at Mon Feb 9 18:16:38 2015 and was aborted after 2008
s
We have a scenario where after the fsync log replay we can lose file data
that had been previously fsync'ed if we added a hard link for our inode
and after that we sync'ed the fsync log (for example by fsync'ing some
other file or directory).
This is because when adding a hard link we updated th
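The ordering described above can be sketched with plain coreutils on any scratch directory (this only illustrates the write/fsync/link/fsync sequence; it does not by itself reproduce the data loss, which needs a crash plus log replay on an affected kernel):

```shell
set -e
dir=$(mktemp -d)

# 1) Write file data and fsync it (conv=fsync issues fsync(2) after writing)
dd if=/dev/zero of="$dir/foo" bs=4k count=1 conv=fsync 2>/dev/null

# 2) Add a hard link for the same inode
ln "$dir/foo" "$dir/bar"

# 3) Persist the fsync log again via an fsync of some *other* file
dd if=/dev/zero of="$dir/other" bs=4k count=1 conv=fsync 2>/dev/null

# On affected kernels, a crash followed by fsync log replay at this
# point could lose foo's previously fsync'ed data.
echo "sequence complete"
```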
Hi
I don't remember the exact mkfs.btrfs options anymore but
> ls /sys/fs/btrfs/[UUID]/features/
shows the following output:
> big_metadata compress_lzo extended_iref mixed_backref raid56
I also tested my device with a short
> hdparm -tT /dev/dm5
and got
> /dev/mapper/sdc_crypt:
> Timing cac
On Thu, Feb 12, 2015 at 04:54:47PM -0500, Nishant Agrawal wrote:
> Hi,
>
> I am new to btrfs and trying to learn it.
>
> I have downloaded the btrfs-progs code from the git repository but am not
> able to compile it.
>
> Can someone help me with the steps to compile these user-space programs?
There's
This test is motivated by an fsync issue discovered in btrfs.
The issue was that we could lose file data, that was previously
fsync'ed successfully, if we end up adding a hard link to our
inode and then persist the fsync log later via an fsync of another
inode, for example. This is similar to my previ
We have a scenario where after the fsync log replay we can lose file data
that had been previously fsync'ed if we added a hard link for our inode
and after that we sync'ed the fsync log (for example by fsync'ing some
other file or directory).
This is because when adding a hard link we updated th
Hi,
I am new to btrfs and trying to learn it.
I have downloaded the btrfs-progs code from the git repository but am not
able to compile it.
Can someone help me with the steps to compile these user-space programs?
Thanks
Nishant
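For what it's worth, a typical from-git build at the time looked roughly like this (the dependency package names are Debian/Ubuntu guesses and vary by distribution):

```shell
# Build dependencies (Debian/Ubuntu names -- adjust for your distro)
apt-get install build-essential uuid-dev libblkid-dev \
    liblzo2-dev zlib1g-dev libattr1-dev e2fslibs-dev

git clone git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git
cd btrfs-progs
make              # builds btrfs, mkfs.btrfs and friends
sudo make install # installs under /usr/local by default
```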
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs"
On Thu, Feb 12, 2015 at 6:26 AM, Swâmi Petaramesh wrote:
> It also contains *lots* of subvols and snapshots.
About how many is "lots"?
> 1/ Could I first pull a disk out of the current RAID-1 config, losing
> redundancy
> without breaking anything else ?
>
>
> 2/ Then reset the removed HD, an
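For anyone wondering how to answer that, a rough count can be had with the commands below (assuming the filesystem is mounted at /mnt; adjust the path):

```shell
# All subvolumes (snapshots included), filesystem mounted at /mnt
btrfs subvolume list /mnt | wc -l
# Snapshots only
btrfs subvolume list -s /mnt | wc -l
```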
Austin S Hemmelgarn schrieb:
> On 2015-02-11 23:33, Kai Krakow wrote:
>> Duncan <1i5t5.dun...@cox.net> schrieb:
>>
>>> P. Remek posted on Tue, 10 Feb 2015 18:44:33 +0100 as excerpted:
>>>
In the test, I use --direct=1 parameter for fio which basically does
O_DIRECT on target file. The O
Also,
Another thought came to me. It seems that the system only has issues when a
sync operation happens. As to why, I don't know, but maybe someone else on the
list can shed some light on this.
end
-- above line is for a mailing list.
--
Sent from my Android device with K-9 Mail. Please excuse
Steven,
The only reason I brought up swap space is that it seems the system may be
trying to utilize it due to low physical memory. How much RAM is in the machine
running Docker? The main thing that makes me think it's RAM is this:
[146280.252150] []
>try_to_free_mem_cgroup_pages+0xa
Hi Lucas, we use Java to run mostly our own programs. None do anything
special or interesting with the disk; it is simply where we deploy our .jar
files, plus scratch space for e.g. logs.
The fragmentation idea is interesting, but it seems unlikely that the disk
would be fatally fragmented at ~
This test is motivated by an fsync issue discovered in btrfs.
The issue was that we could lose file data, that was previously
fsync'ed successfully, if we end up adding a hard link to our
inode and then persist the fsync log later via an fsync of another
inode, for example.
The btrfs issue was fixed
We have a scenario where after the fsync log replay we can lose file data
that had been previously fsync'ed if we added a hard link for our inode
and after that we sync'ed the fsync log (for example by fsync'ing some
other file or directory).
This is because when adding a hard link we updated th
On our gluster boxes we stream large tar balls of backups onto our fses. With
160gb of ram this means we get really large contiguous ranges of dirty data, but
the way our ENOSPC stuff works is that as long as it's contiguous we only hold
metadata reservation for one extent. The problem is we limi
On 02/11/2015 11:36 PM, Liu Bo wrote:
On Wed, Feb 11, 2015 at 03:08:59PM -0500, Josef Bacik wrote:
On our gluster boxes we stream large tar balls of backups onto our fses. With
160gb of ram this means we get really large contiguous ranges of dirty data, but
the way our ENOSPC stuff works is tha
For newly restored metadumps we can actually mount the fs and use it properly
except that the data obviously doesn't match properly. To get around this make
us skip csum validation if the metadump_v2 flag is set on the fs, this will
allow us to reproduce balance issues with metadumps. Thanks,
Si
Our gluster boxes were hitting a problem where they'd run out of space when
updating the block group cache and therefore wouldn't be able to update the free
space inode. This is a problem because this is how we invalidate the cache and
protect ourselves from errors further down the stack, so if th
Hi,
I have a BTRFS RAID-1 FS made from 2x 2TB SATA mechanical drives.
It was created a while ago, with the defaults of that time, i.e. 4K leaf sizes.
It also contains *lots* of subvols and snapshots.
It has become very slow over time, and I know that BTRFS performs better with
the new 16K leaf sizes.
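Since the node (leaf) size is fixed at mkfs time, moving to 16K means recreating the filesystem and migrating the data over; a sketch, with device names as placeholders:

```shell
# Node size cannot be changed in place -- a fresh mkfs is required.
# 16K has been the mkfs.btrfs default since btrfs-progs v3.12.
mkfs.btrfs -n 16384 -m raid1 -d raid1 /dev/sdX /dev/sdY
```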
On Thu, Feb 12, 2015 at 09:36:01AM +0800, Qu Wenruo wrote:
> Subject: Re: [PATCH v3 00/10] Enhance btrfs-find-root and open_ctree()
> to provide better chance on damaged btrfs.
> From: David Sterba
> To: Qu Wenruo
> Date: 2015-02-12 01:52
> > On Wed, Feb 11, 2015 at 08:33:03AM +0800, Qu Wenruo
On 2015-02-11 23:33, Kai Krakow wrote:
Duncan <1i5t5.dun...@cox.net> schrieb:
P. Remek posted on Tue, 10 Feb 2015 18:44:33 +0100 as excerpted:
In the test, I use --direct=1 parameter for fio which basically does
O_DIRECT on target file. The O_DIRECT should guarantee that the
filesystem cache
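The fio invocation under discussion would look something like this (every parameter here is an illustrative assumption, not the poster's actual job):

```shell
fio --name=direct-randwrite --filename=/mnt/btrfs/testfile \
    --direct=1 --rw=randwrite --bs=4k --size=1G \
    --ioengine=libaio --iodepth=32
```

With `--direct=1`, fio opens the target file with O_DIRECT so writes bypass the page cache.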
[ Please CC me on replies, I'm not on the list ]
[ This is a followup to http://www.spinics.net/lists/linux-btrfs/msg41496.html ]
Hello linux-btrfs,
I've been having troubles keeping my Apache Mesos / Docker slave nodes stable.
After some period of load, tasks begin to hang. Once this happens t
On Thu, Feb 12, 2015 at 10:53:14AM +0100, David Sterba wrote:
> Adding Greg to CC.
>
> On Thu, Feb 12, 2015 at 07:03:37AM +0800, Anand Jain wrote:
> > drivers/cpufreq/cpufreq.c is already using this function. And now btrfs
> > needs it as well. export symbol kobject_move().
> >
> > Signed-off-by:
Adding Greg to CC.
On Thu, Feb 12, 2015 at 07:03:37AM +0800, Anand Jain wrote:
> drivers/cpufreq/cpufreq.c is already using this function. And now btrfs
> needs it as well. export symbol kobject_move().
>
> Signed-off-by: Anand Jain
> ---
> v1->v2: Didn't notice there wasn't my signed-off, now a
On Wed, Feb 11, 2015 at 09:17:22PM -0500, Kevin Mulvey wrote:
> This is a patch to inode.c that fixes some spacing errors found by
> checkpatch.pl
https://btrfs.wiki.kernel.org/index.php/Project_ideas#Cleanup_projects
"Please note that pure whitespace and style reformatting changes are not
reall
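For reference, warnings like those come from running checkpatch from the kernel tree, e.g.:

```shell
# From the root of a kernel source tree:
./scripts/checkpatch.pl -f fs/btrfs/inode.c   # check a file in place
./scripts/checkpatch.pl my-fix.patch          # check a patch before sending
```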
On Wed, Feb 11, 2015 at 11:12:39AM +, Filipe Manana wrote:
> We try to lock a mutex while the current task state is not TASK_RUNNING,
> which results in the following warning when CONFIG_DEBUG_LOCK_ALLOC=y:
>
> [30736.772501] ------------[ cut here ]------------
> [30736.774545] WARNING: CPU:
On Wed, Feb 11, 2015 at 03:46:33PM +0100, Tobias Holst wrote:
> Hmm, it looks like it is getting worse... Here are some parts of my
> syslog, including two crashed btrfs-threads:
>
> So I am still getting many of these:
> > BTRFS (device dm-5): parent transid verify failed on 25033166798848 wanted