Hello,
we use BackupPC to back up our hosting machines.
I have recently migrated it to btrfs, so we can use send-receive for offsite
backups of our backups.
I have several btrfs volumes; each hosts an nspawn container, which runs in
a /system subvolume and has its BackupPC data in a /backuppc subvolume.
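The offsite step described above boils down to snapshotting the /backuppc subvolume read-only and piping btrfs send into btrfs receive on the remote side. A rough sketch; the mount points, snapshot names, and the ssh host are hypothetical, and this needs root:

```shell
# Read-only snapshot of the backuppc subvolume (all paths hypothetical)
btrfs subvolume snapshot -r /mnt/vol1/backuppc /mnt/vol1/backuppc.2016-07-20

# Initial full transfer to the offsite box
btrfs send /mnt/vol1/backuppc.2016-07-20 | ssh offsite 'btrfs receive /backups/vol1'

# Subsequent runs can send only the delta against the previous snapshot
btrfs subvolume snapshot -r /mnt/vol1/backuppc /mnt/vol1/backuppc.2016-07-21
btrfs send -p /mnt/vol1/backuppc.2016-07-20 /mnt/vol1/backuppc.2016-07-21 \
    | ssh offsite 'btrfs receive /backups/vol1'
```

The -p parent snapshot must already exist on the receiving side for the incremental send to work.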
Update the following quick/auto tag based on their execution time
007
011
050
100
101
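In xfstests, these tags live in the per-directory group file (one line per test, followed by its tags); retagging the tests above would be an edit along these lines, with the tag sets shown purely illustrative:

```
# tests/btrfs/group (illustrative; real tag sets depend on measured runtimes)
007 auto stress
011 auto quick
050 auto quick
100 auto
101 auto
```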
Two systems are used to determine their execution time.
One is backed by a SATA spinning-rust disk, whose maximum R/W speed is
about 100MB/s, i.e. modern desktop performance (VM1).
The other is a VM inside an OpenStack
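One way to mechanize the quick/auto decision from the measured runtimes is a small helper like the sketch below; the 30-second cutoff is an assumption, not an official xfstests rule (the 'quick' group is only loosely expected to finish within tens of seconds):

```shell
#!/bin/bash
# Classify a test's measured runtime (in seconds) into tag sets.
# The 30s threshold is an assumed cutoff for the 'quick' group.
classify() {
    local secs=$1
    if [ "$secs" -le 30 ]; then
        echo "auto quick"
    else
        echo "auto"
    fi
}

classify 12    # fast enough on both test systems
classify 300   # too slow for 'quick'
```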
At 07/21/2016 07:37 AM, Dave Chinner wrote:
On Wed, Jul 20, 2016 at 03:40:29PM +0800, Qu Wenruo wrote:
At 07/20/2016 03:01 PM, Eryu Guan wrote:
On Tue, Jul 19, 2016 at 01:42:03PM +0800, Qu Wenruo wrote:
This test uses $LOAD_FACTOR, so it should be in 'stress' group. And it
hangs the latest
hello,
On 07/20/2016 04:46 PM, Holger Hoffstätte wrote:
On 07/20/16 07:56, Wang Xiaoguang wrote:
Currently in btrfs, for data space reservation, it does not update
bytes_may_use in btrfs_update_reserved_bytes() and the decrease operation
will be delayed to be done in
hello,
On 07/20/2016 09:18 PM, Josef Bacik wrote:
On 07/20/2016 01:56 AM, Wang Xiaoguang wrote:
In prealloc_file_extent_cluster(), btrfs_check_data_free_space() uses the
wrong file offset for reloc_inode: it uses cluster->start and cluster->end,
which are actually the extent's bytenr. The correct
hello,
On 07/20/2016 09:35 PM, Josef Bacik wrote:
On 07/20/2016 01:56 AM, Wang Xiaoguang wrote:
This patch can fix some false ENOSPC errors, below test script can
reproduce one false ENOSPC error:
#!/bin/bash
dd if=/dev/zero of=fs.img bs=$((1024*1024)) count=128
dev=$(losetup --show -f fs.img)
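Completed as a hypothetical end-to-end reproducer (the steps past mkfs and the mount point are guesses; -M forces mixed data+metadata block groups, matching the mixed-mode note elsewhere in the thread):

```shell
#!/bin/bash
# Hypothetical completion of the truncated reproducer above; needs root.
dd if=/dev/zero of=fs.img bs=$((1024*1024)) count=128
dev=$(losetup --show -f fs.img)
mkfs.btrfs -f -M "$dev"           # -M: mixed data+metadata block groups
mkdir -p /tmp/btrfs-mnt
mount "$dev" /tmp/btrfs-mnt
# Preallocating more than half of the 128M fs used to fail with a false ENOSPC
fallocate -l 80M /tmp/btrfs-mnt/prealloc
umount /tmp/btrfs-mnt
losetup -d "$dev"
```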
hello,
On 07/20/2016 09:22 PM, Josef Bacik wrote:
On 07/20/2016 01:56 AM, Wang Xiaoguang wrote:
In next patch, btrfs_clear_bit_hook() will not call
btrfs_free_reserved_data_space_noquota() to update btrfs_space_info's
bytes_may_use unless it has EXTENT_DO_ACCOUNTING or
EXTENT_CLEAR_DATA_RESV,
While processing delayed refs, we may update block group's statistics
and attach it to cur_trans->dirty_bgs, and later writing dirty block
groups will process the list, which happens during
btrfs_commit_transaction().
For whatever reason, the transaction is aborted and dirty_bgs
is not processed
This adds several ASSERT()s to report memory leaks of block group cache.
Signed-off-by: Liu Bo
---
fs/btrfs/extent-tree.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 82b912a..50bd683 100644
---
On Wed, Jul 20, 2016 at 03:40:29PM +0800, Qu Wenruo wrote:
> At 07/20/2016 03:01 PM, Eryu Guan wrote:
> >On Tue, Jul 19, 2016 at 01:42:03PM +0800, Qu Wenruo wrote:
> >>>
> >>>This test uses $LOAD_FACTOR, so it should be in 'stress' group. And it
> >>>hangs the latest kernel, stop other tests from
On Wed, Jul 20, 2016 at 03:01:00PM +0800, Eryu Guan wrote:
> For running tests, "./check -g auto -x dangerous" might fit your need.
Yes, that's precisely the way the dangerous group is intended to be
used: as an exclusion filter that gets applied to other test group
definitions.
Cheers,
Dave.
--
On Thu, 21 Jul 2016 00:19:41 +0200, Kai Krakow wrote:
> On Fri, 15 Jul 2016 20:45:32 +0200, Matt wrote:
>
> > > On 15 Jul 2016, at 14:10, Austin S. Hemmelgarn
> > > wrote:
> > >
> > > On 2016-07-15 05:51, Matt wrote:
>
On Fri, 15 Jul 2016 20:45:32 +0200, Matt wrote:
> > On 15 Jul 2016, at 14:10, Austin S. Hemmelgarn
> > wrote:
> >
> > On 2016-07-15 05:51, Matt wrote:
> >> Hello
> >>
> >> I glued together 6 disks in linear LVM fashion (no RAID) to obtain
> >> one
On Fri, Jul 15, 2016 at 12:52 PM, Austin S. Hemmelgarn wrote:
> Your own 'btrfs fi df' output clearly says that more than 99% of your data
> chunks are in a RAID0 profile, hence my statement.
Somewhere back in ancient Btrfs list history, there was a call to change
the mkfs
On Sun, Jul 17, 2016 at 3:08 AM, Hendrik Friedel wrote:
> Well, btrfs writes data very differently from many other file systems. On
> every write the file is copied to another place, even if just one bit is
> changed. That's special and I am wondering whether that could
On Wed, Jul 20, 2016 at 03:34:50PM +0800, Wang Xiaoguang wrote:
> Currently in btrfs, there is something wrong with data space reservation.
> For example, if we try to preallocate more than half of whole fs space,
> ENOSPC will occur, but indeed fs still has free space to satisfy this
> request.
>
On 20.07.2016 15:50, Chris Mason wrote:
>
>
> On 07/19/2016 08:11 PM, Gabriel C wrote:
>>
>>
>> On 19.07.2016 13:05, Chris Mason wrote:
>>> On Mon, Jul 11, 2016 at 11:28:01AM +0530, Chandan Rajendra wrote:
Hi Chris,
I am able to reproduce the issue with the 'short-write'
On 07/19/2016 08:11 PM, Gabriel C wrote:
On 19.07.2016 13:05, Chris Mason wrote:
On Mon, Jul 11, 2016 at 11:28:01AM +0530, Chandan Rajendra wrote:
Hi Chris,
I am able to reproduce the issue with the 'short-write' program. But before
the call trace associated with btrfs_destroy_inode(), I
On 07/20/2016 01:56 AM, Wang Xiaoguang wrote:
This patch can fix some false ENOSPC errors, below test script can
reproduce one false ENOSPC error:
#!/bin/bash
dd if=/dev/zero of=fs.img bs=$((1024*1024)) count=128
dev=$(losetup --show -f fs.img)
mkfs.btrfs -f -M
On 07/20/2016 01:56 AM, Wang Xiaoguang wrote:
In next patch, btrfs_clear_bit_hook() will not call
btrfs_free_reserved_data_space_noquota() to update btrfs_space_info's
bytes_may_use unless it has EXTENT_DO_ACCOUNTING or EXTENT_CLEAR_DATA_RESV,
as for the reason, please see the next patch for
On 07/20/2016 01:56 AM, Wang Xiaoguang wrote:
This patch divides btrfs_update_reserved_bytes() into
btrfs_add_reserved_bytes() and btrfs_free_reserved_bytes(), and
next patch will extend btrfs_add_reserved_bytes() to fix some
false ENOSPC error, please see later patch for detailed info.
On 07/20/2016 01:56 AM, Wang Xiaoguang wrote:
In prealloc_file_extent_cluster(), btrfs_check_data_free_space() uses the
wrong file offset for reloc_inode: it uses cluster->start and cluster->end,
which are actually the extent's bytenr. The correct value should be
cluster->[start|end] minus block group's
On Wed, Jul 20, 2016 at 8:34 AM, Wang Xiaoguang wrote:
> Currently in btrfs, there is something wrong with data space reservation.
> For example, if we try to preallocate more than half of whole fs space,
> ENOSPC will occur, but indeed fs still has free space to
On 07/20/16 07:56, Wang Xiaoguang wrote:
> Currently in btrfs, for data space reservation, it does not update
> bytes_may_use in btrfs_update_reserved_bytes() and the decrease operation
> will be delayed until extent_clear_unlock_delalloc(); for
> fallocate(2), the decrease operation is even
At 07/20/2016 03:01 PM, Eryu Guan wrote:
On Tue, Jul 19, 2016 at 01:42:03PM +0800, Qu Wenruo wrote:
This test uses $LOAD_FACTOR, so it should be in the 'stress' group. It also
hangs the latest kernel, stopping other tests from running, so I think we can
add it to the 'dangerous' group as well.
Thanks for
Currently in btrfs, there is something wrong with data space reservation.
For example, if we try to preallocate more than half of whole fs space,
ENOSPC will occur, but indeed fs still has free space to satisfy this
request.
To easily reproduce this bug, this test case needs the fs to be in mixed mode (btrfs
On 07/20/16 07:31, Stefan Priebe - Profihost AG wrote:
> Hi list,
>
> while I didn't have the problem for some months, I'm now getting ENOSPC on
> a regular basis on one host.
Well, it's getting better. :)
> if i umount the volume i get traces (i already did a clear_cache 4 days
> ago to
When over 1000 file extents refer to one extent, find_parent_nodes()
will obviously be slow, due to the O(n^2)~O(n^3) loops inside
__merge_refs().
The following ftrace shows the cubic growth of execution time:
256 refs
 5) + 91.768 us  |  __add_keyed_refs.isra.12 [btrfs]();
 5)   1.447 us   |
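A trace like the one above can be captured with the function_graph tracer; a rough sketch, assuming debugfs is mounted at /sys/kernel/debug and run as root, with the trigger step only an example:

```shell
cd /sys/kernel/debug/tracing
echo function_graph > current_tracer
echo find_parent_nodes > set_graph_function   # time just the backref walk
echo 1 > tracing_on
# ... exercise backref resolution here, e.g. via
#     btrfs inspect-internal logical-resolve ...
echo 0 > tracing_on
cat trace
```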
On Tue, Jul 19, 2016 at 01:42:03PM +0800, Qu Wenruo wrote:
> >
> > This test uses $LOAD_FACTOR, so it should be in 'stress' group. And it
> > hangs the latest kernel, stop other tests from running, I think we can
> > add it to 'dangerous' group as well.
> >
>
> Thanks for this info.
> I'm
hello,
On 07/20/2016 01:31 PM, Stefan Priebe - Profihost AG wrote:
Hi list,
while I didn't have the problem for some months, I'm now getting ENOSPC on
a regular basis on one host.
It would be great if someone can help me debugging this.
Some basic information:
# touch /vmbackup/abc
touch: