On Mon, Mar 23, 2015 at 12:57 AM, Lennart Poettering
mzerq...@0pointer.de wrote:
Heya!
So what's the story on recursive btrfs snapshotting and snapshot
removal? For a while now, systemd has by default been creating btrfs
subvolumes for /var/lib/machines, for example. Now, if that code is run
inside
On 23. mars 2015 13:36, Filipe David Manana wrote:
On Mon, Mar 23, 2015 at 10:35 AM, Torbjørn li...@skagestad.org wrote:
Hi,
After upgrading to 4.0.0-rc5 from 4.0.0-rc4 I see "Object already exists"
after reboot.
The fs is forced read only.
The error does not disappear after additional reboots.
On 23. mars 2015 13:32, Chris Mason wrote:
On Mon, Mar 23, 2015 at 6:35 AM, Torbjørn li...@skagestad.org wrote:
Hi,
After upgrading to 4.0.0-rc5 from 4.0.0-rc4 I see "Object already
exists" after reboot.
The fs is forced read only.
The error does not disappear after additional reboots.
On Mon, Mar 23, 2015 at 10:35 AM, Torbjørn li...@skagestad.org wrote:
Hi,
After upgrading to 4.0.0-rc5 from 4.0.0-rc4 I see "Object already exists"
after reboot.
The fs is forced read only.
The error does not disappear after additional reboots.
If you go back to 4.0.0-rc4, does the error
On Mon, Mar 23, 2015 at 6:35 AM, Torbjørn li...@skagestad.org wrote:
Hi,
After upgrading to 4.0.0-rc5 from 4.0.0-rc4 I see "Object already
exists" after reboot.
The fs is forced read only.
The error does not disappear after additional reboots.
mount|grep sda1
/dev/sda1 on / type btrfs
On Mon, Mar 23, 2015 at 4:23 AM, Anand Jain anand.j...@oracle.com wrote:
Do you still have the problem? Can you please confirm on the latest btrfs?
Since I am fixing the devices part of btrfs, I am a bit nervous.
I'm having a similar problem. I'm getting some kind of btrfs
corruption that
While committing a transaction we free the log roots before we write the
new super block. Freeing the log roots implies marking the disk location
of every node/leaf (metadata extent) as pinned before the new super block
is written. This is to prevent the disk location of log metadata extents
from
Got this with 4.0.0-rc5 when doing a degraded mount:
Mar 23 13:09:22 server1 kernel: [  665.197957] BTRFS info (device sdb4): allowing degraded mounts
Mar 23 13:09:22 server1 kernel: [  665.198030] BTRFS info (device sdb4): disk space caching is enabled
Mar 23 13:09:22 server1 kernel: [
On Mon, Mar 23, 2015 at 8:19 AM, Tomasz Chmielewski t...@virtall.com
wrote:
Got this with 4.0.0-rc5 when doing a degraded mount:
Mar 23 13:09:22 server1 kernel: [  665.197957] BTRFS info (device sdb4): allowing degraded mounts
Mar 23 13:09:22 server1 kernel: [  665.198030] BTRFS info
On 03/23/2015 02:50 PM, Torbjørn wrote:
On 23. mars 2015 14:47, Chris Mason wrote:
On Mon, Mar 23, 2015 at 8:53 AM, Torbjørn Skagestad
torbj...@itpas.no wrote:
On 23. mars 2015 13:36, Filipe David Manana wrote:
On Mon, Mar 23, 2015 at 10:35 AM, Torbjørn li...@skagestad.org
wrote:
Hi,
After
While committing a transaction we free the log roots before we write the
new super block. Freeing the log roots implies marking the disk location
of every node/leaf (metadata extent) as pinned before the new super block
is written. This is to prevent the disk location of log metadata extents
from
On Mon, Mar 23, 2015 at 8:35 AM, Chris Mason c...@fb.com wrote:
On Mon, Mar 23, 2015 at 8:19 AM, Tomasz Chmielewski t...@virtall.com
wrote:
Got this with 4.0.0-rc5 when doing a degraded mount:
Do you get this every time, even after going back to rc4?
It should be caused by this commit,
On 23. mars 2015 14:47, Chris Mason wrote:
On Mon, Mar 23, 2015 at 8:53 AM, Torbjørn Skagestad
torbj...@itpas.no wrote:
On 23. mars 2015 13:36, Filipe David Manana wrote:
On Mon, Mar 23, 2015 at 10:35 AM, Torbjørn li...@skagestad.org wrote:
Hi,
After upgrading to 4.0.0-rc5 from 4.0.0-rc4 I
On Mon, Mar 23, 2015 at 8:53 AM, Torbjørn Skagestad
torbj...@itpas.no wrote:
On 23. mars 2015 13:36, Filipe David Manana wrote:
On Mon, Mar 23, 2015 at 10:35 AM, Torbjørn li...@skagestad.org
wrote:
Hi,
After upgrading to 4.0.0-rc5 from 4.0.0-rc4 I see "Object already exists"
after reboot.
The
On 2015-03-23 22:48, Chris Mason wrote:
On Mon, Mar 23, 2015 at 8:35 AM, Chris Mason c...@fb.com wrote:
On Mon, Mar 23, 2015 at 8:19 AM, Tomasz Chmielewski t...@virtall.com
wrote:
Got this with 4.0.0-rc5 when doing a degraded mount:
Do you get this every time, even after going back to rc4?
On Fri, Mar 20, 2015 at 02:09:45AM +0100, Sebastian Thorarensen wrote:
Changes since v1:
* Split patch into smaller patches
* btrfs-convert and mkfs now share check_node_or_leaf_size
* Rebased onto latest v3.19.x
Thanks! Minor changes: I've fixed the mkfs build failure and added the
btrfs_ prefix
On Mon, Mar 23, 2015 at 11:33 AM, Tomasz Chmielewski t...@virtall.com
wrote:
On 2015-03-23 22:48, Chris Mason wrote:
On Mon, Mar 23, 2015 at 8:35 AM, Chris Mason c...@fb.com wrote:
On Mon, Mar 23, 2015 at 8:19 AM, Tomasz Chmielewski
t...@virtall.com wrote:
Got this with 4.0.0-rc5 when
On Mon, Mar 23, 2015 at 9:22 AM, Rich Freeman
r-bt...@thefreemanclan.net wrote:
I'm having a similar problem. I'm getting some kind of btrfs
corruption that causes a panic/reboot, and then the initramfs won't
mount root for 3.18.9, but it will mount it for 3.18.8.
Running on 3.18.8
I can't tell if this is a kvm virtio blk device regression, with
cache=none and cache=directsync, or if it's a Btrfs regression.
The summary is that on a host using (Fedora) kernel 3.18.9, 3.19.2, or
any 4.0.0 kernel, with qcow2 on Btrfs, and either cache=none or
directsync, the guest Linux OS
Add the nolock option for btrfs_find_all_roots().
This will allow btrfs_find_all_roots() to be called in
btrfs_qgroup_record_ref(), which will provide the basis for coming
qgroup patches.
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
---
fs/btrfs/backref.c | 28 ++--
One of the problems in the old qgroup code is that we can only get a view
of the final results when we are about to adjust qgroup accounting.
This makes the following operation get a wrong result:
1. Subvol 257 add an exclusive extent A.
2. Subvol 258 add a shared reference to extent A.
3. Subvol 259 add a shared
This provides the basis for a later implementation to determine whether a
given root has a reference on a given extent before the delayed_ref operation.
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
---
fs/btrfs/backref.c | 23 ++-
1 file changed, 14 insertions(+), 9 deletions(-)
diff
This feature is used in incoming qgroup patches to resolve whether a
given root has a reference to an extent before the delayed_ref operation.
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
---
fs/btrfs/backref.c | 21 +
fs/btrfs/backref.h | 6 --
fs/btrfs/ioctl.c | 2
For the possible shared extent accounting case, call btrfs_find_all_roots()
before we write backref data into the extent tree, to find exactly how many
roots are referring to the extent as old_roots.
And pass it to btrfs_qgroup_record_ref() for later operations.
Signed-off-by: Qu Wenruo
1) Use accurate old/new_roots in btrfs_qgroup_operation.
The old implementation uses find_all_roots() to get the roots referring
to a given bytenr.
But the problem is, at the time of btrfs_delayed_qgroup_accounting(),
it's too late and we can only get the final result of all delayed_ref
operations.
Thanks to
[BUG]
https://patchwork.kernel.org/patch/6015791/
The above test case shows a bug caused by incorrect old/new_roots.
The incorrect old/new_roots are in turn caused by calling
btrfs_find_all_roots() at the wrong time.
[FIX]
This patchset tries to fix it using a new routine for recording delayed
ref
Use inline functions to do such things, to improve readability.
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
Acked-by: David Sterba dste...@suse.cz
---
v2:
Changed naming to btrfs_qgroup_(update|get)_(old|new)_refcnt.
Don't use a central qgroup_(get|update)_refcnt function; put the code directly into
Clean up the unneeded code that exists only for special cases.
Since the new, more generic but simpler code is already here, use
it.
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
---
It feels so good just pressing 'd' in vim!!!
---
fs/btrfs/qgroup.c | 249
__btrfs_inc_extent_ref() and __btrfs_free_extent() already have too
many parameters, but three of them can be extracted from the
btrfs_delayed_ref_node struct.
So use the btrfs_delayed_ref_node struct as a single parameter to replace
the bytenr/num_byte/no_quota parameters.
The real objective of
Add the following members to struct btrfs_qgroup_operation:
'old_roots' ulist
Records the roots found before the delayed ref operation.
'new_roots' ulist
Records the roots found after the delayed ref operation.
Add the following parameters for btrfs_qgroup_record_ref():
'old_roots' ulist
Since the new qgroup design needs to get old_roots before calling
btrfs_qgroup_record_ref(), modify the qgroup test to follow the new routine.
And of course, without this modification, it won't pass the qgroup
multi-ref test.
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
---
fs/btrfs/backref.c
Hi,
After upgrading to 4.0.0-rc5 from 4.0.0-rc4 I see "Object already
exists" after reboot.
The fs is forced read only.
The error does not disappear after additional reboots.
mount|grep sda1
/dev/sda1 on / type btrfs (rw,noatime,compress=lzo,space_cache,subvol=@)
/dev/sda1 on /home type btrfs
Do you still have the problem? Can you please confirm on the latest btrfs?
Since I am fixing the devices part of btrfs, I am a bit nervous.
Thanks, Anand
On 03/20/2015 07:06 AM, G. Richard Bellamy wrote:
When I upgrade to the 3.19.2 kernel, I get a deadlocked boot:
INFO: task mount:302
On Mon, Mar 23, 2015 at 02:01:41PM -0600, Chris Murphy wrote:
I can't tell if this is a kvm virtio blk device regression, with
cache=none and cache=directsync, or if it's a Btrfs regression.
The summary is that on a host using (Fedora) kernel 3.18.9, 3.19.2, or
any 4.0.0 kernel, with qcow2
The private_data member of the Btrfs control device file
(/dev/btrfs-control) is used to hold the current transaction and needs
to be initialized to NULL to signify that no transaction is in progress.
We explicitly set the control file's private_data to NULL to be
independent of whatever value
On Thu, Mar 19, 2015 at 04:31:08PM -0400, Josef Bacik wrote:
[..]
+ * We log writes only after they have been flushed, this makes the log describe
+ * close to the order in which the data hits the actual disk, not its cache. So
+ * for example the following sequence (W means write, C
As titled:
Does btrfs have dedup (on raid1 multiple disks) that can be enabled?
Can anyone relate any experiences?
Is there (or will there be) a bad fragmentation penalty?
(For kernel 3.18.9)
Thanks,
Martin
On Fri, Mar 20, 2015 at 02:02:09PM -0400, Jeff Mahoney wrote:
Orphans in the fs tree are cleaned up via open_ctree and subvolume
orphans are cleaned via btrfs_lookup_dentry -- except when a default
subvolume is in use. The name for the default subvolume uses a manual
lookup that doesn't
Original Message
Subject: Re: [PATCH 0/7] btrfs-progs: qgroup related enhance.
From: David Sterba dste...@suse.cz
To: Qu Wenruo quwen...@cn.fujitsu.com
Date: 2015-03-24 07:38
On Fri, Feb 27, 2015 at 04:26:32PM +0800, Qu Wenruo wrote:
Qu Wenruo (7):
btrfs-progs: Update
On Mon, Mar 23, 2015 at 11:10:46PM +, Martin wrote:
As titled:
Does btrfs have dedup (on raid1 multiple disks) that can be enabled?
The current state of play is on the wiki:
https://btrfs.wiki.kernel.org/index.php/Deduplication
Can anyone relate any experiences?
duperemove is
On Fri, Feb 27, 2015 at 04:26:32PM +0800, Qu Wenruo wrote:
Qu Wenruo (7):
btrfs-progs: Update qgroup status flags and replace qgroup
level/subvid calculation with inline function.
btrfs-progs: Allow btrfs-debug-tree to print human readable qgroup
status flag.
FYI, I've added the following patches to 3.19 queue
* btrfs-progs: return the fsid from make_btrfs()
* btrfs-progs: add strdup in btrfs_add_to_fsid() to track the device path
* btrfs-progs: add verbose option to btrfs_add_to_fsid()
* btrfs-progs: add -v and -q switches in the mkfs.btrfs man
Original Message
Subject: Error: btrfs check --repair --init-csum-tree --init-extent-tree
From: Pavol Cupka pavol.cu...@gmail.com
To: linux-btrfs@vger.kernel.org
Date: 2015-03-21 20:26
When running btrfs check on a RAID1 system using btrfs-progs v3.19-rc2
and running the