Re: BTRFS Deduplication

2017-09-10 Thread Qu Wenruo
On 2017-09-11 14:05, shally verma wrote: I was going through the BTRFS Deduplication page (https://btrfs.wiki.kernel.org/index.php/Deduplication) and I read "As such, xfs_io, is able to perform deduplication on a BTRFS file system". Following this, I went on to the xfs_io link https://linux.

[PATCH v2 1/7] btrfs-progs: Refactor find_next_chunk() to get rid of parameter root and objectid

2017-09-10 Thread Qu Wenruo
Function find_next_chunk() is used to find the next chunk start position; it should only search the chunk tree, and its objectid is fixed to BTRFS_FIRST_CHUNK_TREE_OBJECTID. So refactor the parameter list to get rid of @root, which should be fetched from fs_info->chunk_root, and @objectid, which is fixed
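
For context, a parameter-list refactor of this kind typically looks as follows; the prototypes below are an illustrative sketch, not the literal btrfs-progs diff:

    /* Signature sketch only, not the actual patch. */
    typedef unsigned long long u64;
    struct btrfs_fs_info;

    /* Before (illustrative): callers passed a root and an objectid that were
     * in practice always the same:
     *     static int find_next_chunk(struct btrfs_root *root, u64 objectid, u64 *offset);
     */

    /* After: take fs_info, use fs_info->chunk_root internally, and hard-code
     * BTRFS_FIRST_CHUNK_TREE_OBJECTID, since the search is always done on the
     * chunk tree. */
    static int find_next_chunk(struct btrfs_fs_info *fs_info, u64 *offset);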

[PATCH v2 7/7] btrfs-progs: Doc/mkfs: Add extra condition for rootdir option

2017-09-10 Thread Qu Wenruo
Add extra limitations explained for the --rootdir option, including: 1) Size limitation: I decided to follow the "mkfs.ext4 -d" behavior, so the user is responsible for making sure the block device/file is large enough. 2) Read permission: if the user can't read the content, mkfs will just fail. So us

[PATCH v2 6/7] btrfs-progs: mkfs: Workaround BUG_ON caused by rootdir option

2017-09-10 Thread Qu Wenruo
The --rootdir option will start a transaction to fill the fs; however, if something goes wrong, from ENOSPC to lack of permission, we won't commit the transaction, and the uncommitted transaction triggers a BUG_ON: -- extent buffer leak: start 29392896 len 16384 extent_io.c:579: free_extent_buffer: BU

[PATCH v2 4/7] btrfs-progs: mkfs: Update allocation info before verbose output

2017-09-10 Thread Qu Wenruo
Since the new --rootdir can allocate chunks, it will modify the chunk allocation result. This patch updates the allocation info before the verbose output to reflect this. Signed-off-by: Qu Wenruo --- mkfs/main.c | 33 + 1 file changed, 33 insertions(+) diff --git a/

[PATCH v2 3/7] btrfs-progs: mkfs: Rework rootdir option to avoid custom chunk layout

2017-09-10 Thread Qu Wenruo
mkfs.btrfs --rootdir uses its own custom chunk layout. This provides the possibility of limiting the filesystem to a minimal size. However, this custom chunk allocation has several problems. The most obvious problem is that it allocates chunks from device offset 0. Both the kernel and normal mkfs will

[PATCH v2 5/7] btrfs-progs: Avoid BUG_ON for chunk allocation when ENOSPC happens

2017-09-10 Thread Qu Wenruo
When passing a directory larger than the block device using the --rootdir parameter, we get the following backtrace: -- extent-tree.c:2693: btrfs_reserve_extent: BUG_ON `ret` triggered, value -28 ./mkfs.btrfs(+0x1a05d)[0x557939e6b05d] ./mkfs.btrfs(btrfs_reserve_extent+0xb5a)[0x557939e710c8] ./mkfs.btrfs

[PATCH v2 2/7] btrfs-progs: Fix one-byte overlap bug in free_block_group_cache

2017-09-10 Thread Qu Wenruo
free_block_group_cache() calls clear_extent_bits() with the wrong end, one byte larger than the correct range. This causes the next adjacent cache state to be split, and due to the split, the private pointer (which points to the block group cache) is reset to NULL. This is very hard to detect
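
The underlying convention is that extent io tree ranges are inclusive on both ends, so a block group spanning [start, start + len) must be cleared only up to start + len - 1. A minimal sketch of the wrong versus the correct call, with identifiers that approximate rather than reproduce the btrfs-progs code:

    /* Illustrative fragment; identifiers approximate the btrfs-progs code. */
    u64 start = cache->key.objectid;
    u64 len = cache->key.offset;

    /* Wrong: the extra byte touches the next adjacent cached range, splitting
     * its state and resetting its private pointer (the block group cache) to NULL. */
    clear_extent_bits(tree, start, start + len, EXTENT_DIRTY);

    /* Correct: the end is inclusive, so stop one byte earlier. */
    clear_extent_bits(tree, start, start + len - 1, EXTENT_DIRTY);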

[PATCH v2 0/7] Mkfs: Rework --rootdir to a more generic behavior

2017-09-10 Thread Qu Wenruo
mkfs.btrfs --rootdir provides users a method to generate a btrfs filesystem with pre-written content, without needing root privilege to mount the fs. However, the code is quite old and hasn't gotten much review or testing. This leads to some strange behavior, from customized chunk allocation (which uses the res

BTRFS Deduplication

2017-09-10 Thread shally verma
I was going through the BTRFS Deduplication page (https://btrfs.wiki.kernel.org/index.php/Deduplication) and I read "As such, xfs_io, is able to perform deduplication on a BTRFS file system". Following this, I went on to the xfs_io link https://linux.die.net/man/8/xfs_io. As I understand, these a
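
For background, xfs_io's dedupe command is a front end for the kernel's FIDEDUPERANGE ioctl, which btrfs implements, so any tool can issue the same request. A minimal sketch of calling it directly (offsets and length are assumed to meet the filesystem's alignment rules; error handling is abbreviated):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* FIDEDUPERANGE, struct file_dedupe_range */

    /* Ask the kernel to deduplicate `len` bytes at offset 0 of src against
     * offset 0 of dst; the data must already be identical. */
    int dedupe_range(const char *src, const char *dst, __u64 len)
    {
            int src_fd = open(src, O_RDONLY);
            int dst_fd = open(dst, O_RDWR);
            struct file_dedupe_range *arg;
            int ret;

            if (src_fd < 0 || dst_fd < 0)
                    return -1;

            arg = calloc(1, sizeof(*arg) + sizeof(struct file_dedupe_range_info));
            if (!arg)
                    return -1;
            arg->src_offset = 0;
            arg->src_length = len;
            arg->dest_count = 1;
            arg->info[0].dest_fd = dst_fd;
            arg->info[0].dest_offset = 0;

            ret = ioctl(src_fd, FIDEDUPERANGE, arg);  /* issued on the source fd */
            if (ret == 0 && arg->info[0].status == FILE_DEDUPE_RANGE_SAME)
                    printf("deduped %llu bytes\n",
                           (unsigned long long)arg->info[0].bytes_deduped);

            free(arg);
            close(src_fd);
            close(dst_fd);
            return ret;
    }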

Re: btrfs_remove_chunk call trace?

2017-09-10 Thread Rich Rauenzahn
...and can it be related to the Samsung 840 SSDs not supporting NCQ Trim? (Although I can't tell which device this trace is from -- it could be a mechanical Western Digital.) On Sun, Sep 10, 2017 at 10:16 PM, Rich Rauenzahn wrote: > Is this something to be concerned about? > > I'm running the l

[PATCH] btrfs-progs: Output time elapsed for each major tree it checked

2017-09-10 Thread Qu Wenruo
Marc reported that "btrfs check --repair" runs much faster than "btrfs check", which is quite weird. This patch adds the time elapsed for each major tree checked, for both original mode and lowmem mode, so we can get a clue about what is going wrong. Reported-by: Marc MERLIN Signed-off-by: Qu Wenru
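
A sketch of the kind of instrumentation this describes: wall-clock timing around each per-tree check, printed when the check finishes. The wrapper below is illustrative, not the actual patch:

    #include <stdio.h>
    #include <time.h>

    /* Illustrative: time a single tree-check routine and report the result. */
    static void timed_check(const char *tree_name, int (*check_fn)(void))
    {
            struct timespec start, end;
            double elapsed;

            clock_gettime(CLOCK_MONOTONIC, &start);
            check_fn();
            clock_gettime(CLOCK_MONOTONIC, &end);

            elapsed = (end.tv_sec - start.tv_sec) +
                      (end.tv_nsec - start.tv_nsec) / 1e9;
            printf("checked %s in %.2f seconds\n", tree_name, elapsed);
    }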

Re: Regarding handling of file renames in Btrfs

2017-09-10 Thread Qu Wenruo
On 2017-09-10 22:34, Martin Raiber wrote: Hi, On 10.09.2017 08:45 Qu Wenruo wrote: On 2017-09-10 14:41, Qu Wenruo wrote: On 2017-09-10 07:50, Rohan Kadekodi wrote: Hello, I was trying to understand how file renames are handled in Btrfs. I read the code documentation, but had a probl

[PATCH] btrfs-progs: update btrfs-completion

2017-09-10 Thread Misono, Tomohiro
This patch updates btrfs-completion: - add "filesystem du" and "rescue zero-log" - restrict _btrfs_mnts to show the btrfs type only - add more completions in the last case statements (This file contains both spaces and tabs and may need cleanup.) Signed-off-by: Tomohiro Misono --- btrfs-completion | 43

btrfs_remove_chunk call trace?

2017-09-10 Thread Rich Rauenzahn
Is this something to be concerned about? I'm running the latest mainline kernel on CentOS 7. [ 1338.882288] [ cut here ] [ 1338.883058] WARNING: CPU: 2 PID: 790 at fs/btrfs/ctree.h:1559 btrfs_update_device+0x1c5/0x1d0 [btrfs] [ 1338.883809] Modules linked in: xt_nat veth i

Re: Help me understand what is going on with my RAID1 FS

2017-09-10 Thread Andrei Borzenkov
10.09.2017 23:17, Dmitrii Tcvetkov writes:
>>> Drive1   Drive2   Drive3
>>> X        X
>>> X                 X
>>>          X        X
>>>
>>> Where X is a chunk of a raid1 block group.
>>
>> But this table clearly shows that adding a third drive increases free
>> space by 50%.

Re: Regarding handling of file renames in Btrfs

2017-09-10 Thread Qu Wenruo
On 2017-09-10 22:32, Rohan Kadekodi wrote: Thank you for the prompt and elaborate answers! However, I think I was unclear in my questions, and I apologize for the confusion. What I meant was that for a file rename, when I check the blktrace output, there are 2 writes of 256KB each starting fr

Re: BTRFS: error (device dm-2) in btrfs_run_delayed_refs:2960: errno=-17 Object already exists (since 3.4 / 2012)

2017-09-10 Thread Marc MERLIN
On Sun, Sep 10, 2017 at 01:16:26PM +, Josef Bacik wrote: > Great, if the free space cache is fucked again after the next go > around then I need to expand the verifier to watch entries being added > to the cache as well. Thanks, Well, I copied about 1TB of data, and nothing happened. So it se

Re: Help me understand what is going on with my RAID1 FS

2017-09-10 Thread Duncan
FLJ posted on Sun, 10 Sep 2017 15:45:42 +0200 as excerpted: > I have a BTRFS RAID1 volume running for the past year. I avoided all > pitfalls known to me that would mess up this volume. I never > experimented with quotas, no-COW, snapshots, defrag, nothing really. > The volume is a RAID1 from day

Re: Help me understand what is going on with my RAID1 FS

2017-09-10 Thread Kai Krakow
On Sun, 10 Sep 2017 20:15:52 +0200, Ferenc-Levente Juhos wrote: > >The problem is that each raid1 block group contains two chunks on two > >separate devices; it can't fully utilize three devices no matter > >what. If that doesn't suit you then you need to add a 4th disk. After > >that the FS will be able

Re: Help me understand what is going on with my RAID1 FS

2017-09-10 Thread Dmitrii Tcvetkov
> > Drive1   Drive2   Drive3
> > X        X
> > X                 X
> >          X        X
> >
> > Where X is a chunk of a raid1 block group.
>
> But this table clearly shows that adding a third drive increases free
> space by 50%.

You need to reallocate data to actually mak

Re: Help me understand what is going on with my RAID1 FS

2017-09-10 Thread Andrei Borzenkov
10.09.2017 19:11, Dmitrii Tcvetkov writes: >> Actually, based on http://carfax.org.uk/btrfs-usage/index.html I >> would've expected 6 TB of usable space. Here I get 6.4, which is odd, >> but that only 1.5 TB is available is even stranger. >> >> Could anyone explain what I did wrong or why my expectati

Re: Help me understand what is going on with my RAID1 FS

2017-09-10 Thread Andrei Borzenkov
10.09.2017 18:47, Kai Krakow writes: > On Sun, 10 Sep 2017 15:45:42 +0200, FLJ wrote: > >> Hello all, >> >> I have a BTRFS RAID1 volume running for the past year. I avoided all >> pitfalls known to me that would mess up this volume. I never >> experimented with quotas, no-COW, snapshots, defrag

Re: Help me understand what is going on with my RAID1 FS

2017-09-10 Thread Ferenc-Levente Juhos
>The problem is that each raid1 block group contains two chunks on two >separate devices; it can't fully utilize three devices no matter what. >If that doesn't suit you then you need to add a 4th disk. After >that the FS will be able to use all unallocated space on all disks in the raid1 >profile. But even then

Re: Help me understand what is going on with my RAID1 FS

2017-09-10 Thread Dmitrii Tcvetkov
> @Kai and Dmitrii > Thank you for your explanations. If I understand you correctly, you're > saying that btrfs makes no attempt to "optimally" use the physical > devices it has in the FS; once a new RAID1 block group needs to be > allocated, it will semi-randomly pick two devices with enough space a

Re: Help me understand what is going on with my RAID1 FS

2017-09-10 Thread Ferenc-Levente Juhos
@Kai and Dmitrii, thank you for your explanations. If I understand you correctly, you're saying that btrfs makes no attempt to "optimally" use the physical devices it has in the FS; once a new RAID1 block group needs to be allocated, it will semi-randomly pick two devices with enough space and allocat

Re: Help me understand what is going on with my RAID1 FS

2017-09-10 Thread Dmitrii Tcvetkov
>Actually, based on http://carfax.org.uk/btrfs-usage/index.html I >would've expected 6 TB of usable space. Here I get 6.4, which is odd, >but that only 1.5 TB is available is even stranger. > >Could anyone explain what I did wrong or why my expectations are wrong? > >Thank you in advance. I'd say df
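
For reference, the calculator at that link boils raid1 down to a simple rule: every block group needs room on two different devices, so usable space is capped either at half the total or at what the largest device can be paired against. A rough sketch of that estimate with hypothetical device sizes (three 4 TB disks, which would yield the 6 TB figure mentioned above):

    #include <stdio.h>

    int main(void)
    {
            /* Hypothetical sizes in GB, not the reporter's actual devices. */
            unsigned long long dev[] = { 4000, 4000, 4000 };
            unsigned long long total = 0, largest = 0;

            for (int i = 0; i < 3; i++) {
                    total += dev[i];
                    if (dev[i] > largest)
                            largest = dev[i];
            }

            /* raid1: either mirroring halves the total, or the largest device
             * runs out of partners first. */
            unsigned long long usable = (total - largest < largest)
                                        ? total - largest
                                        : total / 2;

            printf("estimated raid1 usable space: %llu GB\n", usable);
            return 0;
    }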

Re: Help me understand what is going on with my RAID1 FS

2017-09-10 Thread Kai Krakow
On Sun, 10 Sep 2017 15:45:42 +0200, FLJ wrote: > Hello all, > > I have a BTRFS RAID1 volume running for the past year. I avoided all > pitfalls known to me that would mess up this volume. I never > experimented with quotas, no-COW, snapshots, defrag, nothing really. > The volume is a RAID1 from

Re: netapp-alike snapshots?

2017-09-10 Thread Marc MERLIN
On Sat, Sep 09, 2017 at 10:43:16PM +0300, Andrei Borzenkov wrote: > 09.09.2017 16:44, Ulli Horlacher writes: > > > > Your tool does not create .snapshot subdirectories in EVERY directory like > Neither does NetApp. Those "directories" are magic handles that do not > really exist. Correct, than

Re: Regarding handling of file renames in Btrfs

2017-09-10 Thread Martin Raiber
Hi, On 10.09.2017 08:45 Qu Wenruo wrote: > > > On 2017-09-10 14:41, Qu Wenruo wrote: >> >> >> On 2017-09-10 07:50, Rohan Kadekodi wrote: >>> Hello, >>> >>> I was trying to understand how file renames are handled in Btrfs. I >>> read the code documentation, but had a problem understanding a few >

Re: Regarding handling of file renames in Btrfs

2017-09-10 Thread Rohan Kadekodi
Thank you for the prompt and elaborate answers! However, I think I was unclear in my questions, and I apologize for the confusion. What I meant was that for a file rename, when I check the blktrace output, there are 2 writes of 256KB each starting from byte number 13373440. When I check btrfs-deb

Re: generic name for volume and subvolume root?

2017-09-10 Thread Peter Grandi
> As I am writing some documentation about creating snapshots: > Is there a generic name for both volume and subvolume root? Yes, it is from the UNIX side 'root directory' and from the Btrfs side 'subvolume'. Like some other things in Btrfs, its terminology is often inconsistent, but "volume" *usual

Help me understand what is going on with my RAID1 FS

2017-09-10 Thread FLJ
Hello all, I have a BTRFS RAID1 volume running for the past year. I avoided all pitfalls known to me that would mess up this volume. I never experimented with quotas, no-COW, snapshots, defrag, nothing really. The volume has been RAID1 from day 1 and has been working reliably until now. Until yesterday it

Re: btrfs check --repair now runs in minutes instead of hours? aborting

2017-09-10 Thread Marc MERLIN
On Sun, Sep 10, 2017 at 02:01:58PM +0800, Qu Wenruo wrote: > > > On 2017-09-10 01:44, Marc MERLIN wrote: > > So, should I assume that btrfs-progs git has some issue, since there is > > no plausible way that a check --repair should be faster than a regular > > check? > > Yes, the assumption that

Re: BTRFS: error (device dm-2) in btrfs_run_delayed_refs:2960: errno=-17 Object already exists (since 3.4 / 2012)

2017-09-10 Thread Josef Bacik
Great, if the free space cache is fucked again after the next go around then I need to expand the verifier to watch entries being added to the cache as well. Thanks, Josef Sent from my iPhone > On Sep 10, 2017, at 9:14 AM, Marc MERLIN wrote: > >> On Sun, Sep 10, 2017 at 03:12:16AM +, Jo

Re: BTRFS: error (device dm-2) in btrfs_run_delayed_refs:2960: errno=-17 Object already exists (since 3.4 / 2012)

2017-09-10 Thread Marc MERLIN
On Sun, Sep 10, 2017 at 03:12:16AM +, Josef Bacik wrote: > Ok mount -o clear_cache, umount and run fsck again just to make sure. Then > if it comes out clean mount with ref_verify again and wait for it to blow up > again. Thanks, Ok, just did the 2nd fsck, came back clean after mount -o c

Re: [PATCH] btrfs: tests: Fix a memory leak in error handling path in 'run_test()'

2017-09-10 Thread Qu Wenruo
On 2017-09-10 19:19, Christophe JAILLET wrote: If 'btrfs_alloc_path()' fails, we must free the resources already allocated, as done in the other error handling paths in this function. Signed-off-by: Christophe JAILLET Reviewed-by: Qu Wenruo BTW, I also checked all btrfs_alloc_path() calls in s

[PATCH] btrfs: tests: Fix a memory leak in error handling path in 'run_test()'

2017-09-10 Thread Christophe JAILLET
If 'btrfs_alloc_path()' fails, we must free the resources already allocated, as done in the other error handling paths in this function. Signed-off-by: Christophe JAILLET --- fs/btrfs/tests/free-space-tree-tests.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/fs/btrfs/tes
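
The pattern being fixed is the usual goto-based cleanup in the btrfs selftests: a late allocation failure must release everything allocated before it rather than returning directly. A hedged sketch of that shape, with illustrative names rather than the literal diff:

    /* Illustrative error-path shape; not the literal free-space-tree-tests.c diff. */
    static int run_test_sketch(void)
    {
            struct btrfs_path *path = NULL;
            int ret;

            /* ... earlier allocations (dummy root, block group, ...) succeed ... */

            path = btrfs_alloc_path();
            if (!path) {
                    ret = -ENOMEM;
                    goto out;       /* release the earlier resources too */
            }

            /* ... test body ... */
            ret = 0;
    out:
            btrfs_free_path(path);  /* tolerates NULL */
            /* ... free the resources allocated before the path ... */
            return ret;
    }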

Re: Please help with exact actions for raid1 hot-swap

2017-09-10 Thread Patrik Lundquist
On 10 September 2017 at 08:33, Marat Khalili wrote: > It doesn't need the replaced disk to be readable, right? Only enough to be mountable, which it already is, so your read errors on /dev/sdb aren't a problem. > Then what prevents the same procedure from working without a spare bay? It is basically the same

Re: netapp-alike snapshots?

2017-09-10 Thread A L
Perhaps NetApp is using a VFS overlay. There is really only one snapshot, but it is shown in the overlay on every folder. Kind of the same with Samba Shadow Copies. From: Ulli Horlacher -- Sent: 2017-09-09 21:52 > On Sat 2017-09-09 (22:43), Andrei Borzenkov wrote: > >> > Your tool