Best strategy to remove devices from pool

2017-10-17 Thread Cloud Admin
Hi,
I want to remove two devices from a BTRFS RAID 1 pool. There should be
enough free space to do it, but what is the best strategy? Should I remove
both devices in one call, 'btrfs dev rem /dev/sda1 /dev/sdb1' (for example),
or is it better to do it in two separate calls? Which is faster? Are there
other constraints to think about?
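For illustration, the two variants in question would be roughly the following
(a sketch only; /mnt/pool stands in for the real mount point and the device
names are just examples):

  # variant 1: remove both devices in a single call
  btrfs device remove /dev/sda1 /dev/sdb1 /mnt/pool

  # variant 2: remove them one after the other
  btrfs device remove /dev/sda1 /mnt/pool
  btrfs device remove /dev/sdb1 /mnt/pool

Either way, the data on a device is migrated off it before the device is
actually removed.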
Bye
Frank


WARNING: ... at fs/btrfs/ctree.h:1559 btrfs_update_device+0x1be/0x1d0 [btrfs]

2017-10-09 Thread Cloud Admin
Hi,
I updated the kernel from 4.11.10 to 4.13.4 and since then I get the following
messages in the kernel journal when calling 'scrub' or 'balance'. I use Fedora 26
with btrfs-progs v4.9.1.
What does this mean and (more importantly) what can I do?
Bye
Frank

BTRFS info (device dm-7): relocating block group 44050690342912 flags system|raid1
BTRFS info (device dm-7): found 117 extents
[ cut here ]
WARNING: CPU: 3 PID: 22095 at fs/btrfs/ctree.h:1559 btrfs_update_device+0x1be/0x1d0 [btrfs]
Modules linked in: rpcsec_gss_krb5 veth xt_nat xt_addrtype br_netfilter 
xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 tun nf_conntrack_sane xt_CT 
ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink 
ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 
nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security 
iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack 
libcrc32c iptable_mangle iptable_raw iptable_security ebtable_filter ebtables 
ip6table_filter ip6_tables ftsteutates btrfs xor raid6_pq tda18212 cxd2841er 
tpm_crb intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm 
iTCO_wdt irqbypass iTCO_vendor_support intel_cstate intel_uncore mei_wdt 
intel_rapl_perf ppdev hci_uart ddbridge dvb_core btbcm btqca btintel mei_me 
i2c_i801 shpchp bluetooth joydev mei intel_pch_thermal wmi intel_lpss_acpi 
intel_lpss pinctrl_sunrisepoint fujitsu_laptop parport_pc sparse_keymap 
ecdh_generic parport tpm_tis pinctrl_intel tpm_tis_core rfkill tpm acpi_pad 
nfsd auth_rpcgss nfs_acl lockd grace sunrpc dm_crypt i915 crct10dif_pclmul 
i2c_algo_bit crc32_pclmul drm_kms_helper crc32c_intel e1000e drm 
ghash_clmulni_intel serio_raw ptp pps_core video i2c_hid
CPU: 3 PID: 22095 Comm: btrfs Tainted: GW   4.13.4-200.fc26.x86_64 #1
Hardware name: FUJITSU D3417-B1/D3417-B1, BIOS V5.0.0.11 R1.12.0 for D3417-B1x 02/09/2016
task: 8ecb59b026c0 task.stack: b805cae9
RIP: 0010:btrfs_update_device+0x1be/0x1d0 [btrfs]
RSP: 0018:b805cae93ac8 EFLAGS: 00010206
RAX: 0fff RBX: 8ed094bb11c0 RCX: 074702251e00
RDX: 0004 RSI: 3efa RDI: 8ec9eb032c08
RBP: b805cae93b10 R08: 3efe R09: b805cae93a80
R10: 1000 R11: 0003 R12: 8ed0c7a3f000
R13:  R14: 3eda R15: 8ec9eb032c08
FS:  7f10b256a8c0() GS:8ed0ee4c() knlGS:
CS:  0010 DS:  ES:  CR0: 80050033
CR2: 7f6261c0d5f0 CR3: 0005cad4c000 CR4: 003406e0
DR0:  DR1:  DR2: 
DR3:  DR6: fffe0ff0 DR7: 0400
Call Trace:
 btrfs_remove_chunk+0x365/0x870 [btrfs]
 btrfs_relocate_chunk+0x7e/0xd0 [btrfs]
 btrfs_balance+0xc07/0x1390 [btrfs]
 btrfs_ioctl_balance+0x319/0x380 [btrfs]
 btrfs_ioctl+0x9d5/0x24a0 [btrfs]
 ? lru_cache_add+0x3a/0x80
 ? lru_cache_add_active_or_unevictable+0x4c/0xf0
 ? __handle_mm_fault+0x939/0x10b0
 do_vfs_ioctl+0xa5/0x600
 ? do_brk_flags+0x230/0x360
 ? do_vfs_ioctl+0xa5/0x600
 SyS_ioctl+0x79/0x90
 entry_SYSCALL_64_fastpath+0x1a/0xa5
RIP: 0033:0x7f10b15e65e7
RSP: 002b:7ffc9402ebe8 EFLAGS: 0246 ORIG_RAX: 0010
RAX: ffda RBX: 8041 RCX: 7f10b15e65e7
RDX: 7ffc9402ec80 RSI: c4009420 RDI: 0003
RBP: 7f10b18abae0 R08: 55b07a3b3010 R09: 0078
R10: 7f10b18abb38 R11: 0246 R12: 
R13: 7f10b18abb38 R14: 8060 R15: 
Code: 4c 89 ff 45 31 c0 ba 10 00 00 00 4c 89 f6 e8 fa 20 ff ff 4c 89 ff e8 72 
ef fc ff e9 d3 fe ff ff 41 bd f4 ff ff ff e9 d0 fe ff ff <0f> ff eb b6 e8 39 fd 
78 c6 66 0f 1f 84 00 00 00 00 00 0f 1f 44 
---[ end trace d1e1c8aff99bfeb8 ]---


How to disable/revoke 'compression'?

2017-09-03 Thread Cloud Admin
Hi,
I used the 'compress' mount option on some mounted subvolumes. How
can I revoke the compression? That is, remove the option and get all
data on this volume stored uncompressed again.
Is it enough to remount the subvolume without this option, or is some
additional step (balancing?) necessary to get the already stored data
uncompressed? Besides that, is it possible to find out the real
and compressed size of a file, for example, or the compression ratio?
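For illustration, one commonly suggested sequence is sketched below (untested;
/mnt/subvol is a placeholder, and the options should be double-checked against
btrfs(5)/btrfs-filesystem(8) for the kernel and progs version in use):

  # stop compressing new writes (newer kernels accept an explicit compress=no)
  mount -o remount,compress=no /mnt/subvol

  # rewrite existing files so they end up stored uncompressed again
  # (note: defragmenting breaks shared extents with snapshots)
  btrfs filesystem defragment -r /mnt/subvol

  # compare on-disk vs. uncompressed size and the compression ratio
  # (compsize is a separate small tool, not part of btrfs-progs)
  compsize /mnt/subvol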
Bye
   Frank


Replace missing disc => strange result!?

2017-08-10 Thread Cloud Admin
Hi,
I had a disc failure and had to replace the disc. I followed the description on
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
and started the replacement. The setup is a two-disc RAID1.
After it was done, I called 'btrfs fi us /mn/btrfsroot' and got the
output below. What is wrong?
Is it a rebalancing issue? I thought the replace command started that
automatically...

Overall:
    Device size:                   3.64TiB
    Device allocated:              1.04TiB
    Device unallocated:            2.60TiB
    Device missing:                  0.00B
    Used:                        519.76GiB
    Free (estimated):              1.56TiB  (min: 1.56TiB)
    Data ratio:                       2.00
    Metadata ratio:                   1.60
    Global reserve:              279.11MiB  (used: 0.00B)

Data,single: Size:1.00GiB, Used:0.00B
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     1.00GiB

Data,RAID1: Size:265.00GiB, Used:259.60GiB
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8   265.00GiB
   /dev/mapper/luks-ff4bf5da-48af-4563-abb2-db083bd01512   265.00GiB

Data,DUP: Size:264.00GiB, Used:0.00B
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8   528.00GiB

Metadata,single: Size:1.00GiB, Used:0.00B
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     1.00GiB

Metadata,RAID1: Size:1.00GiB, Used:286.03MiB
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     1.00GiB
   /dev/mapper/luks-ff4bf5da-48af-4563-abb2-db083bd01512     1.00GiB

Metadata,DUP: Size:512.00MiB, Used:112.00KiB
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     1.00GiB

System,single: Size:32.00MiB, Used:0.00B
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8    32.00MiB

System,RAID1: Size:8.00MiB, Used:48.00KiB
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     8.00MiB
   /dev/mapper/luks-ff4bf5da-48af-4563-abb2-db083bd01512     8.00MiB

System,DUP: Size:32.00MiB, Used:48.00KiB
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8    64.00MiB

Unallocated:
   /dev/mapper/luks-3ff6c412-4d3a-4d33-85a3-cc70e95c26f8     1.04TiB
   /dev/mapper/luks-ff4bf5da-48af-4563-abb2-db083bd01512     1.56TiB
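In case it helps, leftover single/DUP chunks like the ones above are usually
cleaned up with a convert balance, roughly like this (a sketch; the mount point
is a placeholder, 'soft' skips chunks that already have the target profile, and
explicitly converting the System chunks would additionally need -sconvert,
which btrfs-progs as far as I know only accepts together with --force):

  # convert any remaining single/DUP block groups back to RAID1
  btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt/btrfsroot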

Bye
Frank


Safe to use 'clear_cache' in mount -o remount?

2017-08-06 Thread Cloud Admin
Hi,
is it safe (and does it have any effect?) to use the 'clear_cache' option in a
'mount -o remount'? I see messages in my kernel log like
'BTRFS info (device dm-7): The free space cache file (31215079915520)
is invalid. skip it'. I would like to fix this, ideally without rebooting.
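For illustration, the two options under discussion would be the following
(a sketch; device and mount point are placeholders, and whether clear_cache
takes effect on a remount rather than only on a fresh mount is exactly the
open question here):

  # the remount variant being asked about
  mount -o remount,clear_cache /mnt/point

  # the conservative alternative: a full unmount/mount cycle
  umount /mnt/point
  mount -o clear_cache /dev/mapper/example /mnt/point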
Bye
Frank


Re: Best Practice: Add new device to RAID1 pool (Summary)

2017-07-29 Thread Cloud Admin
On Monday, 24.07.2017 at 18:40 +0200, Cloud Admin wrote:
> On Monday, 24.07.2017 at 10:25 -0400, Austin S. Hemmelgarn wrote:
> > On 2017-07-24 10:12, Cloud Admin wrote:
> > > On Monday, 24.07.2017 at 09:46 -0400, Austin S. Hemmelgarn wrote:
> > > > On 2017-07-24 07:27, Cloud Admin wrote:
> > > > > Hi,
> > > > > I have a multi-device pool (three discs) as RAID1. Now I want to
> > > > > add a new disc to increase the pool. I followed the description on
> > > > > https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
> > > > > and used 'btrfs add <device> <mount point>'. After that I called a
> > > > > balance for rebalancing the RAID1 using 'btrfs balance start
> > > > > <mount path>'.
> > > > > Is that everything, or do I need to call a resize (for example) or
> > > > > anything else? Or do I need to specify filter/profile parameters
> > > > > for balancing?
> > > > > I am a little bit confused because the balance command has been
> > > > > running for 12 hours and only 3GB of data have been touched. This
> > > > > would mean the whole balance process (new disc has 8TB) would run
> > > > > a long, long time... and it is using one CPU at 100%.
> > > > 
> > > > Based on what you're saying, it sounds like you've either run into a
> > > > bug, or have a huge number of snapshots on this filesystem.
> > > 
> > > It depends what you define as huge. The call of 'btrfs sub list <mount
> > > path>' returns a list of 255 subvolumes.
> > 
> > OK, this isn't horrible, especially if most of them aren't snapshots
> > (it's cross-subvolume reflinks that are most of the issue when it comes
> > to snapshots, not the fact that they're subvolumes).
> > > I think this is not too huge. Most of these subvolumes were created
> > > by docker itself. I am cancelling the balance (this will take a while)
> > > and will try to delete some of these subvolumes/snapshots.
> > > What can I do more?
> > 
> > As Roman mentioned in his reply, it may also be qgroup related.  If
> > you run:
> > btrfs quota disable
> 
> It seems quota was one part of it. Thanks for the tip. I disabled it and
> started the balance again.
> Now approx. one chunk is relocated every 5 minutes. But if I take the
> reported 10860 chunks and calculate, it will take ~37 days to finish...
> So it seems I have to invest more time into figuring out the
> subvolume/snapshot structure created by docker.
> A first deeper look shows there is a subvolume with a snapshot, which
> itself has a snapshot, and so forth.
Now the balance process has finished after 127h and the new disc is in the
pool... Not as long as expected, but in my opinion long enough. Quota
seems to be one big driver in my case. What I could see over time: at the
beginning many extents were relocated while ignoring the new disc. It could
probably be a good idea to rebalance using a filter (like -dusage=30, for
example) before adding the new disc, to decrease the time.
But that is only theory; I will try to keep it in mind for the next time.
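For illustration, such a pre-add filtered balance might look like this (a
sketch; the mount point is a placeholder and the 30% threshold is just an
example value):

  # compact block groups that are at most 30% full before adding the new disc
  btrfs balance start -dusage=30 -musage=30 /mnt/pool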

Thanks all for your tips, ideas and time!
Frank



Re: Best Practice: Add new device to RAID1 pool

2017-07-25 Thread Cloud Admin
On Monday, 24.07.2017 at 20:42, Hugo Mills wrote:
> On Mon, Jul 24, 2017 at 02:35:05PM -0600, Chris Murphy wrote:
> > On Mon, Jul 24, 2017 at 5:27 AM, Cloud Admin
> > <admin@cloud.haefemeier.eu> wrote:
> > 
> > > I am a little bit confused because the balance command has been running
> > > for 12 hours and only 3GB of data have been touched.
> > 
> > That's incredibly slow. Something isn't right.
> > 
> > Using btrfs-debug -b from btrfs-progs, I've selected a few 100%
> > full chunks.
> > 
> > [156777.077378] f26s.localdomain sudo[13757]:chris : TTY=pts/2
> > ;
> > PWD=/home/chris ; USER=root ; COMMAND=/sbin/btrfs balance start
> > -dvrange=157970071552..159043813376 /
> > [156773.328606] f26s.localdomain kernel: BTRFS info (device sda1):
> > relocating block group 157970071552 flags data
> > [156800.408918] f26s.localdomain kernel: BTRFS info (device sda1):
> > found 38952 extents
> > [156861.343067] f26s.localdomain kernel: BTRFS info (device sda1):
> > found 38951 extents
> > 
> > That 1GiB chunk with quite a few fragments took 88s. That's 11MB/s.
> > Even for a hard drive, that's slow. I've got maybe a dozen snapshots
> > on this particular volume and quotas are not enabled. By definition
> > all of those extents are sequential. So I'm not sure why it's taking
> > so long. Seems almost like a regression somewhere. A nearby chunk with
> > ~23k extents only takes 45s to balance. And another chunk with ~32000
> > extents took 55s to balance.
> 
>    In my experience, it's pretty consistent at about a minute per 1
> GiB for data on rotational drives on RAID-1. For metadata, it can go
> up to several hours (or more) per 256 MiB chunk, depending on what
> kind of metadata it is. With extents shared between lots of files, it
> slows down. In my case, with a few hundred snapshots of the same
> thing, my system was taking 4h per chunk for the chunks full of the
> extent tree.
After disabling quota the balancing is now working faster. After 27h
approx. 1.3TB are done. It took around 4h of rearranging the data on the
old three discs before the process started to use the new one. Since then
it is processing much faster.

Bye
Frank


Re: Best Practice: Add new device to RAID1 pool

2017-07-25 Thread Cloud Admin
On Monday, 24.07.2017 at 23:12 +0200, waxhead wrote:
> 
> Chris Murphy wrote:
> 
> This may be a stupid question, but is your pool of butter (or BTRFS
> pool) by any chance hooked up via USB? If this is USB2.0 at 
No, it is a SATA array with (currently) four 8TB discs.



Re: Best Practice: Add new device to RAID1 pool

2017-07-24 Thread Cloud Admin
On Monday, 24.07.2017 at 10:25 -0400, Austin S. Hemmelgarn wrote:
> On 2017-07-24 10:12, Cloud Admin wrote:
> > On Monday, 24.07.2017 at 09:46 -0400, Austin S. Hemmelgarn wrote:
> > > On 2017-07-24 07:27, Cloud Admin wrote:
> > > > Hi,
> > > > I have a multi-device pool (three discs) as RAID1. Now I want to
> > > > add a new disc to increase the pool. I followed the description on
> > > > https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
> > > > and used 'btrfs add <device> <mount point>'. After that I called a
> > > > balance for rebalancing the RAID1 using 'btrfs balance start
> > > > <mount path>'.
> > > > Is that everything, or do I need to call a resize (for example) or
> > > > anything else? Or do I need to specify filter/profile parameters
> > > > for balancing?
> > > > I am a little bit confused because the balance command has been
> > > > running for 12 hours and only 3GB of data have been touched. This
> > > > would mean the whole balance process (new disc has 8TB) would run
> > > > a long, long time... and it is using one CPU at 100%.
> > > 
> > > Based on what you're saying, it sounds like you've either run into a
> > > bug, or have a huge number of snapshots on this filesystem.
> > 
> > It depends what you define as huge. The call of 'btrfs sub list <mount
> > path>' returns a list of 255 subvolumes.
> 
> OK, this isn't horrible, especially if most of them aren't snapshots
> (it's cross-subvolume reflinks that are most of the issue when it comes
> to snapshots, not the fact that they're subvolumes).
> > I think this is not too huge. Most of these subvolumes were created
> > by docker itself. I am cancelling the balance (this will take a while)
> > and will try to delete some of these subvolumes/snapshots.
> > What can I do more?
> 
> As Roman mentioned in his reply, it may also be qgroup related.  If
> you run:
> btrfs quota disable
It seems quota was one part of it. Thanks for the tip. I disabled it and
started the balance again.
Now approx. one chunk is relocated every 5 minutes. But if I take the
reported 10860 chunks and calculate, it will take ~37 days to finish...
So it seems I have to invest more time into figuring out the
subvolume/snapshot structure created by docker.
A first deeper look shows there is a subvolume with a snapshot, which
itself has a snapshot, and so forth.
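For illustration, the sequence described above looks roughly like this (a
sketch; the mount point is a placeholder):

  # turn off qgroup accounting, which can slow a balance down considerably
  btrfs quota disable /mnt/pool

  # cancel the running balance and start it again without quotas
  btrfs balance cancel /mnt/pool
  btrfs balance start /mnt/pool

  # check progress from another shell
  btrfs balance status /mnt/pool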
> 
> On the filesystem in question, that may help too, and if you are using
> quotas, turning them off with that command will get you a much bigger
> performance improvement than removing all the snapshots.


Re: Best Practice: Add new device to RAID1 pool

2017-07-24 Thread Cloud Admin
On Monday, 24.07.2017 at 19:08 +0500, Roman Mamedov wrote:
> On Mon, 24 Jul 2017 09:46:34 -0400
> "Austin S. Hemmelgarn"  wrote:
> 
> > > I am a little bit confused because the balance command has been running
> > > for 12 hours and only 3GB of data have been touched. This would mean the
> > > whole balance process (new disc has 8TB) would run a long, long time...
> > > and it is using one CPU at 100%.
> > 
> > Based on what you're saying, it sounds like you've either run into a
> > bug, or have a huge number of snapshots
> 
> ...and possibly quotas (qgroups) enabled (perhaps automatically by some
> tool, and not by you). Try:
> 
>   btrfs quota disable <mountpoint>
> 
It seems this was one part of my problem. See my answer to Austin.
> 
> With respect,
> Roman


Re: Best Practice: Add new device to RAID1 pool

2017-07-24 Thread Cloud Admin
On Monday, 24.07.2017 at 09:46 -0400, Austin S. Hemmelgarn wrote:
> On 2017-07-24 07:27, Cloud Admin wrote:
> > Hi,
> > I have a multi-device pool (three discs) as RAID1. Now I want to add a
> > new disc to increase the pool. I followed the description on
> > https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
> > and used 'btrfs add <device> <mount point>'. After that I called a balance
> > for rebalancing the RAID1 using 'btrfs balance start <mount path>'.
> > Is that everything, or do I need to call a resize (for example) or
> > anything else? Or do I need to specify filter/profile parameters for
> > balancing?
> > I am a little bit confused because the balance command has been running
> > for 12 hours and only 3GB of data have been touched. This would mean the
> > whole balance process (new disc has 8TB) would run a long, long time...
> > and it is using one CPU at 100%.
> 
> Based on what you're saying, it sounds like you've either run into a 
> bug, or have a huge number of snapshots on this filesystem.  

It depends what you define as huge. The call of 'btrfs sub list <mount path>'
returns a list of 255 subvolumes.
I think this is not too huge. Most of these subvolumes were created by docker
itself. I am cancelling the balance (this will take a while) and will try to
delete some of these subvolumes/snapshots.
What can I do more?
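For illustration, inspecting and removing such subvolumes could be done roughly
like this (a sketch; the paths are placeholders, and subvolumes still
referenced by docker should be removed through docker rather than by hand):

  # list all subvolumes, or only those that are snapshots
  btrfs subvolume list /mnt/pool
  btrfs subvolume list -s /mnt/pool

  # delete one that is no longer needed
  btrfs subvolume delete /mnt/pool/path/to/unused-snapshot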

> What you described is exactly what you should be doing when expanding an
> array (add the device, then run a full balance).  The fact that it's taking
> this long isn't normal, unless you have very slow storage devices.


Best Practice: Add new device to RAID1 pool

2017-07-24 Thread Cloud Admin
Hi,
I have a multi-device pool (three discs) as RAID1. Now I want to add a
new disc to increase the pool. I followed the description on
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
and used 'btrfs add <device> <mount point>'. After that I called a balance
for rebalancing the RAID1 using 'btrfs balance start <mount path>'.
Is that everything, or do I need to call a resize (for example) or
anything else? Or do I need to specify filter/profile parameters for
balancing?
I am a little bit confused because the balance command has been running for
12 hours and only 3GB of data have been touched. This would mean the whole
balance process (new disc has 8TB) would run a long, long time... and it
is using one CPU at 100%.
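For illustration, the two steps described above are roughly (a sketch; device
and mount point are placeholders):

  # add the new disc to the existing RAID1 pool
  btrfs device add /dev/sdX /mnt/pool

  # spread existing chunks across all devices; without filters this is a
  # full balance (recent btrfs-progs prints a warning, see --full-balance)
  btrfs balance start /mnt/pool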
Thanks for your help and time.
Bye
Frank
