Re: Replacing RAID-1 devices with larger disks

2016-02-28 Thread Christian Robottom Reis
On Sun, Feb 28, 2016 at 05:15:32PM -0300, Christian Robottom Reis wrote:
> I've managed to do the actual swap using a series of btrfs replace
> commands with no special arguments, and the system is now live and
> booting from the 256GB drives. However, I haven't actually noticed any
> difference in btrfs fi show output, and usage looks weird. Has anyone
> seen this before or have a clue as to why?

Yes, now I do, about 10 minutes after writing that mail. After a btrfs
replace, if the device being added is larger than the original device,
you need to issue:

btrfs fi resize <devid>:max <mountpoint>

to actually use that disk space. So for something like:

> Label: 'root'  uuid: 670d1132-00dc-4511-a2f6-d28ce08b4d3a
> Total devices 2 FS bytes used 9.33GiB
> devid    1 size 13.97GiB used 11.78GiB path /dev/sda1
> devid    2 size 13.97GiB used 11.78GiB path /dev/sdb1
> 
> Label: 'var'  uuid: 815b3280-e90f-483a-b244-1d2dfe9b6e67
> Total devices 2 FS bytes used 56.14GiB
> devid    1 size 80.00GiB used 80.00GiB path /dev/sda3
> devid    2 size 80.00GiB used 80.00GiB path /dev/sdb3

You need to do:

btrfs fi resize 1:max /
btrfs fi resize 2:max /

btrfs fi resize 1:max /var
btrfs fi resize 2:max /var

And it looks great now:

Label: 'root'  uuid: 670d1132-00dc-4511-a2f6-d28ce08b4d3a
Total devices 2 FS bytes used 9.34GiB
devid    1 size 40.00GiB used 10.78GiB path /dev/sda1
devid    2 size 40.00GiB used 10.78GiB path /dev/sdb1

Label: 'var'  uuid: 815b3280-e90f-483a-b244-1d2dfe9b6e67
Total devices 2 FS bytes used 56.16GiB
devid    1 size 160.00GiB used 80.00GiB path /dev/sda3
devid    2 size 160.00GiB used 80.00GiB path /dev/sdb3
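
For the archives, the whole per-filesystem sequence boils down to something
like this sketch (/dev/sdX1 stands in for whatever name the new disk shows
up under; repeat for devid 2, and likewise for /var):

btrfs replace start 1 /dev/sdX1 /   # copy devid 1 onto the new disk
btrfs replace status /              # wait for the copy to finish
btrfs fi resize 1:max /             # grow to fill the larger partition
btrfs fi show /                     # devid sizes should now be bigger
btrfs fi usage /                    # extra room shows up as unallocated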

This would be nice to document in the manpage for replace; it would also
be a good addition to the top Google hit for replacing a btrfs RAID-1
device:


http://unix.stackexchange.com/questions/227560/how-to-replace-a-device-in-btrfs-raid-1-filesystem

but I don't have enough reputation to do it myself.
-- 
Christian Robottom Reis | [+55 16] 3376 0125   | http://async.com.br/~kiko
| [+55 16] 991 126 430 | http://launchpad.net/~kiko


Replacing RAID-1 devices with larger disks

2016-02-28 Thread Christian Robottom Reis
Hello there,

I'm running a btrfs RAID-1 on two 128GB SSDs that were getting kind
of full. I found two 256GB SSDs that I plan to use to replace the 128GB
versions.

I've managed to do the actual swap using a series of btrfs replace
commands with no special arguments, and the system is now live and
booting from the 256GB drives. However, I haven't actually noticed any
difference in btrfs fi show output, and usage looks weird. Has anyone
seen this before or have a clue as to why?

The relevant partition sizes are now (sdb is identical):

/dev/sda1   *        2048    83888127    41943040   83  Linux
/dev/sda3        92276736   427821055   167772160   83  Linux

Here's the show output:

Label: 'root'  uuid: 670d1132-00dc-4511-a2f6-d28ce08b4d3a
Total devices 2 FS bytes used 9.33GiB
devid    1 size 13.97GiB used 11.78GiB path /dev/sda1
devid    2 size 13.97GiB used 11.78GiB path /dev/sdb1

Label: 'var'  uuid: 815b3280-e90f-483a-b244-1d2dfe9b6e67
Total devices 2 FS bytes used 56.14GiB
devid    1 size 80.00GiB used 80.00GiB path /dev/sda3
devid    2 size 80.00GiB used 80.00GiB path /dev/sdb3

Those sizes have not changed over the replace; i.e. the original sda1/sdb1
pair was 14GB and the sda3/sdb3 pair was 80GB, and they are still reported
at those sizes.

And btrfs fi usage output for / is now weird:

Overall:
    Device size:                  27.94GiB
    Device allocated:             21.56GiB
    Device unallocated:            6.38GiB
    Device missing:                  0.00B
    Used:                         18.66GiB
    Free (estimated):              3.99GiB  (min: 3.99GiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:              208.00MiB  (used: 0.00B)

Data,RAID1: Size:9.00GiB, Used:8.20GiB
   /dev/sda1   9.00GiB
   /dev/sdb1   9.00GiB

Metadata,RAID1: Size:1.75GiB, Used:1.13GiB
   /dev/sda1   1.75GiB
   /dev/sdb1   1.75GiB

System,RAID1: Size:32.00MiB, Used:16.00KiB
   /dev/sda1  32.00MiB
   /dev/sdb1  32.00MiB

Usage for /var also looks wrong, but in a different way:

Overall:
    Device size:                 160.00GiB
    Device allocated:            160.00GiB
    Device unallocated:            2.00MiB
    Device missing:                  0.00B
    Used:                        112.28GiB
    Free (estimated):             21.20GiB  (min: 21.20GiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB  (used: 0.00B)

Data,RAID1: Size:74.97GiB, Used:53.77GiB
   /dev/sda3  74.97GiB
   /dev/sdb3  74.97GiB

Metadata,RAID1: Size:5.00GiB, Used:2.37GiB
   /dev/sda3   5.00GiB
   /dev/sdb3   5.00GiB

System,RAID1: Size:32.00MiB, Used:16.00KiB
   /dev/sda3  32.00MiB
   /dev/sdb3  32.00MiB

Unallocated:
   /dev/sda3   1.00MiB
   /dev/sdb3   1.00MiB


Version information:

async@riff:~$ uname -a
Linux riff 4.2.0-30-generic #36~14.04.1-Ubuntu SMP Fri Feb 26 18:49:23
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

async@riff:~$ btrfs --version
btrfs-progs v4.0

Thanks,
-- 
Christian Robottom Reis | [+55 16] 3376 0125   | http://async.com.br/~kiko



qgroup limit clearing, was Re: Btrfs progs release 4.1

2015-06-22 Thread Christian Robottom Reis
On Mon, Jun 22, 2015 at 05:00:23PM +0200, David Sterba wrote:
   - qgroup:
     - show: distinguish no limits and 0 limit value
     - limit: ability to clear the limit

I'm using kernel 4.1-rc7 as per:

root@riff:/var/lib/lxc/juju-trusty-lxc-template/rootfs# uname -a
Linux riff 4.1.0-040100rc7-generic #201506080035 SMP Mon Jun 8 04:36:20 UTC 
2015 x86_64 x86_64 x86_64 GNU/Linux

But apart from still having major issues with qgroups (quota enforcement
triggers even when there seems to be plenty of free space), clearing a
limit with btrfs-progs 4.1 doesn't revert it back to 'none'; instead it
confusingly sets the limit to 16EiB. Using:

root@riff:/var/lib/lxc/juju-trusty-lxc-template/rootfs# btrfs version
btrfs-progs v4.1

I start from:

qgroupid         rfer         excl     max_rfer     max_excl
--------         ----         ----     --------     --------
0/5           2.15GiB      1.95GiB         none         none
0/261         1.42GiB      1.11GiB         none    100.00GiB
0/265         1.09GiB    600.59MiB         none    100.00GiB
0/271       793.32MiB    366.40MiB         none    100.00GiB
0/274       514.96MiB    142.92MiB         none    100.00GiB

I then issue:

root@riff# btrfs qgroup limit -e none 261 /var
root@riff# btrfs qgroup limit none 261 /var
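
Checking the result with qgroup show again (the -r/-e flags include the
max_rfer/max_excl columns):

root@riff# btrfs qgroup show -r -e /var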

I end up with:

qgroupid         rfer         excl     max_rfer     max_excl
--------         ----         ----     --------     --------
0/5           2.15GiB      1.95GiB         none         none
0/261         1.42GiB      1.11GiB     16.00EiB     16.00EiB
0/265         1.09GiB    600.59MiB         none    100.00GiB
0/271       793.32MiB    366.40MiB         none    100.00GiB
0/274       514.96MiB    142.92MiB         none    100.00GiB

Is that expected?
-- 
Christian Robottom Reis | [+55 16] 3376 0125   | http://async.com.br/~kiko
CEO, Async Open Source  | [+55 16] 9 9112 6430 | http://launchpad.net/~kiko


Re: Quota limit question

2015-03-06 Thread Christian Robottom Reis
Just as a follow-up, I upgraded btrfs-tools and the kernel again. I
currently have a filesystem which reports 1G exclusive use:

root@riff# btrfs qg show -r -e /var -p -c
qgroupid       rfer       excl   max_rfer    max_excl   parent  child
--------       ----       ----   --------    --------   ------  -----
0/261       1.52GiB    1.01GiB      0.00B   100.00GiB      ---    ---

This subvolume reports being over quota, and removing the limit fixes that:

root@riff# touch x
touch: cannot touch ‘x’: Disk quota exceeded
root@riff# btrfs qg limit -e none 261 /var
root@riff# touch x
root@riff# 

So at the moment quotas are pretty much unusable in kernel 3.18.6/tools
3.18.2, at least for my use case, and that's a bit surprising since there
isn't anything very interesting about it (other than that it hosts a bunch
of lxc-cloned root filesystems).

I've proactively CC'ed Yang, who has submitted a few patches on quota
checking recently, to ask whether he thinks this should already be fixed
in a trunk kernel, or whether he'd like to investigate this further.
Thanks!
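
For anyone else who lands here: one low-risk thing to try before digging
deeper is forcing the qgroup accounting to be rebuilt; a sketch, assuming
/var is the affected filesystem:

btrfs quota rescan /var      # rebuild qgroup accounting from scratch
btrfs quota rescan -s /var   # check whether the rescan has finished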

On Wed, Dec 24, 2014 at 03:52:41AM +, Duncan wrote:
 Christian Robottom Reis posted on Tue, 23 Dec 2014 18:36:02 -0200 as
 excerpted:
 
  On Tue, Dec 16, 2014 at 11:15:37PM -0200, Christian Robottom Reis wrote:
  # btrfs qgroup limit 2000m 0/261 . && touch x
  touch: cannot touch ‘x’: Disk quota exceeded
  
  The strange thing is that it doesn't seem to be actually out of space:
  
  # btrfs qgroup show -p -r -e /var | grep 261
  0/261   810048   391114752   2097152000   0   ---
  
  Replying to myself as I had not yet been subscribed in time to receive a
  reply; I just upgraded to 3.18.1 and am seeing the same issue on the
  same subvolume (and on no others).
 
 Looking at the thread here on gmane.org (list2news and list2web gateway), 
 it appears my reply was the only reply in any case, and it was general as 
 I don't run quotas myself.
 
 Basically I suggested upgrading, as the quota code has some rather huge 
 bugs in it (quotas could go seriously negative!) in the old versions 
 you were running.  But you've upgraded at least the kernel now (userspace 
 you didn't say).
 
 Here's a link to the thread on the gmane web interface for completeness, 
 but the above about covers my reply, which, as I said, was the only one 
 until your thread bump and my reply here, so there's not much new there 
 unless someone posts further followups to this thread...
 
 
 http://comments.gmane.org/gmane.comp.file-systems.btrfs/41491
 
 
 -- 
 Duncan - List replies preferred.   No HTML msgs.
 Every nonfree program has a lord, a master --
 and if you use the program, he is your master.  Richard Stallman
 
-- 
Christian Robottom Reis   | [+1] 612 888 4935| http://launchpad.net/~kiko
Canonical VP Hyperscale   | [+55 16] 9 9112 6430


Re: [PATCH] btrfs-progs: make btrfs qgroups show human readable sizes

2015-01-09 Thread Christian Robottom Reis
On Fri, Jan 09, 2015 at 02:47:05PM +0800, Fan Chengniang wrote:
 make btrfs qgroups show human readable sizes, using -h option, example:

Oh! This is really nice. I wonder, would there be a sane way to show the
actual path the qgroup is associated with as well?
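
In the meantime, here's a rough way to correlate them by hand, since the
0/NNN tail of the qgroup ID is just the subvolume ID (only a sketch; it
assumes paths without spaces):

btrfs subvolume list /var | awk '{ print $2, $NF }'   # subvol ID -> path
btrfs qgroup show /var                                # qgroupid is 0/ID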


Re: Quota limit question

2014-12-23 Thread Christian Robottom Reis
On Tue, Dec 16, 2014 at 11:15:37PM -0200, Christian Robottom Reis wrote:
 # btrfs qgroup limit 2000m 0/261 . && touch x
 touch: cannot touch ‘x’: Disk quota exceeded
 
 The strange thing is that it doesn't seem to be actually out of space:
 
 # btrfs qgroup show -p -r -e /var | grep 261
 0/261   810048   391114752   2097152000   0   ---

Replying to myself as I had not yet been subscribed in time to receive a
reply; I just upgraded to 3.18.1 and am seeing the same issue on the
same subvolume (and on no others).

root@riff:/etc# uname -a
Linux riff 3.18.1-031801-generic #201412170637 SMP Wed Dec 17 11:38:50
UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

It's quite odd that this specific subvolume acts up, given that there
are quite a few others that are closer to the quota:

subvol                  group    total            unshared
----------------------------------------------------------------
(unknown)               0/5       1.37G / none     1.16G /   none
lxc-template1/rootfs    0/259     0.68G / none     0.10G /  2.00G
machine-2/rootfs        0/261     1.07G / none     0.40G /  2.00G
machine-3/rootfs        0/265     1.17G / none     0.41G /  2.00G
lxc-template2/rootfs    0/271     0.77G / none     0.31G /  2.00G
lxc-template3/rootfs    0/274     0.46G / none     0.02G /  2.00G
machine-4/rootfs        0/283     7.12G / none     6.21G / 10.00G
machine-5/rootfs        0/288     1.05G / none     0.34G /  2.00G
machine-6/rootfs        0/289    11.33G / none    10.74G / 15.00G
machine-7/rootfs        0/290     1.30G / none     0.68G /  2.00G
machine-8/rootfs        0/292     1.00G / none     0.33G /  2.00G
machine-9/rootfs        0/293     1.17G / none     0.38G /  2.00G
machine-10/rootfs       0/306     1.34G / none     0.62G /  2.00G
machine-11/rootfs       0/318     9.49G / none     8.75G / 15.00G
lxc-template4/rootfs    0/320     0.79G / none     0.78G /  2.00G
machine-14/rootfs       0/323     1.10G / none     0.45G /  2.00G

The LWN article suggests that btrfs is quite conservative with quotas,
but shouldn't 265, 290, 306, 320 and 323 all be out of quota as well? Or
is there a lot else that goes into the calculation beyond the numbers
reported by btrfs qgroup show?

What could I do to help investigate further?
-- 
Christian Robottom Reis | [+55 16] 3376 0125   | http://async.com.br/~kiko
CEO, Async Open Source  | [+55 16] 9 9112 6430 | http://launchpad.net/~kiko


Quota limit question

2014-12-16 Thread Christian Robottom Reis
Hello there,

I'm trying out btrfs on a machine we use to host a number of
containers. After a misbehaved process filled the partition allocated to
the containers, I decided to experiment with quotas to isolate the
containers from each other. But I've now run into an oddity with one of
the containers, which reports being out of space:

# btrfs qgroup limit 2000m 0/261 . && touch x
touch: cannot touch ‘x’: Disk quota exceeded

The strange thing is that it doesn't seem to be actually out of space:

# btrfs qgroup show -p -r -e /var | grep 261
0/261   810048   391114752   2097152000   0   ---

which pretty-printed is 1.04G rfer and 0.36G excl (perhaps the qgroup
show command could take an option to display in other units?)

I can only get it to allow me to start using it again if I go over 5808M:

# btrfs qgroup limit 5807m 0/261 . && rm -f x && touch x
rm: cannot remove ‘x’: Disk quota exceeded
# btrfs qgroup limit 5808m 0/261 . && rm -f x && touch x
#

Why specifically 5808, I'm not sure; I binary-searched until I landed on
that number. Does anyone have a clue as to why this might be happening,
and perhaps what I'm missing?
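
In case it matters, the quota setup itself was just the stock workflow,
roughly along these lines (0/261 and the 2000m figure are the ones from
the example above):

btrfs quota enable /var                 # turn on qgroup accounting
btrfs subvolume list /var               # find each container's subvol ID
btrfs qgroup limit 2000m 0/261 /var     # cap that container at 2000MiB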

For completeness, some details on the filesystem and system:

# btrfs fi show /var
Label: var  uuid: 815b3280-e90f-483a-b244-1d2dfe9b6e67
Total devices 2 FS bytes used 31.48GiB
devid    1 size 80.00GiB used 55.91GiB path /dev/sda3
devid    2 size 80.00GiB used 55.91GiB path /dev/sdb3

root@riff:/var/lib/lxc/async-local-machine-2/rootfs# btrfs fi df /var
Data, RAID1: total=53.88GiB, used=30.45GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=2.00GiB, used=1.03GiB

# btrfs qgroup show -p -r -e /var
qgroupid         rfer          excl      max_rfer      max_excl   parent
--------         ----          ----      --------      --------   ------
0/5        1486852096    1252569088             0             0   ---
0/259       727175168     104947712             0    5368709120   ---
0/261          810048     391114752    2097152000             0   ---
0/265      1255923712     442871808             0    5368709120   ---
0/271       831856640     333189120             0    5368709120   ---
0/274       498761728     228270080             0    5368709120   ---
0/283      7666098176    6691426304   10737418240             0   ---
0/288      1118441472     348901376             0    5368709120   ---
0/289     11134029824   10498187264   16106127360             0   ---
0/290      1412505600     694210560   10737418240             0   ---
0/292      1131053056         73440             0    5368709120   ---
0/293      1258176512     401141760             0    5368709120   ---
0/306      1430532096     656773120             0    5368709120   ---
0/318      9309212672    8509857792   10737418240             0   ---
0/320       860209152     837406720             0    5368709120   ---
0/323      1167962112     469741568             0    5368709120   ---

# btrfs --version
Btrfs v3.12

# uname -a
Linux riff 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014
x86_64 x86_64 x86_64 GNU/Linux

Thanks,
-- 
Christian Robottom Reis | [+55 16] 3376 0125   | http://async.com.br/~kiko
Async Open Source   | [+55 16] 9 9112 6430 | http://launchpad.net/~kiko