Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-03 Thread Goffredo Baroncelli
On 11/02/2012 08:05 PM, Gabriel wrote:
 On Fri, 02 Nov 2012 13:02:32 +0100, Goffredo Baroncelli wrote:
 On 2012-11-02 12:18, Martin Steigerwald wrote:
[...]
 Could we use "Chunk(s) capacity" instead of total/size? I would like an
 opinion from an English speaker's point of view...
 
 This is easy to fix, here's a mockup:
 
 Metadata,DUP: Size: 1.75GB ×2, Used: 627.84MB ×2
    /dev/dm-0    3.50GB
 
             Data    Metadata  Metadata    System  System
             Single  Single    DUP         Single  DUP          Unallocated

 /dev/dm-16  1.31TB  8.00MB    56.00GB     4.00MB  16.00MB      0.00
             ======  ========  ==========  ======  ===========  ===========
 Total       1.31TB  8.00MB    28.00GB ×2  4.00MB  8.00MB ×2    0.00
 Used        1.31TB  0.00      5.65GB ×2   0.00    152.00KB ×2

I want to point out that we had a lot of difficulty showing that a
chunk has a capacity, while the space consumed on the disk(s) is
greater. This led us to show both the chunk (in terms of Type and
Profile [*]) and the disk usage.
The only thing that will remain a bit unclear is when the Profile is
DUP, because the disk usage is double the space available.
For this reason I am considering putting the ×2 on the line related to
the disks. Putting a ×2 in the total increases the confusion.

GB




[*] By type I mean Data, Metadata, System; by profile I mean
DUP, RAID1/0/10/5/6, Single...


 
 Also, I don't know if you could use libblkid, but it finds more 
 descriptive names than dm-NN (thanks to some smart sorting logic).
 
 
 


-- 
gpg @keyserver.linux.it: Goffredo Baroncelli (kreijackATinwind.it)
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5


Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-03 Thread Goffredo Baroncelli
On 11/03/2012 12:44 AM, Hugo Mills wrote:
 1 MiB stored in RAID-5 across 3 devices takes up 1.5 MiB -- multiplier ×1.5
(1 MiB over 2 devices is 512 KiB, plus an additional 512 KiB for parity)
 1 MiB stored in RAID-5 across 6 devices takes up 1.2 MiB -- multiplier ×1.2
(1 MiB over 5 devices is 204.8 KiB, plus an additional 204.8 KiB for 
 parity)
 
With the (initial) proposed implementation of RAID-5, the
 stripe-width (i.e. the number of devices used for any given chunk
 allocation) will be *as many as can be allocated*. Chris confirmed
 this today on IRC. So if I have a disk array of 2T, 2T, 2T, 1T, 1T,
 1T, then the first 1T of allocation will stripe across 6 devices,


Interesting.
Let me simulate a possible output:


 ./btrfs filesystem disk-usage -t /
            Data     Metadata  Metadata     System  System
            RAID5    Single    DUP          Single  DUP      Unallocated

/dev/dm-0   1.50TB   8.00MB    -            4.00MB  16.00MB  500.00MB
/dev/dm-1   1.50TB   -         -            4.00MB  16.00MB  500.00MB
/dev/dm-2   1.50TB   -         -            4.00MB  -        500.00MB
/dev/dm-3   1.00TB   -         2x 100.00MB  4.00MB  -        300.00MB
/dev/dm-4   1.00TB   -         2x 100.00MB  4.00MB  -        300.00MB
            =======  ========  ===========  ======  =======  ===========
Total       5.00TB   8.00MB    200.00MB     4.00MB  8.00MB   2.10GB
Used        10.65GB  0.00      50.00MB      0.00    4.00KB


Would it be clear? And what if we move the Total/Used rows below the header?

 ./btrfs filesystem disk-usage -t /
            Data     Metadata  Metadata     System  System
            RAID5    Single    DUP          Single  DUP      Unallocated

Total       5.00TB   8.00MB    200.00MB     4.00MB  8.00MB   2.10GB
Used        10.65GB  0.00      50.00MB      0.00    4.00KB
            =======  ========  ===========  ======  =======  ===========
/dev/dm-0   1.50TB   8.00MB    -            4.00MB  16.00MB  500.00MB
/dev/dm-1   1.50TB   -         -            4.00MB  16.00MB  500.00MB
/dev/dm-2   1.50TB   -         -            4.00MB  -        500.00MB
/dev/dm-3   1.00TB   -         2x 100.00MB  4.00MB  -        300.00MB
/dev/dm-4   1.00TB   -         2x 100.00MB  4.00MB  -        300.00MB




GB

P.S.:
The RAID5 is composed of (4+1) x 1TB chunks plus (2+1) x 0.5TB chunks,
supposing the disks are 3x 1.5TB and 2x 1TB.
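
To make the arithmetic explicit, here is a minimal sketch (assuming the
disk sizes above and an idealized allocator; this is not btrfs-progs
code) that reproduces the per-device raw usage and the 5.00TB data
capacity:

/* Sketch: check the mockup's RAID5 arithmetic. Disks: 3x 1.5TB
 * and 2x 1TB, allocated with the widest possible stripe first. */
#include <stdio.h>

int main(void)
{
        /* Phase 1: stripe over all 5 disks until the 1TB disks are
         * full: 4 data + 1 parity, 1TB consumed per disk. */
        double data1 = 4 * 1.0;
        /* Phase 2: stripe the last 0.5TB of the three larger disks:
         * 2 data + 1 parity. */
        double data2 = 2 * 0.5;

        printf("data capacity:        %.2fTB\n", data1 + data2); /* 5.00 */
        printf("raw used, 1.5TB disk: %.2fTB\n", 1.0 + 0.5);     /* 1.50 */
        printf("raw used, 1TB disk:   %.2fTB\n", 1.0);           /* 1.00 */
        return 0;
}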


-- 
gpg @keyserver.linux.it: Goffredo Baroncelli (kreijackATinwind.it)
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5


Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-03 Thread Goffredo Baroncelli
On 11/02/2012 11:06 PM, Hugo Mills wrote:
 non-integer with the RAID-5/6 code (which is due Real Soon Now).
Hi Hugo,

do you have more information about RAID? When will it land on the btrfs
earth? :-)

-- 
gpg @keyserver.linux.it: Goffredo Baroncelli (kreijackATinwind.it)
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5


Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-03 Thread cwillu
 do you have more information about raid ? When it will land on the btrfs
 earth ? :-)

An unnamed source recently said "today I'm fixing parity rebuild in
the middle of a read/modify/write. It's one of my last blockers", at
which point several gags about progress meters were made.


Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-02 Thread Martin Steigerwald
On Friday, 2 November 2012, Goffredo Baroncelli wrote:
 Hi all, on the basis of the discussion in the thread
 '[RFC] New attempt to a better btrfs fi df', I prepared the following
 set of patches.
 These patches update the btrfs fi df command and add two new commands:
 - btrfs filesystem disk-usage path
 - btrfs device disk-usage path
 
 The command btrfs filesystem df now shows only the disk
 usage/available.
 
 $ btrfs filesystem df /mnt/btrfs1/
 Disk size: 109.00GB
 Disk allocated:  5.90GB
 Disk unallocated:  103.10GB
 Used:  284.00KB
 Free (Estimated):   63.00GB   (Max: 106.51GB, min: 54.96GB)
 Data to disk ratio:58 %

This is coming along nicely.

Tested-By: Martin Steigerwald mar...@lichtvoll.de

I can test on some other boxes next week, if you want me to.

I just wonder about one thing:


merkaba:[…]/btrfs-progs-unstable ./btrfs fi df /
Disk size:18.62GB
Disk allocated:   18.62GB
Disk unallocated:0.00
Used: 11.26GB
Free (Estimated):  5.61GB   (Max: 5.61GB, min: 5.61GB)
Data to disk ratio:  91 %


merkaba:[…]/btrfs-progs-unstable ./btrfs filesystem disk-usage /
Data,Single: Size:15.10GB, Used:10.65GB
   /dev/dm-0   15.10GB

Metadata,Single: Size:8.00MB, Used:0.00
   /dev/dm-0    8.00MB

Metadata,DUP: Size:1.75GB, Used:627.84MB
   /dev/dm-0    3.50GB

System,Single: Size:4.00MB, Used:0.00
   /dev/dm-0    4.00MB

System,DUP: Size:8.00MB, Used:4.00KB
   /dev/dm-0   16.00MB

Unallocated:
   /dev/dm-0    0.00


merkaba:[…]/btrfs-progs-unstable ./btrfs filesystem disk-usage -t /
           Data     Metadata  Metadata  System  System
           Single   Single    DUP       Single  DUP      Unallocated

/dev/dm-0  15.10GB  8.00MB    3.50GB    4.00MB  16.00MB  0.00
           =======  ========  ========  ======  =======  ===========
Total      15.10GB  8.00MB    1.75GB    4.00MB  8.00MB   0.00
Used       10.65GB  0.00      627.84MB  0.00    4.00KB


Metadata, DUP is displayed as 3.50GB on the device level and as 1.75GB
in total. I understand the logic behind this, but this could be a bit
confusing.

But it makes sense: showing real allocation at the device level makes
sense, because that's what is really allocated on disk. Total makes
some sense, because that's what is being used from the trees by BTRFS.

It still looks confusing at first…

Maybe two sizes: One total with dup / raid1 / raid10 being accounted for
and one without?

Well, maybe just leave it as is for now. This output is for experienced
users.


merkaba:[…]/btrfs-progs-unstable ./btrfs device disk-usage /   
/dev/dm-0  18.62GB
   Data,Single: 15.10GB
   Metadata,Single:  8.00MB
   Metadata,DUP: 3.50GB
   System,Single:4.00MB
   System,DUP:  16.00MB
   Unallocated:0.00


This is a nice view of the disk. I know it's fully allocated by BTRFS,
and in order to make more free space for the data tree, for example, I'd
need to look at the tree usage and then, if it makes sense, do a balance
operation.

Well, in that case I plan to migrate metadata and system to single, and
then remove the DUP trees.

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-02 Thread Goffredo Baroncelli
On 2012-11-02 12:18, Martin Steigerwald wrote:
 I can test on some other boxes next week, if you want me to.

Yes, please.

 
 I just wonder about one thing:
 
 
 merkaba:[…]/btrfs-progs-unstable ./btrfs fi df /
 Disk size:18.62GB
 Disk allocated:   18.62GB
 Disk unallocated:0.00
 Used: 11.26GB
 Free (Estimated):  5.61GB   (Max: 5.61GB, min: 5.61GB)
 Data to disk ratio:  91 %
 
 
 merkaba:[…]/btrfs-progs-unstable ./btrfs filesystem disk-usage /
 Data,Single: Size:15.10GB, Used:10.65GB
    /dev/dm-0   15.10GB
 
 Metadata,Single: Size:8.00MB, Used:0.00
    /dev/dm-0    8.00MB
 
 Metadata,DUP: Size:1.75GB, Used:627.84MB
    /dev/dm-0    3.50GB
 
 System,Single: Size:4.00MB, Used:0.00
    /dev/dm-0    4.00MB
 
 System,DUP: Size:8.00MB, Used:4.00KB
    /dev/dm-0   16.00MB
 
 Unallocated:
    /dev/dm-0    0.00
 
 
 merkaba:[…]/btrfs-progs-unstable ./btrfs filesystem disk-usage -t /
            Data     Metadata  Metadata  System  System
            Single   Single    DUP       Single  DUP      Unallocated
 
 /dev/dm-0  15.10GB  8.00MB    3.50GB    4.00MB  16.00MB  0.00
            =======  ========  ========  ======  =======  ===========
 Total      15.10GB  8.00MB    1.75GB    4.00MB  8.00MB   0.00
 Used       10.65GB  0.00      627.84MB  0.00    4.00KB
 
 
 Metadata, DUP is displayed as 3.50GB on the device level and as 1.75GB
 in total. I understand the logic behind this, but this could be a bit
 confusing.
 
 But it makes sense: showing real allocation at the device level makes
 sense, because that's what is really allocated on disk. Total makes
 some sense, because that's what is being used from the trees by BTRFS.

Yes, me too. At first I was confused when you noticed this discrepancy,
so I have to admit that it is not so obvious to understand. However, we
didn't find any way to make it clearer...

 It still looks confusing at first…
Could we use "Chunk(s) capacity" instead of total/size? I would like an
opinion from an English speaker's point of view...

GB


-- 
gpg @keyserver.linux.it: Goffredo Baroncelli (kreijackATinwind.it)
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5


Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-02 Thread Gabriel
On Fri, 02 Nov 2012 13:02:32 +0100, Goffredo Baroncelli wrote:
 On 2012-11-02 12:18, Martin Steigerwald wrote:
 Metadata, DUP is displayed as 3.50GB on the device level and as 1.75GB
 in total. I understand the logic behind this, but this could be a bit
 confusing.
 
 But it makes sense: showing real allocation at the device level makes
 sense, because that's what is really allocated on disk. Total makes
 some sense, because that's what is being used from the trees by BTRFS.
 
 Yes, me too. At first I was confused when you noticed this discrepancy,
 so I have to admit that it is not so obvious to understand. However, we
 didn't find any way to make it clearer...
 
 It still looks confusing at first…
 Could we use "Chunk(s) capacity" instead of total/size? I would like an
 opinion from an English speaker's point of view...

This is easy to fix, here's a mockup:

Metadata,DUP: Size: 1.75GB ×2, Used: 627.84MB ×2
   /dev/dm-0    3.50GB

            Data    Metadata  Metadata    System  System
            Single  Single    DUP         Single  DUP          Unallocated

/dev/dm-16  1.31TB  8.00MB    56.00GB     4.00MB  16.00MB      0.00
            ======  ========  ==========  ======  ===========  ===========
Total       1.31TB  8.00MB    28.00GB ×2  4.00MB  8.00MB ×2    0.00
Used        1.31TB  0.00      5.65GB ×2   0.00    152.00KB ×2

Also, I don't know if you could use libblkid, but it finds more 
descriptive names than dm-NN (thanks to some smart sorting logic).
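
For illustration, a lookup could be as small as this sketch (the device
path and the choice of the LABEL tag are just examples, not a proposal
for the final output):

/* Sketch: ask libblkid for a friendlier name for a device.
 * Build with: gcc -o label label.c -lblkid */
#include <stdio.h>
#include <stdlib.h>
#include <blkid/blkid.h>

int main(void)
{
        blkid_cache cache;
        char *label;

        if (blkid_get_cache(&cache, NULL) < 0)
                return 1;

        /* Returns a malloc'ed copy of the LABEL tag, or NULL if the
         * device has no label; fall back to the raw name then. */
        label = blkid_get_tag_value(cache, "LABEL", "/dev/dm-0");
        printf("%s\n", label ? label : "/dev/dm-0");
        free(label);
        blkid_put_cache(cache);
        return 0;
}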




Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-02 Thread Goffredo Baroncelli
On 11/02/2012 08:05 PM, Gabriel wrote:
 On Fri, 02 Nov 2012 13:02:32 +0100, Goffredo Baroncelli wrote:
 On 2012-11-02 12:18, Martin Steigerwald wrote:
 Metadata, DUP is displayed as 3.50GB on the device level and as 1.75GB
 in total. I understand the logic behind this, but this could be a bit
 confusing.

 But it makes sense: showing real allocation at the device level makes
 sense, because that's what is really allocated on disk. Total makes
 some sense, because that's what is being used from the trees by BTRFS.

 Yes, me too. At first I was confused when you noticed this discrepancy,
 so I have to admit that it is not so obvious to understand. However, we
 didn't find any way to make it clearer...

 It still looks confusing at first…
 Could we use "Chunk(s) capacity" instead of total/size? I would like an
 opinion from an English speaker's point of view...
 
 This is easy to fix, here's a mockup:
 
 Metadata,DUP: Size: 1.75GB ×2, Used: 627.84MB ×2
    /dev/dm-0    3.50GB
 
             Data    Metadata  Metadata    System  System
             Single  Single    DUP         Single  DUP          Unallocated

 /dev/dm-16  1.31TB  8.00MB    56.00GB     4.00MB  16.00MB      0.00
             ======  ========  ==========  ======  ===========  ===========
 Total       1.31TB  8.00MB    28.00GB ×2  4.00MB  8.00MB ×2    0.00
 Used        1.31TB  0.00      5.65GB ×2   0.00    152.00KB ×2

Nice idea, though I prefer the opposite:


            Data    Metadata  Metadata    System  System
            Single  Single    DUP         Single  DUP        Unallocated

/dev/dm-16  1.31TB  8.00MB    28.00GB x2  4.00MB  8.00MB x2  0.00
            ======  ========  ==========  ======  =========  ===========
Total       1.31TB  8.00MB    28.00GB     4.00MB  8.00MB     0.00
Used        1.31TB  0.00      5.65GB      0.00    152.00KB


However, what will your solution become when RAID5/RAID6 arrive? Hmm,
maybe the solution is simpler: the x2 factor is applied only to the DUP
profile; the other profiles span different disks.

As another option, we can add a field/line which reports the RAID factor:


Metadata,DUP: Size: 1.75GB, Used: 627.84MB, Raid factor: 2x
   /dev/dm-0    3.50GB


             Data    Metadata  Metadata  System  System
             Single  Single    DUP       Single  DUP       Unallocated

/dev/dm-16   1.31TB  8.00MB    56.00GB   4.00MB  16.00MB   0.00
             ======  ========  ========  ======  ========  ===========
Raid factor  -       -         x2        -       x2        -
Total        1.31TB  8.00MB    28.00GB   4.00MB  8.00MB    0.00
Used         1.31TB  0.00      5.65GB    0.00    152.00KB
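
Computing that factor per profile is trivial; a sketch (the flag values
mirror the BTRFS_BLOCK_GROUP_* constants from ctree.h, and RAID5/6 are
left out here because their factor depends on the stripe width):

/* Sketch: disk-bytes-per-logical-byte factor of a chunk profile. */
#include <stdint.h>

#define BLOCK_GROUP_RAID0   (1ULL << 3)  /* as BTRFS_BLOCK_GROUP_RAID0 */
#define BLOCK_GROUP_RAID1   (1ULL << 4)
#define BLOCK_GROUP_DUP     (1ULL << 5)
#define BLOCK_GROUP_RAID10  (1ULL << 6)

static double raid_factor(uint64_t flags)
{
        if (flags & (BLOCK_GROUP_DUP | BLOCK_GROUP_RAID1 |
                     BLOCK_GROUP_RAID10))
                return 2.0;     /* two copies of every byte */
        return 1.0;             /* Single and RAID0: one copy */
}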





 
 Also, I don't know if you could use libblkid, but it finds more 
 descriptive names than dm-NN (thanks to some smart sorting logic).

I don't think it would be impossible to use libblkid; however, it
would be difficult to find space for longer device names.

 
 
 


-- 
gpg @keyserver.linux.it: Goffredo Baroncelli (kreijackATinwind.it)
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5


Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-02 Thread Gabriel
On Fri, 02 Nov 2012 20:31:56 +0100, Goffredo Baroncelli wrote:

 On 11/02/2012 08:05 PM, Gabriel wrote:
 On Fri, 02 Nov 2012 13:02:32 +0100, Goffredo Baroncelli wrote:
 On 2012-11-02 12:18, Martin Steigerwald wrote:
 Metadata, DUP is displayed as 3.50GB on the device level and as
 1.75GB in total. I understand the logic behind this, but this could
 be a bit confusing.

 But it makes sense: showing real allocation at the device level makes
 sense, because that's what is really allocated on disk. Total makes
 some sense, because that's what is being used from the trees by BTRFS.

 Yes, me too. At first I was confused when you noticed this
 discrepancy, so I have to admit that it is not so obvious to
 understand. However, we didn't find any way to make it clearer...

 It still looks confusing at first…
 Could we use "Chunk(s) capacity" instead of total/size? I would like
 an opinion from an English speaker's point of view...
 
 This is easy to fix, here's a mockup:
 
 Metadata,DUP: Size: 1.75GB ×2, Used: 627.84MB ×2
    /dev/dm-0    3.50GB
 
             Data    Metadata  Metadata    System  System
             Single  Single    DUP         Single  DUP          Unallocated

 /dev/dm-16  1.31TB  8.00MB    56.00GB     4.00MB  16.00MB      0.00
             ======  ========  ==========  ======  ===========  ===========
 Total       1.31TB  8.00MB    28.00GB ×2  4.00MB  8.00MB ×2    0.00
 Used        1.31TB  0.00      5.65GB ×2   0.00    152.00KB ×2
 
 Nice idea, though I prefer the opposite:
 
 
             Data    Metadata  Metadata    System  System
             Single  Single    DUP         Single  DUP        Unallocated

 /dev/dm-16  1.31TB  8.00MB    28.00GB x2  4.00MB  8.00MB x2  0.00
             ======  ========  ==========  ======  =========  ===========
 Total       1.31TB  8.00MB    28.00GB     4.00MB  8.00MB     0.00
 Used        1.31TB  0.00      5.65GB      0.00    152.00KB
 
 
 However, what will your solution become when RAID5/RAID6 arrive? Hmm,
 maybe the solution is simpler: the x2 factor is applied only to the
 DUP profile; the other profiles span different disks.

That problem solved itself :)

 As another option, we can add a field/line which reports the RAID
 factor:
 
 Metadata,DUP: Size: 1.75GB, Used: 627.84MB, Raid factor: 2x
    /dev/dm-0    3.50GB
 
 
              Data    Metadata  Metadata  System  System
              Single  Single    DUP       Single  DUP       Unallocated

 /dev/dm-16   1.31TB  8.00MB    56.00GB   4.00MB  16.00MB   0.00
              ======  ========  ========  ======  ========  ===========
 Raid factor  -       -         x2        -       x2        -
 Total        1.31TB  8.00MB    28.00GB   4.00MB  8.00MB    0.00
 Used         1.31TB  0.00      5.65GB    0.00    152.00KB

All fine options. Though if you remove the ×2 on the totals line,
you should compute it instead (it looks like a tally, both sides
of the == line should be equal).

Now that I've started bikeshedding, here is something that I would
find pretty much ideal:

             Data     Metadata             System              Unallocated

VolGroup/Btrfs
  Reserved   1.31TB   8.00MB + 2×28.00MB   16.00MB + 2×4.00MB  -
  Used       1.31TB   2× 5.65GB            2×152.00KB          -
             =======  ===================  ==================  ===========
Total
  Reserved   1.31TB   56.00GB              24.00MB             -
  Used       1.31TB   11.30GB              304.00KB            -
  Free       12.34GB  44.70GB              23.70MB             -



 Also, I don't know if you could use libblkid, but it finds more
 descriptive names than dm-NN (thanks to some smart sorting logic).
 
 I don't think that it would be impossible to use libblkid, however
 it would be difficult to find spaces for longer device name

I suggest cutting out the /dev and putting a line break after the
name. The extra info makes it more human-friendly, and the line
break may complicate machine parsing but the non-tabular format is
better at that anyway.



Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-02 Thread Michael Kjörling
On 2 Nov 2012 20:40 +, from g2p.c...@gmail.com (Gabriel):
 Now that I've started bikeshedding, here is something that I would
 find pretty much ideal:
 
              Data     Metadata             System              Unallocated
 
 VolGroup/Btrfs
   Reserved   1.31TB   8.00MB + 2×28.00MB   16.00MB + 2×4.00MB  -
   Used       1.31TB   2× 5.65GB            2×152.00KB          -
              =======  ===================  ==================  ===========
 Total
   Reserved   1.31TB   56.00GB              24.00MB             -
   Used       1.31TB   11.30GB              304.00KB            -
   Free       12.34GB  44.70GB              23.70MB             -

If we can take such liberties, then why bother with the 2× at all?

Also, I think the B can go, since it's implied by talking about
storage capacities. A lot of tools do this already; look at GNU df -h
and ls -lh for just two examples. That gives you a few extra columns
which can be used to make the table column spacing a little bigger even
in an 80-column terminal.
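
Such a printer is only a few lines; a sketch (illustrative, not the
actual btrfs-progs formatter):

/* Sketch: human-readable sizes without the B suffix. */
#include <stdio.h>
#include <stdint.h>

static void pretty_size(uint64_t bytes, char *buf, size_t len)
{
        static const char *units[] = { "", "K", "M", "G", "T", "P" };
        int i = 0;
        double v = bytes;

        while (v >= 1024 && i < 5) {
                v /= 1024;
                i++;
        }
        snprintf(buf, len, "%.2f%s", v, units[i]);
}

int main(void)
{
        char buf[16];

        pretty_size(3758096384ULL, buf, sizeof(buf));
        printf("%s\n", buf);    /* prints 3.50G */
        return 0;
}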

I'm _guessing_ that you meant for metadata reserved to be 2 × 28 GB and
not 2 × 28 MB, because otherwise the numbers really don't add up.

             Data    Metadata        System          Unallocated

VolGroup/Btrfs
  Reserved   1.31T   8.00M + 28.00G  16.00M + 4.00M  -
   ResRedun  -       28.00G          4.00M           -
  Used       1.31T   5.65G           152.00K         -
   UseRedun  -       5.65G           152.00K         -
             ======  ==============  ==============  ===========
Total
  Reserved   1.31T   56.01G          24.00M          -
  Used       1.31T   11.30G          304.00K         -
  Free       12.34G  44.71G          23.70M          -

This way, the numbers should add up nicely. (Redun for redundancy or
something like that.) 8M + 28G + 28G = 56.01G, 5.65G + 5.65G = 11.30G,
56.01G - 11.30G = 44.71G. I'm not sure you couldn't even work 8.00M +
28.00G into a single 28.01G entry at Reserved/Metadata, with
ResRedun/Metadata 28.00G. That would require some care when the units
are different enough that the difference doesn't show up in the numbers,
though, since then there is nothing to indicate that parts of the
metadata are not stored in a redundant fashion.

If some redundancy scheme (RAID 5?) uses an oddball factor, that can
still easily be expressed in a view like the above simply by displaying
the user data and redundancy data separately, in exactly the same way.

And personally, I feel that a summary view like this, for Data, if an
exact number cannot be calculated, should display the _minimum amount of
available free space_, with free space being _usable by user files_.
If I start copying a 12.0GB file onto the file system exemplified above,
I most assuredly _don't_ want to get a report of "device full" after 10
GB! ("You mating female dog, you told me I had 12.3 GB free, I wrote 10
GB and now you're saying there's NO free space?! To hell with this, I'm
switching to Windows!") That also saves this tool from having to take
into account possible compression ratios for when file system level
compression is enabled, savings from possible deduplication of data, etc
etc. Of course it also means that the amount of free space may shrink by
less than the size of the added data, but hey, that's a nice bonus if
your disk grows bigger as you add more data to it. :-)


 I suggest cutting out the /dev and putting a line break after the
 name. The extra info makes it more human-friendly, and the line
 break may complicate machine parsing but the non-tabular format is
 better at that anyway.

That might work well for anything under /dev, but what about things that
aren't? And I stand by my earlier position that the tabular data
shouldn't be machine-parsed anyway. As you say, the non-tabular format
is better for that.

-- 
Michael Kjörling • http://michael.kjorling.se • mich...@kjorling.se
“People who think they know everything really annoy
those of us who know we don’t.” (Bjarne Stroustrup)


Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-02 Thread Hugo Mills
On Fri, Nov 02, 2012 at 07:05:37PM +, Gabriel wrote:
 On Fri, 02 Nov 2012 13:02:32 +0100, Goffredo Baroncelli wrote:
  On 2012-11-02 12:18, Martin Steigerwald wrote:
  Metadata, DUP is displayed as 3.50GB on the device level and as 1.75GB
  in total. I understand the logic behind this, but this could be a bit
  confusing.
  
  But it makes sense: showing real allocation at the device level makes
  sense, because that's what is really allocated on disk. Total makes
  some sense, because that's what is being used from the trees by BTRFS.
  
  Yes, me too. At first I was confused when you noticed this discrepancy,
  so I have to admit that it is not so obvious to understand. However, we
  didn't find any way to make it clearer...
  
  It still looks confusing at first…
  Could we use "Chunk(s) capacity" instead of total/size? I would like an
  opinion from an English speaker's point of view...
 
 This is easy to fix, here's a mockup:
 
 Metadata,DUP: Size: 1.75GB ×2, Used: 627.84MB ×2
    /dev/dm-0    3.50GB

   I've not considered the full semantics of all this yet -- I'll try
to do that tomorrow. However, I note that the ×2 here could become
non-integer with the RAID-5/6 code (which is due Real Soon Now). In
the first RAID-5/6 code drop, it won't even be simple to calculate
where there are different-sized devices in the filesystem. Putting an
exact figure on that number is potentially going to be awkward. I
think we're going to need kernel help for working out what that number
should be, in the general case.

   Again, I'm raising minor points based on future capabilities, but I
feel it's worth considering them at this stage, even if the correct
answer is "yes, we'll do this now, and deal with any other problems
later".

   Hugo.

             Data    Metadata  Metadata    System  System
             Single  Single    DUP         Single  DUP          Unallocated

 /dev/dm-16  1.31TB  8.00MB    56.00GB     4.00MB  16.00MB      0.00
             ======  ========  ==========  ======  ===========  ===========
 Total       1.31TB  8.00MB    28.00GB ×2  4.00MB  8.00MB ×2    0.00
 Used        1.31TB  0.00      5.65GB ×2   0.00    152.00KB ×2
 
 Also, I don't know if you could use libblkid, but it finds more 
 descriptive names than dm-NN (thanks to some smart sorting logic).
 
 

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- My doctor tells me that I have a malformed public-duty gland, ---  
and a natural deficiency in moral fibre. 




Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-02 Thread Gabriel
On Fri, 02 Nov 2012 22:06:04 +, Hugo Mills wrote:

 On Fri, Nov 02, 2012 at 07:05:37PM +, Gabriel wrote:
 On Fri, 02 Nov 2012 13:02:32 +0100, Goffredo Baroncelli wrote:
  On 2012-11-02 12:18, Martin Steigerwald wrote:
  Metadata, DUP is displayed as 3.50GB on the device level and as 1.75GB
  in total. I understand the logic behind this, but this could be a bit
  confusing.
  
  But it makes sense: showing real allocation at the device level makes
  sense, because that's what is really allocated on disk. Total makes
  some sense, because that's what is being used from the trees by BTRFS.
  
  Yes, me too. At first I was confused when you noticed this discrepancy,
  so I have to admit that it is not so obvious to understand. However, we
  didn't find any way to make it clearer...
  
  It still looks confusing at first…
  Could we use "Chunk(s) capacity" instead of total/size? I would like an
  opinion from an English speaker's point of view...
 
 This is easy to fix, here's a mockup:
 
 Metadata,DUP: Size: 1.75GB ×2, Used: 627.84MB ×2
    /dev/dm-0    3.50GB
 
I've not considered the full semantics of all this yet -- I'll try
 to do that tomorrow. However, I note that the ×2 here could become
 non-integer with the RAID-5/6 code (which is due Real Soon Now). In
 the first RAID-5/6 code drop, it won't even be simple to calculate
 where there are different-sized devices in the filesystem. Putting an
 exact figure on that number is potentially going to be awkward. I
 think we're going to need kernel help for working out what that number
 should be, in the general case.

DUP can be nested below a device because it represents same-device
redundancy (purpose: survive smudges but not device failure).

On the other hand, RAID levels should occupy the same space on all
linked devices (a necessary consequence of the guarantee that RAID5
can survive the loss of any device and RAID6 any two devices).

The two probably won't need to be represented at the same time
except during a reshape, because I imagine DUP gets converted to
RAID (1 or 5) as soon as the second device is added.

A 1→2 reshape would look a bit like this (doing only the data column
and skipping totals):

InitialDevice
  Reserved   1.21TB
  Used   1.21TB
RAID1(InitialDevice, SecondDevice)
  Reserved   1.31TB + 100GB
  Used 2× 100GB

RAID5, RAID6: same with fractions, n+1⁄n and n+2⁄n.
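
In code those fractions are a one-liner; a sketch assuming a fixed
stripe width, which (as Hugo points out below) is not guaranteed:

/* Sketch: stored-bytes-per-data-byte for a striped profile with n
 * data stripes and p parity stripes (RAID5: p = 1, RAID6: p = 2). */
static double stripe_multiplier(int n, int p)
{
        return (double)(n + p) / n;  /* RAID5, 3 devices: 3/2 = 1.5 */
}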

Again, I'm raising minor points based on future capabilities, but I
 feel it's worth considering them at this stage, even if the correct
 answer is "yes, we'll do this now, and deal with any other problems
 later".
 
Hugo.
 
             Data    Metadata  Metadata    System  System
             Single  Single    DUP         Single  DUP          Unallocated

 /dev/dm-16  1.31TB  8.00MB    56.00GB     4.00MB  16.00MB      0.00
             ======  ========  ==========  ======  ===========  ===========
 Total       1.31TB  8.00MB    28.00GB ×2  4.00MB  8.00MB ×2    0.00
 Used        1.31TB  0.00      5.65GB ×2   0.00    152.00KB ×2
 
 Also, I don't know if you could use libblkid, but it finds more 
 descriptive names than dm-NN (thanks to some smart sorting logic).
 





Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-02 Thread Gabriel
On Fri, 02 Nov 2012 21:46:35 +, Michael Kjörling wrote:
 On 2 Nov 2012 20:40 +, from g2p.c...@gmail.com (Gabriel):
 Now that I've started bikeshedding, here is something that I would
 find pretty much ideal:
 
              Data     Metadata             System              Unallocated
 
 VolGroup/Btrfs
   Reserved   1.31TB   8.00MB + 2×28.00GB   16.00MB + 2×4.00MB  -
   Used       1.31TB   2× 5.65GB            2×152.00KB          -
              =======  ===================  ==================  ===========
 Total
   Reserved   1.31TB   56.00GB              24.00MB             -
   Used       1.31TB   11.30GB              304.00KB            -
   Free       12.34GB  44.70GB              23.70MB             -
 
 If we can take such liberties, then why bother with the 2× at all?

It does save a line.

 Also, I think the B can go, since it's implied by talking about
 storage capacities. A lot of tools do this already; look at GNU df -h
 and ls -lh for just two examples. That gives you a few extra columns
 which can be used to make the table column spacing a little bigger even
 in an 80-column terminal.

Good idea.

 I'm _guessing_ that you meant for metadata reserved to be 2 × 28 GB and
 not 2 × 28 MB, because otherwise the numbers really don't add up.

Feh, that's just a typo from when I swapped the 8.00M to the left.

               Data    Metadata        System          Unallocated
 
  VolGroup/Btrfs
    Reserved   1.31T   8.00M + 28.00G  16.00M + 4.00M  -
     ResRedun  -       28.00G          4.00M           -
    Used       1.31T   5.65G           152.00K         -
     UseRedun  -       5.65G           152.00K         -
               ======  ==============  ==============  ===========
  Total
    Reserved   1.31T   56.01G          24.00M          -
    Used       1.31T   11.30G          304.00K         -
    Free       12.34G  44.71G          23.70M          -
 
 This way, the numbers should add up nicely. (Redun for redundancy or
 something like that.) 8M + 28G + 28G = 56.01G, 5.65G + 5.65G = 11.30G,
 56.01G - 11.30G = 44.71G. I'm not sure you couldn't even work 8.00M +
 28.00G into a single 28.01G entry at Reserved/Metadata, with
 ResRedun/Metadata 28.00G. That would require some care when the units
 are different enough that the difference doesn't show up in the numbers,
 though, since then there is nothing to indicate that parts of the
 metadata are not stored in a redundant fashion.

I tried to work out DUP vs RAID redundancy in my message to Hugo.

 If some redundancy scheme (RAID 5?) uses an oddball factor, that can
 still easily be expressed in a view like the above simply by displaying
 the user data and redundancy data separately, in exactly the same way.
 
 And personally, I feel that a summary view like this, for Data, if an
 exact number cannot be calculated, should display the _minimum amount of
 available free space_, with free space being _usable by user files_.
 If I start copying a 12.0GB file onto the file system exemplified above,
 I most assuredly _don't_ want to get a report of "device full" after 10
 GB! ("You mating female dog, you told me I had 12.3 GB free, I wrote 10
 GB and now you're saying there's NO free space?! To hell with this, I'm
 switching to Windows!") That also saves this tool from having to take
 into account possible compression ratios for when file system level
 compression is enabled, savings from possible deduplication of data, etc
 etc. Of course it also means that the amount of free space may shrink by
 less than the size of the added data, but hey, that's a nice bonus if
 your disk grows bigger as you add more data to it. :-)

I think we can guarantee minimum amounts of free space, as long as
data/metadata/system are segregated properly?
OK, reshapes complicate this. For those we could take the worst
case between now and the completed reshape.
Or maybe add a second tally:

devices
===
total
 reserved
 used
 free
===
anticipated (reshaped 8% eta 3:12)
 reserved
 used
 free

 I suggest cutting out the /dev and putting a line break after the
 name. The extra info makes it more human-friendly, and the line
 break may complicate machine parsing but the non-tabular format is
 better at that anyway.
 
 That might work well for anything under /dev, but what about things that
 aren't?

Absolute path for those, assuming it ever happens.

 And I stand by my earlier position that the tabular data
 shouldn't be machine-parsed anyway. As you say, the non-tabular format
 is better for that.



Re: [PATCH][BTRFS-PROGS] Enhance btrfs fi df

2012-11-02 Thread Hugo Mills
On Fri, Nov 02, 2012 at 11:23:14PM +, Gabriel wrote:
 On Fri, 02 Nov 2012 22:06:04 +, Hugo Mills wrote:
 
  On Fri, Nov 02, 2012 at 07:05:37PM +, Gabriel wrote:
  On Fri, 02 Nov 2012 13:02:32 +0100, Goffredo Baroncelli wrote:
   On 2012-11-02 12:18, Martin Steigerwald wrote:
   Metadata, DUP is displayed as 3.50GB on the device level and as 1.75GB
   in total. I understand the logic behind this, but this could be a bit
   confusing.
   
   But it makes sense: showing real allocation at the device level makes
   sense, because that's what is really allocated on disk. Total makes
   some sense, because that's what is being used from the trees by BTRFS.
   
   Yes, me too. At first I was confused when you noticed this discrepancy,
   so I have to admit that it is not so obvious to understand. However, we
   didn't find any way to make it clearer...
   
   It still looks confusing at first…
   Could we use "Chunk(s) capacity" instead of total/size? I would like an
   opinion from an English speaker's point of view...
  
  This is easy to fix, here's a mockup:
  
  Metadata,DUP: Size: 1.75GB ×2, Used: 627.84MB ×2
     /dev/dm-0    3.50GB
  
 I've not considered the full semantics of all this yet -- I'll try
  to do that tomorrow. However, I note that the ×2 here could become
  non-integer with the RAID-5/6 code (which is due Real Soon Now). In
  the first RAID-5/6 code drop, it won't even be simple to calculate
  where there are different-sized devices in the filesystem. Putting an
  exact figure on that number is potentially going to be awkward. I
  think we're going to need kernel help for working out what that number
  should be, in the general case.
 
 DUP can be nested below a device because it represents same-device
 redundancy (purpose: survive smudges but not device failure).
 
 On the other hand, RAID levels should occupy the same space on all
 linked devices (a necessary consequence of the guarantee that RAID5
 can survive the loss of any device and RAID6 any two devices).

   No, the multiplier here is variable. Consider:

1 MiB stored in RAID-5 across 3 devices takes up 1.5 MiB -- multiplier ×1.5
   (1 MiB over 2 devices is 512 KiB, plus an additional 512 KiB for parity)
1 MiB stored in RAID-5 across 6 devices takes up 1.2 MiB -- multiplier ×1.2
   (1 MiB over 5 devices is 204.8 KiB, plus an additional 204.8 KiB for parity)

   With the (initial) proposed implementation of RAID-5, the
stripe-width (i.e. the number of devices used for any given chunk
allocation) will be *as many as can be allocated*. Chris confirmed
this today on IRC. So if I have a disk array of 2T, 2T, 2T, 1T, 1T,
1T, then the first 1T of allocation will stripe across 6 devices,
giving me 5 data+1 parity, or a multiplier of ×1.2. As soon as the
smaller devices are full, the stripe width will drop to 3 devices, and
we'll be using 2 data+1 parity allocation, or a multiplier of ×1.5 for
any subsequent chunks. So, as more data beyond the first 5T is stored,
the overall multiplier steadily increases, until we fill the FS, and we
get a multiplier of about ×1.29 overall (9T of raw space holding 7T of
data). This gets more complicated if you have
devices of many different sizes. (Imagine 6 disks with sizes 500G, 1T,
1.5T, 2T, 3T, 3T).

   We probably can work out the current RAID overhead and feed it back
sensibly, but it's (a) not constant as the allocation of the chunks
increases, and (b) not trivial to compute.
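
   To illustrate, here is a rough simulation of that scheme (my reading
of the widest-stripe-first allocation described above, not the kernel
allocator) for the 2T,2T,2T,1T,1T,1T array; it reproduces the ×1.2 and
×1.5 phases and the ≈×1.29 overall figure:

/* Sketch: greedy widest-stripe RAID5 allocation over a device set. */
#include <stdio.h>

int main(void)
{
        double free_tb[] = { 2, 2, 2, 1, 1, 1 };
        int ndev = 6;
        double raw = 0, data = 0;

        for (;;) {
                int width = 0;
                double min_free = 0;

                /* Stripe across every device that still has space. */
                for (int i = 0; i < ndev; i++)
                        if (free_tb[i] > 0) {
                                if (width == 0 || free_tb[i] < min_free)
                                        min_free = free_tb[i];
                                width++;
                        }
                if (width < 2)  /* need at least 1 data + 1 parity */
                        break;

                /* Fill until the smallest stripe member is full:
                 * width-1 data stripes plus 1 parity stripe. */
                for (int i = 0; i < ndev; i++)
                        if (free_tb[i] > 0)
                                free_tb[i] -= min_free;
                raw  += width * min_free;
                data += (width - 1) * min_free;
                printf("stripe width %d: multiplier x%.2f\n",
                       width, (double)width / (width - 1));
        }
        printf("raw %.1fT, data %.1fT, overall x%.2f\n",
               raw, data, raw / data);
        return 0;
}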

 The two probably won't need to be represented at the same time
 except during a reshape, because I imagine DUP gets converted to
 RAID (1 or 5) as soon as the second device is added.
 
 A 1→2 reshape would look a bit like this (doing only the data column
 and skipping totals):
 
 InitialDevice
   Reserved   1.21TB
   Used   1.21TB
 RAID1(InitialDevice, SecondDevice)
   Reserved   1.31TB + 100GB
   Used 2× 100GB
 
 RAID5, RAID6: same with fractions, n+1⁄n and n+2⁄n.

   Except that n isn't guaranteed to be constant. That was pretty much
my only point. Don't assume that it will be (or at the very least, be
aware that you are assuming it is, and be prepared for inconsistencies).

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
--- Well, sir, the floor is yours.  But remember, the ---
  roof is ours!  

