On Thu, 06 Feb 2014 20:54:19 +0100
Goffredo Baroncelli <kreij...@libero.it> wrote:

> I agree with you about the need for a solution. However, your patch seems 
> to me even worse than the current code.
> 
> For example, you cannot take into account the mix of data/single and 
> metadata/dup (with the pathological case of small files stored in the 
> metadata chunks), nor different profile levels like raid5/6 (or the future 
> raidNxM).
> And do not forget the compression...

Every estimate should first and foremost be measured by how precise it is, or
in this case by "how many gigabytes it is wrong by". The current code returns a
result that is pretty much always wrong by 2x; after the patch it will be
within gigabytes of the correct value in the most common use case (data raid1,
metadata raid1, and nothing else). Of course that PoC is nowhere near the final
solution. What I can't agree with is "if another option is somewhat better,
but not ideally perfect, then it's worse than the current one", especially
considering the current one is absolutely broken.

> The situation is very complex. I am inclined to use a different approach.
> 
> As you know, btrfs allocates space in chunks. Each chunk has its own ratio 
> between the space occupied on the disk and the space available to the 
> filesystem. For SINGLE the ratio is 1, for DUP/RAID1/RAID10 the ratio is 2, 
> for raid5 the ratio is n/(n-1) (where n is the stripe count), for raid6 
> the ratio is n/(n-2)...
> 
> Because a filesystem could have chunks with different ratios, we can compute 
> a global ratio as the composition of the per-chunk ratios.

> We could further enhance this estimation by also taking into account the 
> total file sizes and the space they consume in the chunks (which could 
> differ due to compression).
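If I follow, the composition you describe would look roughly like the sketch
below. The chunk fields and the numbers are made up for illustration; the real
code would have to walk the block group / space_info data instead.

	/*
	 * Rough sketch of the per-chunk ratio composition described above.
	 * Global ratio = total raw bytes / total usable bytes over all chunks.
	 */
	#include <stdio.h>

	struct chunk {
		unsigned long long raw_bytes;	/* bytes occupied on the devices */
		double ratio;			/* raw bytes per usable byte: 1 for single,
						 * 2 for dup/raid1/raid10,
						 * n/(n-1) for raid5, n/(n-2) for raid6 */
	};

	static double global_ratio(const struct chunk *chunks, int nr)
	{
		unsigned long long raw = 0;
		double usable = 0.0;
		int i;

		for (i = 0; i < nr; i++) {
			raw += chunks[i].raw_bytes;
			usable += chunks[i].raw_bytes / chunks[i].ratio;
		}

		return usable ? raw / usable : 1.0;
	}

	int main(void)
	{
		/* example: one single data chunk, one raid1 metadata chunk */
		struct chunk chunks[] = {
			{ 1ULL << 30, 1.0 },	/* 1 GiB data, single */
			{ 2ULL << 28, 2.0 },	/* 512 MiB raw metadata, raid1 */
		};

		printf("global ratio: %.3f\n",
		       global_ratio(chunks, sizeof(chunks) / sizeof(chunks[0])));
		return 0;
	}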

I wonder what the performance implications of all that would be. I feel a
simpler approach could work.

-- 
With respect,
Roman
