On 07/01/15 18:53, Helmut Grohne wrote:
> Consider Alice. She wants to install foo, which has a good approximation
> for her filesystem. Unfortunately, it is too big to be installed. Thus
> she looks at other packages and determines that she no longer needs bar.
> Duly she issues "apt-get install foo bar-". Unfortunately, this command
> fails unpacking foo as bar's approximation was bad and thus it does not
> free the space advertised in Installed-Size.

If both foo and bar use the same estimation algorithm, which is what
the reproducible builds people want anyway, then the only way I can
think of for this to happen is that Alice's filesystem has
higher-than-estimated overhead when storing foo, but
lower-than-estimated overhead when storing bar. For instance: foo has
many small, high-entropy files that compress poorly; bar has a few
large, low-entropy files that compress well; and Alice's filesystem
has transparent compression but no tail-packing?
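
To put rough (entirely hypothetical) numbers on that: if foo ships
1000 files of 1 KiB each, its estimate is about 1 MiB, but on a
filesystem with 4 KiB blocks and no tail-packing it actually allocates
1000 * 4 KiB = about 4 MiB; if bar ships a single 100 MiB low-entropy
file, its estimate is 100 MiB, but transparent compression might store
it in, say, 30 MiB. Swapping bar for foo then frees only about 26 MiB,
not the roughly 99 MiB the metadata suggests.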

The Installed-Size for foo has to be taken from the apt metadata,
because there is nowhere else to get it; but if bar's installed size
were recomputed during installation to be the space that it actually
turned out to require on this particular filesystem, then making sure
that foo's estimate is more pessimistic than reality would be enough,
I think?
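
Something like the sketch below (purely illustrative, and assuming the
usual dpkg file list at /var/lib/dpkg/info/<package>.list; multi-arch
packages use <package>:<arch>.list, which it ignores) could recompute
that number after unpacking, treating st_blocks as a rough proxy for
what the filesystem actually allocated:

    import os
    import sys

    def actual_installed_size(package):
        """Bytes actually allocated on this filesystem for a package."""
        total = 0
        with open("/var/lib/dpkg/info/%s.list" % package) as file_list:
            for line in file_list:
                path = line.rstrip("\n")
                if not path:
                    continue
                try:
                    st = os.lstat(path)
                except OSError:
                    continue    # missing or diverted file: skip it
                if os.path.isdir(path) and not os.path.islink(path):
                    continue    # directories are shared between packages
                # st_blocks counts 512-byte units as reported by the
                # filesystem; whether that reflects transparent
                # compression depends on the filesystem, so treat the
                # result as an estimate rather than a guarantee.
                total += st.st_blocks * 512
        return total

    if __name__ == "__main__":
        for pkg in sys.argv[1:]:
            print(pkg, actual_installed_size(pkg))

Comparing that number with the Installed-Size field would also show
how far off the estimate is on any given filesystem.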

However, considering that one of the candidates for "filesystem of the
future" has a FAQ entry on why free space is so complicated,
<https://btrfs.wiki.kernel.org/index.php/FAQ#Why_is_free_space_so_complicated.3F>,
perhaps any attempt to detect over-large installations before they
happen with 100% reliability is already doomed to failure, and we
should just do something simple and "not horribly wrong".

    S

