Pjotr Prins <pjotr.publi...@thebird.nl> skribis:

> On Fri, Feb 09, 2018 at 01:11:23PM +0100, Ricardo Wurmus wrote:


>> I don’t know about scalability.  This number is still well below the
>> limits of ext4 file systems, but accessing a big directory listing like
>> that can be slow.  I would feel a little better about this if we split
>> it up into different prefix directories (like it’s done for browser
>> caches).  I don’t think it’s necessary, though.
> For ext4 it is going to be an issue. Anyway, we'll see what happens.

In practice, when the maximum number of links is reached, we simply
transparently skip deduplication.  See this commit:

  commit 12b6c951cf5ca6055a22a2eec85665353f5510e5
  Author: Ludovic Courtès <l...@gnu.org>
  Date:   Fri Oct 28 20:34:15 2016 +0200

      daemon: Do not error out when deduplication fails due to ENOSPC.

      This solves a problem whereby if /gnu/store/.links had enough entries,
      ext4's directory index would be full, leading to link(2) returning
      ENOSPC.

      * nix/libstore/optimise-store.cc (LocalStore::optimisePath_): Upon
      ENOSPC from link(2), print a message and return instead of throwing an
      exception.
So in practice it scales well, and this fallback has been in place
“forever”.
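The fallback can be sketched roughly as follows. This is a hypothetical,
simplified illustration in C++, not the daemon's actual code: the function
name try_deduplicate and the paths are made up, and the real
optimisePath_ does more (it also handles the case where the .links entry
already exists by replacing the file with a hard link to it).

```cpp
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <unistd.h>

// Hypothetical sketch: try to register `path` under the store's
// .links directory by hard-linking it to `links_entry`.  If link(2)
// fails with ENOSPC (directory index full) or EMLINK (link count
// exhausted), print a message and skip deduplication transparently
// instead of failing.
bool try_deduplicate(const char *links_entry, const char *path) {
    if (link(path, links_entry) == 0)
        return true;   // entry created; `path` is now deduplicated

    if (errno == ENOSPC || errno == EMLINK) {
        std::fprintf(stderr,
                     "cannot link '%s' to '%s': %s; skipping deduplication\n",
                     path, links_entry, std::strerror(errno));
        return false;  // store path is kept as-is; no error is raised
    }

    // Other errors (e.g. EEXIST when the entry already exists) are
    // handled differently in the real daemon; this sketch simply
    // reports that no deduplication happened.
    return false;
}
```

The point of the design is that deduplication is an optimization, not a
correctness requirement, so hitting the file system's limits degrades
into slightly higher disk usage rather than a failed build.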

If you’re wondering how much gets deduplicated, see

