On Wed, Feb 24, 2010 at 11:09 PM, Bob Friesenhahn
<bfrie...@simple.dallas.tx.us> wrote:
> On Wed, 24 Feb 2010, Steve wrote:
>>
>> The overhead I was thinking of was more in the pointer structures
>> (bearing in mind this is a 128-bit file system). I would guess that memory
>> requirements would be HUGE for all these files... otherwise the ARC is going
>> to struggle and the paging system is going to go mental, no?
>
> It is not reasonable to assume that zfs has to retain everything in memory.

At the same time, 400M files in a single directory is likely to cause a lot
of contention on the locks taken during lookups. Spreading the files across a
reasonable number of directories could mitigate this; a hash-based fan-out,
sketched below, is one simple way to do it.
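
Purely as a sketch (the pool path /tank/data, the fanout of 1024, and the
choice of MD5 are my own assumptions for illustration, not anything measured),
hashing each name into a fixed set of bucket directories keeps any single
directory down to roughly total/fanout entries, i.e. around 400K files per
directory instead of 400M in one:

    import hashlib
    import os

    def bucket_path(root, name, fanout=1024):
        # A stable hash of the file name selects one of `fanout` subdirectories,
        # so lookups and creates are spread across many smaller directories.
        digest = hashlib.md5(name.encode("utf-8")).hexdigest()
        bucket = int(digest[:8], 16) % fanout
        subdir = os.path.join(root, "%04x" % bucket)
        os.makedirs(subdir, exist_ok=True)  # create the bucket on first use
        return os.path.join(subdir, name)

    # e.g. open(bucket_path("/tank/data", "object-123456.dat"), "wb")

The same two-level layout is just as easy to reproduce in C or a shell script
if Python isn't part of the workload.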

Regards,
Andrey


>
> I have a directory here containing a million files and it has not caused any
> strain for zfs at all, although it can cause considerable stress on
> applications.
>
> 400 million tiny files is quite a lot, and I would hate to use anything but
> mirrors with that many tiny files.
>
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
