On Tue, 20 Jun 2006, Kern Sibbald wrote:

>>> Well, it really should not have much trouble backing them up or restoring
>>> them, though the restore may be a bit slower when creating so many files
>>> in one directory -- this is really OS and memory size dependent.
>>
>> It's the restore tree build times that I am worried about; however, on
>> GFS even running a directory listing stops everything for several minutes.
>
> Uh, exactly what do you mean by running a directory listing?  On the OS, or a
> dir command within the Bacula tree restore routines?   Once the tree is
> built, operations should be relatively fast.

On the OS - as in "ls -U"

GFS is more-or-less ext3-based (non-GPL, but same roots), and as with most 
filesystems, large directories cause major headaches.
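
For reference, it is the plain enumeration itself that stalls -- no sorting, 
no per-file stat(). A minimal Python sketch of that kind of scan (roughly 
what "ls -U" does); "/gfs/bigdir" is a made-up placeholder for the problem 
directory:

import os
import time

# Count entries in one directory without stat()ing or sorting them --
# roughly the work "ls -U" has to do.  "/gfs/bigdir" is hypothetical.
def count_entries(path):
    start = time.time()
    n = 0
    with os.scandir(path) as it:
        for _ in it:
            n += 1
    return n, time.time() - start

if __name__ == "__main__":
    n, elapsed = count_entries("/gfs/bigdir")
    print("%d entries in %.1fs" % (n, elapsed))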

>>> On the other hand, building the Bacula in-memory directory tree using the
>>> "restore" command could be *really* slow, because inserting each new file
>>> into the in-memory tree goes something like O(n^2).
>>
>> Thanks.
>>
>> I have users with other filesystems with upwards of 6 million small files
>> in them; however, these don't have large flat directories...
>
> Are you able to do a Bacula tree restore with 6 million files?

Yes, and have done so a few times during tests.
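
On the O(n^2) tree build mentioned above: the cost pattern is easy to see in 
a toy sketch. This is not Bacula's actual code, just an illustration of 
linear-scan insertion of siblings versus a per-directory hash lookup:

# Toy illustration (not Bacula's restore code) of why building an
# in-memory tree can be ~O(n^2): if every insert does a linear scan of
# the siblings already in a directory node, n files in one flat
# directory cost ~n^2/2 comparisons.  A per-directory hash map keeps
# the same build roughly O(n).

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []      # linear-scan version
        self.child_map = {}     # hash-lookup version

def insert_linear(parent, name):
    # O(k) scan of k existing siblings -> O(n^2) total for n siblings
    for child in parent.children:
        if child.name == name:
            return child
    node = Node(name)
    parent.children.append(node)
    return node

def insert_hashed(parent, name):
    # O(1) average lookup -> roughly O(n) total
    node = parent.child_map.get(name)
    if node is None:
        node = Node(name)
        parent.child_map[name] = node
    return node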

AB



