Hi Alvin,

On a 2TB dataset with roughly 600k files, I have piped tree to less
with limited joy. It took a few hours, but at least I could then
search for what I was looking for... 15TB and 100M files is another
animal though, and since disk I/O will be your bottleneck, anything
will take a long time, no?
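For reference, roughly what I did (the path here is made up, and the
-a/-f/-i flags are only there to get a flat, searchable listing):

    # all files, full paths, no indentation -> easy to search in less
    tree -afi /srv/data | less

    # or dump the listing to a file once, then grep that instead of
    # re-walking the tree every time
    tree -afi /srv/data > /tmp/listing.txt

Dumping to a file once and grepping that afterwards is probably less
painful than re-walking the tree for every lookup.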

Now, for my own info/interest, can you tell me which filesystem is
used for this? ext3?
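(If it's not obvious from /etc/fstab, something like the following on
the mount point would show it; the path is just a placeholder:)

    # report the filesystem type behind that mount
    df -T /srv/data

    # or, more directly:
    findmnt -no FSTYPE /srv/data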

On Mon, 4 May 2020 09:55:51 -0400
Alvin Starr via talk <talk@gtalug.org> wrote:
> I am hoping someone has seen this kind of problem before and knows
> of a solution.
> I have a client who has file systems filled with lots of small files,
> on the order of hundreds of millions of files.
> Running something like a find on the filesystem takes the better part
> of a week, so any kind of directory-walking backup tool will take
> even longer to run.
> The actual data size for 100M files is on the order of 15TB, so
> there is a lot of data to back up, but the data only increases on
> the order of tens to hundreds of MB a day.
> 
> 
> Even things like xfsdump take a long time.
> For example, I tried xfsdump on a 50M file set and it took over 2
> days to complete.
> 
> The only thing that seems to be workable is Veeam.
> It will run an incremental volume snapshot in a few hours each night,
> but I dislike adding proprietary kernel modules to the systems.
> 
> 
