On Tuesday 21 June 2005 13:12, Sebastian Stark wrote:
> On Tuesday 21 June 2005 12:13, Kern Sibbald wrote:
> > On Tuesday 21 June 2005 10:31, Sebastian Stark wrote:
> > > Is there a way to speed up the creation of the directory tree when
> > > restoring files? For some clients this takes more than an hour for us.
> > >
> > > Our MySQL catalog has grown quite large (~5G) and I think this is the
> > > reason. But maybe there's another way to speed this up other than
> > > splitting up the catalog? Maybe play around with indexes?
> >
> > I suspect that it is more a question of how many files you are trying to
> > load into the tree at one time than an SQL question.  In general, the
> > size of the database is much less important than the number of files
> > backed up per job.
> >
> > You didn't mention how many files/job you have.  If it is more than about
> > 500K, then I can understand the problem.
>
> *estimate job="Backup lech system"
> Connecting to Client lech-fd at lech:9102
> 2000 OK estimate files=38800 bytes=2,149,638,554
>
> The building of the directory tree took about 10 minutes.

No memory tree is built for the estimate command, only for the restore 
command. The two commands cannot be compared in any way.

>
> But there are other jobs that have hundreds of thousands of files, which I
> hope do not affect the restores of the smaller jobs.
>
> > Some ideas:
> > - Split your jobs to keep the files/job smaller.
>
> I don't think that's the problem here.

Perhaps not, but that does not match my experience.
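
For what it's worth, splitting usually just means defining several smaller 
FileSets and one Job per FileSet. Roughly like this (the resource names 
here are only examples, and you would fill in your own Schedule, Storage, 
and Pool):

Job {
  Name = "lech-home"
  Type = Backup
  Client = lech-fd
  FileSet = "Home Dirs"
  Schedule = "WeeklyCycle"
  Storage = File
  Pool = Default
  Messages = Standard
}

FileSet {
  Name = "Home Dirs"
  Include {
    Options {
      signature = MD5
    }
    File = /home
  }
}

A second Job with its own FileSet (say, File = /usr) then keeps the 
files/job count, and hence the restore tree, much smaller for each job.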

>
> > - Find some *really* good algorithm for building a file tree.  The
> > current one is reasonably well optimized, but I suspect it could be
> > better.
>
> Would it be useful to build the directory tree "lazily"? At the beginning,
> when you're at the $ prompt, you would only see the top of the folder
> hierarchy. If you cd into a virtual directory, it would be created on the
> fly. This way the tree that needs to be built in memory is _much_ smaller,
> because after every "cd" you no longer have to care about the other
> sub-trees. I don't know if it's possible with Bacula, though.
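
That might help. Very roughly, and purely as a sketch (this is not how the 
Bacula code is structured, and query_children() below stands in for a 
hypothetical catalog query), lazy expansion could look something like this:

#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Node {
    std::string name;
    bool expanded = false;            /* children not yet fetched */
    std::map<std::string, Node> children;
};

/* Hypothetical stand-in for a catalog query along the lines of
 * "SELECT ... FROM Path WHERE ..."; an assumption, not real Bacula code. */
std::vector<std::string> query_children(const std::string &path)
{
    if (path == "/")     return {"etc", "home"};
    if (path == "/home") return {"sebastian"};
    return {};
}

/* Fetch a directory's children the first time the user cd's into it;
 * later visits reuse the cached children. */
void expand(Node &dir, const std::string &path)
{
    if (dir.expanded) return;
    for (const std::string &child : query_children(path))
        dir.children[child] = Node{child};
    dir.expanded = true;
}

int main()
{
    Node root{"/"};
    expand(root, "/");                       /* only the top level */
    expand(root.children["home"], "/home");  /* built on "cd home" */
    for (const auto &kv : root.children["home"].children)
        std::cout << kv.first << '\n';
    return 0;
}

The catch is that commands like "mark *" or an unrestricted "ls" of the 
whole tree would still force everything to be built, so the win depends 
on the access pattern.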
>
> > - Find some algorithm for keeping the file tree on disk rather than in
> > memory as I suspect this is what costs so much (lots of virtual memory).
> > It would need to be good at caching and paging.
>
> I have 6G of memory on the box where I run this, which hopefully should be
> enough.
>
>
> -Sebastian
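
As for keeping the tree on disk: the core of it would be a bounded page 
cache. Again only a sketch of the idea, not anything in Bacula today (all 
names are hypothetical, and load_from_disk() is simulated):

#include <iostream>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct Page {
    std::vector<std::string> entries;  /* file names in one directory */
};

class PageCache {
    size_t capacity;
    std::list<std::string> lru;        /* most recently used at front */
    std::unordered_map<std::string,
        std::pair<Page, std::list<std::string>::iterator>> pages;

    /* Simulated read of one directory page from a spill file on disk. */
    Page load_from_disk(const std::string &dir)
    {
        return Page{{dir + "/file1", dir + "/file2"}};
    }

public:
    explicit PageCache(size_t cap) : capacity(cap) {}

    const Page &get(const std::string &dir)
    {
        auto it = pages.find(dir);
        if (it != pages.end()) {         /* hit: move to front of LRU */
            lru.splice(lru.begin(), lru, it->second.second);
            return it->second.first;
        }
        if (pages.size() >= capacity) {  /* evict least recently used */
            pages.erase(lru.back());
            lru.pop_back();
        }
        lru.push_front(dir);
        auto ins = pages.emplace(dir,
            std::make_pair(load_from_disk(dir), lru.begin()));
        return ins.first->second.first;
    }
};

int main()
{
    PageCache cache(2);                /* keep only two pages in memory */
    for (const std::string dir : {"/etc", "/home", "/var", "/etc"})
        std::cout << dir << ": " << cache.get(dir).entries.size()
                  << " entries\n";
    return 0;
}

With 6G of RAM you may indeed never need to page, but the per-file memory 
cost adds up quickly once jobs reach millions of files.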

-- 
Best regards,

Kern

  (">
  /\
  V_V

