>...  I think it times out because there is over
>100,000 files and it can't create the file list.  ...

Does the whole file system have 100,000 files, or just /raid1/production?
Using GNU tar, it would only matter what's under /raid1/production.

>The "production"
>directory isn't even that big, it's only around 8 GB.

Size doesn't matter much for estimates.  A stat() of a 10 KByte file
takes the same amount of time as one of a 10 GByte file; it's the file
count, not the byte count, that drives the estimate time.
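Since estimate time tracks the file count, counting the files under the
backup target gives a rough prediction.  A minimal sketch (demonstrated
on a scratch directory so it runs anywhere; point the same find at
/raid1/production to count the real DLE):

```shell
# The estimate phase stats every file, so the file count -- not
# the total size -- predicts how long it takes.  Demo directory:
dir=$(mktemp -d)
touch "$dir/one" "$dir/two" "$dir/three"
# -xdev keeps the count to one file system, like GNU tar's
# --one-file-system option that Amanda uses.
count=$(find "$dir" -xdev -type f | wc -l | tr -d ' ')
echo "$count files to stat"
rm -rf "$dir"
```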

>I increased the timeout period in the amanda.conf file which I thought
>would help, but it didn't.  ...

Which timeout variable did you adjust?  It should have been "etimeout".
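If I remember right, etimeout is counted per disk (DLE) on a client,
with a default of 300 seconds, and a negative value is taken as the
total for all disks on that client.  Something along these lines in
amanda.conf (the 1800 is just an illustrative value):

```
# amanda.conf -- give the planner more time to collect estimates.
# etimeout is per disk on a client; default is 300 seconds.
# A negative value is treated as the total for the whole client.
etimeout 1800
```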

>When I manually ran amdump, the gtar process
>list took up like 60 to 70 percent of the CPU usage.  I took a look 20
>minutes later and it was still running but the CPU usage was only 0.4
>percent so I just killed it.  I don't think it would've ever finished.

I don't understand.  If the CPU usage dropped that much, GNU tar may
well have finished, or it may have been blocked waiting on I/O.  Did
you run "ps" to see what state the process was in?

You might also take a look at /tmp/amanda/sendsize*debug.
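Two quick checks while an estimate looks hung, sketched below (the
process names assume GNU tar shows up as "gtar" in ps output on your
system):

```shell
# Is the sendsize/gtar pair still alive?  The bracketed first
# letter keeps the pattern from matching this command itself.
pids=$(ps auxww | awk '/[s]endsize|[g]tar/ {print $2}' | tr '\n' ' ')
echo "matching pids: ${pids:-none}"
# The newest estimate debug file logs each step sendsize took
# and any error it hit:
ls -t /tmp/amanda/sendsize*debug 2>/dev/null | head -1
```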

>Before amanda, I used dump to do level zero's every night on the
>/raid1/production directory which worked well.  Can I do that with
>amanda, I don't mind getting a level 0 backup everyday.

I don't think you can do this.  Amanda will see that you are using "dump"
and convert the mount point to a disk name, then pass that to "dump"
which will do the whole file system.

You could probably do the infamous GNU tar wrapper hack from:

  ftp://gandalf.cc.purdue.edu/pub/amanda/gtar-wrapper*

only in your case you'd detect when the request was for /raid1/production
and run dump instead.  You might also set this up with its own dumptype
and configure it with "dumpcycle 0" so Amanda understands it is going
to do a full dump every time.
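The detection step could look something like the sketch below.  This is
not taken from the actual gtar-wrapper script -- the paths and argument
patterns are assumptions -- and it only reports which program would
handle a given argument list; a real wrapper would exec that program
instead of echoing its name.

```shell
# Hypothetical dispatch logic for the wrapper: hand requests for
# /raid1/production to dump, everything else to the real GNU tar.
pick_cmd() {
  case "$*" in
    */raid1/production*) echo /sbin/dump ;;           # assumed path
    *)                   echo /usr/local/bin/tar ;;   # assumed path
  esac
}
cmd=$(pick_cmd --create --file - --directory /raid1/production .)
echo "would run: $cmd"
```

Paired with its own dumptype carrying "dumpcycle 0", Amanda would then
plan a full dump of that entry every run.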

>Ajay

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
