Hi,

One of the machines I back up with Amanda is used by our students to
run AI, machine learning, etc., each project coming with a huge set of
data. When I know such data is coming, I can exclude the student's
directory, but I am not always aware of it in advance.

When the data size gets greater than the maximum of 770 GB allocated
for a single Amanda run, the dump fails.

Is there a way to tell Amanda to disregard a DLE if the estimated size
is too big?

Is there a way to automatically split one DLE into two when Amanda
detects that it will be too large?
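
As far as I understand, I could split it by hand in the disklist with
include/exclude patterns on a tar-based dumptype, roughly like this
sketch (host, paths and dumptype names are just placeholders), but I
would like Amanda to do it on its own:

  studentbox  /home-a-m  /home {
      comp-user-tar
      include "./[a-m]*"
  }
  studentbox  /home-n-z  /home {
      comp-user-tar
      exclude "./[a-m]*"
  }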

And what about deduplication on the client side? How should I
implement the backup? If two DLEs contain the same file, that file
should exist in a single location on the physical hard disk. But a
backup of each DLE will have its own copy of that file, so the size of
the backup will be bigger than the original disk. Am I right?
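(For instance, a 100 GB data set shared by two DLEs would occupy
100 GB on the deduplicated disk but 200 GB across the two dump images.)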

Best regards,

Olivier
-- 
