Hi Gene,

----- Original Message -----
> From: "Gene Heskett" 
> To: "amanda-users" 
> Sent: Wednesday, November 14, 2018 8:40:36 AM
> Subject: Re: Monitor and Manage

>>
>> So, let's suppose I fire up all three amdumps at once:
>>
> Don't do that, instead wrap them in a bash script that will run all of
> them sequentially. And run the script with the backup user's crontab if
> you want it automatic, which IMO it should be.
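For reference, the wrapper you describe could be as small as this (a sketch only -- "client1/2/3" are placeholder config names standing in for my three setups):

```shell
#!/bin/sh
# Run the three Amanda configs one after another instead of in
# parallel.  "client1" etc. are placeholder config names.
run_amdumps() {
    for cfg in client1 client2 client3; do
        amdump "$cfg"
        status=$?
        # Log a failure but keep going, so one bad config does not
        # block the remaining backups.
        if [ "$status" -ne 0 ]; then
            echo "amdump $cfg exited with status $status" >&2
        fi
    done
}

run_amdumps
```

Installed somewhere in the backup user's path and invoked from that user's crontab, it serializes the runs with no manual intervention.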

I am backing up three clients to three separate vtape locations, which I think 
means three different vtape pools. Superficially, there is no problem with 
this, meaning the backups run to completion and collect the expected amount of 
data. Since each amdump instance contacts its own client and stores its backup 
in its own location, I don't foresee any cross-contamination or confusion, 
though I confess I have not analyzed the backups to prove this -- yet. The 
server is the only component burdened by the concurrent amdump runs, and I 
don't see that demand being reduced by serializing the jobs.

It is true that this is fragile, meaning a future configuration change could 
break this independence, but the worst case is that concurrent non-zero-level 
dumps have a race condition: you're never sure which backup has the most 
recent copy of an overlapping file. The Linux clients run under xinetd, so 
each connection gets its own process, and even backing up overlapping DLEs 
will work -- with the same caveat about not knowing which backup holds the 
most recent version of a file. On the Windows clients, I don't know enough 
about ZWCService to say whether it forks child processes or is otherwise safe 
under concurrency, but assuming it behaves like the Linux client, it suffers 
from the same problem of not knowing where the most recent copy of an 
overlapping file might be.

Is there another reason that I don't want to run my backups simultaneously?

Thanks for the help, Gene.
-- 
Chris. 

V:916.974.0424 
F:916.974.0428
