hi,

I have configured the cloud plugin in our Bacula Community installation to back up to AWS S3.

I set the option Truncate Cache = AfterUpload in the Pool. This way only one
local file, named part.1, is kept.
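As a minimal sketch, the Pool resource described above might look like this. The directive name comes from this message; the Pool name and the other directives are placeholders, not taken from our actual configuration:

```conf
# Hypothetical Pool resource; only "Truncate Cache = AfterUpload"
# reflects the setting mentioned above.
Pool {
  Name = AWS-Cloud-Pool          # placeholder name
  Pool Type = Backup
  Truncate Cache = AfterUpload   # truncate the local cache part files
                                 # once they are uploaded to S3,
                                 # leaving only the part.1 stub locally
}
```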

I also configured a lifecycle rule in S3 so that objects would transition to
Glacier after a period of time. This was our mistake.

When Bacula tries to back up to an existing volume, it somehow fetches
information about that volume from S3. From what I tested, it does this even
when a new volume is created.

What information does Bacula look for on the cloud volume when it starts a
backup?

I assumed that even with the existing volumes in Glacier, running a new backup
to a new volume would work, but that does not seem to be the case.

-- 
Elias Pereira
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
