Thanks for that advice.

I currently have an admin job script (#1 below), following some earlier advice 
on this list from Andrea Venturoli 
(https://sourceforge.net/p/bacula/mailman/message/37680362/). It runs daily, 
when all other jobs are expected to have completed.

#1
#!/bin/bash
# Clean up the cached cloud parts for each pool/level combination
for pool in docs archive; do
  for level in full diff incr; do
    echo "Pruning $pool-$level"
    echo "cloud prune AllFromPool Pool=$pool-$level" | bconsole
  done
done
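
For reference, the script is driven by a Director Admin job along these lines 
(the job name, schedule and script path here are placeholders rather than my 
exact config):

Job {
  Name = "CloudCacheClean"
  Type = Admin
  JobDefs = "DefaultJob"          # supplies the required Client/FileSet/Pool/Storage, which an Admin job ignores
  Schedule = "DailyAfterBackups"
  RunScript {
    RunsWhen = Before
    RunsOnClient = no
    Command = "/etc/bacula/scripts/cloud-prune.sh"   # script #1 above
  }
}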

This cleans up the cache, but from what you say it won’t clean up the cloud, 
and that does appear to be the case.

Should I add another line in there to ‘cloud truncate…’ as well?
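
I’m thinking of something along these lines, assuming ‘cloud truncate’ accepts 
the same AllFromPool/Pool selection keywords as ‘cloud prune’ (I haven’t 
verified the exact syntax):

for pool in docs archive; do
  for level in full diff incr; do
    echo "Pruning and truncating $pool-$level"
    echo "cloud prune AllFromPool Pool=$pool-$level" | bconsole
    echo "cloud truncate AllFromPool Pool=$pool-$level" | bconsole   # selection keywords assumed to mirror cloud prune
  done
done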

On a side note, I used script #2 (below) to find volumes on the cloud that are 
not recorded in the database (i.e. orphaned). It turned up about 1GB of ‘lost’ 
volumes. Not excessive, but annoying. I’m leaving those on the cloud for now, 
though I guess they could be deleted, as they are not recorded in the catalog 
and are therefore redundant.

#2
#!/bin/bash
# Find orphaned B2 volumes (dry run, nothing is deleted yet)
for vol in $(rclone lsd backblaze:bacula01 | awk '{print $5}'); do    # volume directory name is the 5th field of 'rclone lsd'
  size=$(rclone size "backblaze:bacula01/$vol" --json | jq '.bytes')  # use the json output and parse out the size in bytes with jq
  if echo "list volume=$vol" | bconsole | grep --quiet "No results to list"; then
    echo "Orphaned volume: $vol ($size bytes)"
    rclone purge "backblaze:bacula01/$vol" --dry-run    # just check, don't delete the volume dir for now
  fi
done
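
(Once I’m happy the dry-run output only lists genuinely orphaned directories, 
dropping the --dry-run flag would make rclone actually purge them.)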

I had some jobs that had errored and left volumes/parts in the cache that 
script #1 wouldn’t clean, and I got some errors that the cache was inconsistent 
with the cloud. I manually found the inconsistent ones and uploaded them with 
‘cloud upload’, which then also cleaned the cache.
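
The command was along these lines (the volume name here is just an example):

echo "cloud upload storage=cloud-sd volume=docs-full-0017" | bconsole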

Best
-Chris-




> On 15 Sep 2022, at 07:45, Ana Emília M. Arruda <emiliaarr...@gmail.com> wrote:
> 
> Hello Chris,
> 
> When dealing with cloud storages, your volumes will be in a local cache and 
> in the remote cloud storage.
> 
> To clean the local cache, you must use the "cloud prune" and the "cloud 
> truncate" commands. Having "Truncate Cache=AfterUpload" in the cloud resource 
> will guarantee that the part file is deleted from the local cache after and 
> only after it is correctly uploaded to the remote cloud. Because it may 
> happen that a part file cannot be uploaded due to, for example, connection 
> issues, you should create an admin job to frequently run both the "cloud 
> prune" and the "cloud truncate" commands.
> 
> Then, to guarantee the volumes in the remote cloud are cleaned, you need both 
> the "prune volumes" and the "truncate volumes" commands (the last one will 
> delete the data in the volume and reduce the volume file to its label only).
> 
> Please note the prune command will respect the retention periods you have 
> defined for the volumes, but the purge command doesn't. Thus, I wouldn't use 
> the purge command to avoid data loss.
> 
> Best regards,
> Ana
> 
> On Wed, Sep 14, 2022 at 6:22 PM Chris Wilkinson <winstonia...@gmail.com> wrote:
> I'm backing up to cloud storage (B2). This is working fine, but I'm not clear 
> on whether volumes on B2 storage are truncated (i.e. storage recovered) when 
> a volume is purged by the normal pool expiry settings. I've set "run after" 
> jobs in the daily catalog backup to truncate volumes on purge for each of my 
> pools, e.g.:
> 
> ...
> Runscript {
>   When = "After"
>   RunsOnClient = no
>   Console = "purge volume action=truncate pool=docs-full storage=cloud-sd"
>  }
> ...
> 
> The local cache is being cleared but I think this is because I set the option 
> "Truncate Cache=AfterUpload" in the cloud resource to empty the local cache 
> after each part is uploaded.
> 
> I'd like of course that storage (and cost) doesn't keep growing out of 
> control and wonder if there is a config option(s) to ensure this doesn't 
> happen.
> 
> Any help or advice on this would be much appreciated. 
> 
> Thanks
> Chris Wilkinson 

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users