Hi,
the volume content is user controlled, so automatically splitting it into
multiple jobs might not solve the problem (and would result in far more
jobs). Doing it manually is not feasible due to limited manpower.
I think the best solution is extending the AI scheme similar to the data
Hi,
I just had a closer look at a plugin implementation (libcloud for
processing S3 buckets) and some of the plugin interface and core code.
The idea was to simply skip all directories for which the rctime-based check
indicates that the directory has not changed since the last backup. So
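The rctime-based skip could look roughly like this (a minimal Python sketch,
not the actual plugin code; `ceph.dir.rctime` is CephFS's recursive-ctime
virtual xattr, and the helper names and the assumed "seconds.nanoseconds"
string format are my own assumptions):

```python
import os

def read_rctime(path: str) -> float:
    # On CephFS, the virtual xattr ceph.dir.rctime reports the most recent
    # ctime anywhere below the directory. Format assumed here to be
    # "seconds.nanoseconds"; adjust parsing if it differs.
    raw = os.getxattr(path, "ceph.dir.rctime").decode()
    secs, _, nsecs = raw.partition(".")
    return int(secs) + int(nsecs or 0) / 1e9

def should_skip(rctime: float, last_backup: float) -> bool:
    # Skip the whole subtree when nothing below it changed since the
    # previous successful backup run.
    return rctime <= last_backup
```

A walker would then call `should_skip(read_rctime(d), last_backup)` before
descending into each directory `d`, pruning unchanged subtrees in one check.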