Hello,

I am attempting to implement a Bacula cloud backup solution using
Backblaze's B2 S3-compatible storage. I want to have different object lock
periods defined for Full, Differential, and Incremental backups. My primary
goal is to defend against a hypothetical ransomware threat that could
attempt to use a compromised Bacula system to delete my cloud backups.
Backblaze B2 will not delete an object-locked file before its retention
expires unless the account itself is closed, a level of access that
attackers should never have.

I've been trying and thinking about this a few different ways, and I'm
hitting a wall.

Idea 1: Use a different bucket for each backup level (Full, Diff, Inc),
with each bucket configured to automatically apply a different default
object lock duration.

In bacula-sd.conf I configured three cloud resources, each pointing at a
different bucket, along with devices and an autochanger for each cloud
resource.
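One of the three Cloud/Device pairs looked roughly like this (endpoint,
names, and keys are placeholders, and I'm reconstructing from memory, so
take it as a sketch rather than a verbatim config):

  # bacula-sd.conf -- one of three Cloud/Device pairs, one per backup level
  Cloud {
    Name = B2-Full-Cloud
    Driver = "S3"
    HostName = "s3.us-west-004.backblazeb2.com"  # your B2 endpoint
    BucketName = "craeon-tgu-full"   # bucket whose default lock fits Fulls
    AccessKey = "REDACTED"
    SecretKey = "REDACTED"
    Protocol = HTTPS
    UriStyle = VirtualHost
  }

  Device {
    Name = B2-Full-Dev1
    Device Type = Cloud
    Cloud = B2-Full-Cloud
    Media Type = CloudType-Full
    Archive Device = /opt/bacula/cloudcache  # local part-file cache
    Maximum Part Size = 2000000
    Label Media = yes
    Random Access = yes
    Automatic Mount = yes
    Removable Media = no
    Always Open = no
  }
  # (Autochanger resource wrapping the device omitted for brevity)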
In bacula-dir.conf I configured a Storage resource for each cloud resource,
and a Pool for each Storage. I configured one JobDefs resource, Craeon-TGU,
with Full, Differential, and Incremental pools defined, plus multiple Jobs
(Craeon-TGU-Full, Craeon-TGU-Diff, etc.), each configured to use its own
pool and storage.
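The director side was shaped roughly like this (trimmed, again from memory
with placeholder names):

  # bacula-dir.conf -- one Storage + Pool per level, plus per-level Jobs
  Storage {
    Name = B2-Full
    Address = sd.craeon.net          # placeholder SD address
    SD Port = 9103
    Password = "REDACTED"
    Device = B2-Full-Autochanger
    Media Type = CloudType-Full
    Autochanger = yes
  }

  Pool {
    Name = Craeon-TGU-Full-Pool
    Pool Type = Backup
    Storage = B2-Full
  }

  JobDefs {
    Name = Craeon-TGU
    Type = Backup
    Full Backup Pool = Craeon-TGU-Full-Pool
    Differential Backup Pool = Craeon-TGU-Diff-Pool
    Incremental Backup Pool = Craeon-TGU-Inc-Pool
    # Client, FileSet, Schedule, Messages, etc. omitted
  }

  Job {
    Name = Craeon-TGU-Full
    JobDefs = Craeon-TGU
    Level = Full
    Pool = Craeon-TGU-Full-Pool
    Storage = B2-Full
  }
  # ...and similar Craeon-TGU-Diff / Craeon-TGU-Inc Jobs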
When I ran the job Craeon-TGU-Full, the volumes were deposited in the
Craeon-TGU-Full bucket, as intended. The problem I encountered is that, as
far as Bacula is concerned, no suitable full backup has ever been run for
the job Craeon-TGU-Diff, since each Job tracks its own Full/Diff/Inc chain.
So Bacula forced a new Full, which was deposited into the Craeon-TGU-Diff
bucket. I want one backup chain, not three separate chains that each carry
their own full backup.

Idea 2: I haven't fully fleshed this out, and I don't entirely know how to
implement it. Basically, I could modify the above to use one bucket/cloud
resource (a more traditional setup), and set the object lock period on each
.part file per backup, using a RunAfterJob script and the S3-compatible
API. My concerns with this idea:

- whether different object lock periods can be set on different objects
  within one bucket;
- whether immutability can be set on files that already exist in the
  bucket;
- whether the volume .part uploads will have completed before the
  RunAfterJob script executes.

I believe I should be able to set an object lock period for a given file
post-upload using either the B2 native API or the B2 S3-compatible API.
What I don't know is how to determine which volume files were part of the
recently completed job. I can see them in the job log, but how do I pass
them to the RunAfterJob script? Overall, this might not be a great method
to use, but the rough shape of what I'm imagining is below.
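A minimal sketch, resting on several assumptions I haven't verified: that
B2's S3-compatible endpoint accepts PutObjectRetention on objects that are
already uploaded (AWS S3 does, per object, which would answer my first two
concerns); that Bacula's %V substitution expands to the pipe-separated
write volume name(s) of the finished job (otherwise the volumes would have
to be pulled from the catalog via %i/JobId); and that cloud volumes appear
in the bucket as <volume>/part.N objects:

  #!/usr/bin/env python3
  # Sketch of a RunAfterJob helper: apply S3 object-lock retention to the
  # .part objects of the volumes a job just wrote. Invoked from the Job as:
  #   RunScript {
  #     RunsWhen = After
  #     RunsOnClient = no
  #     Command = "/usr/local/bin/b2_lock_parts.py %l '%V'"
  #   }
  # All names below (bucket, endpoint, lock periods) are placeholders.
  import sys
  from datetime import datetime, timedelta, timezone

  import boto3

  BUCKET = "craeon-tgu"
  ENDPOINT = "https://s3.us-west-004.backblazeb2.com"

  # Lock period per backup level, in days.
  LOCK_DAYS = {"Full": 365, "Differential": 90, "Incremental": 30}

  def main():
      level, volumes = sys.argv[1], sys.argv[2]
      until = datetime.now(timezone.utc) + timedelta(
          days=LOCK_DAYS.get(level, 30))
      # Credentials come from the usual AWS env vars / credentials file.
      s3 = boto3.client("s3", endpoint_url=ENDPOINT)

      for vol in volumes.split("|"):  # %V is pipe-separated when multiple
          # Part files should live under the volume-name prefix.
          pages = s3.get_paginator("list_objects_v2").paginate(
              Bucket=BUCKET, Prefix=vol + "/")
          for page in pages:
              for obj in page.get("Contents", []):
                  s3.put_object_retention(
                      Bucket=BUCKET,
                      Key=obj["Key"],
                      Retention={"Mode": "COMPLIANCE",
                                 "RetainUntilDate": until},
                  )

  if __name__ == "__main__":
      main()

Even if all of that holds, the timing concern remains: the cloud upload of
the last parts can lag the job, so the script might run before every part
exists in the bucket.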

Idea 3: Maybe I could write my initial backups locally, using non-cloud
file devices or the S3 driver provided by Bacula, then schedule copy jobs
to copy full backups to the full bucket, diffs to the diff bucket, etc. I'm
guessing that this could be the most effective way to solve the problem. I
would need a way to select jobs for copying based on their backup level,
since merely copying any job that hasn't been copied yet wouldn't
necessarily put the copies into the correct bucket; a rough sketch of how
that selection might work follows.
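What I have in mind is one Copy job per level using Selection Type =
SQLQuery. The SQL below is unverified guesswork against the catalog schema
(and is shown wrapped for readability; it would need to be one line in the
real config), and the job still needs its usual boilerplate (Client,
FileSet, Schedule, Messages):

  # bacula-dir.conf -- copy not-yet-copied Fulls from the local pool to B2
  Job {
    Name = Copy-Fulls-To-B2
    Type = Copy
    Pool = Local-Pool                 # source pool to read from
    Next Pool = Craeon-TGU-Full-Pool  # destination pool decides the bucket
    Selection Type = SQLQuery
    Selection Pattern = "SELECT Job.JobId FROM Job, Pool
        WHERE Job.PoolId = Pool.PoolId
          AND Pool.Name = 'Local-Pool'
          AND Job.Type = 'B' AND Job.Level = 'F' AND Job.JobStatus = 'T'
          AND Job.JobId NOT IN
            (SELECT PriorJobId FROM Job WHERE PriorJobId <> 0)
        ORDER BY Job.StartTime"
  }
  # ...plus Copy-Diffs-To-B2 / Copy-Incs-To-B2, differing only in
  # Job.Level ('D'/'I') and the Next Pool.

The built-in Selection Type = PoolUncopiedJobs would be simpler, but as far
as I can tell it can't discriminate by level, which is the whole point
here.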


I welcome any ideas and advice you might have.


Regards,
Robert Gerber
402-237-8692
r...@craeon.net