Hello Ruth,

Thanks for the suggestion. This helped a lot, and now I have plenty of ideas
for how to solve my problem :D
I want to purge the data for storage efficiency: I have to work with one 4TB
drive on site, plus a couple more on a remote server. I would like to store
as much data on site as possible to make the recovery process easier and
faster, so I don't have much space to work with.

Thanks for everything!
Best regards,
Máté Zsólya

Ruth Ivimey-Cook <[email protected]> wrote (on Mon, 18 Oct 2021 at 19:25):

> I cannot provide a potted solution, but I can point towards the thing that
> helped me... use explicit week numbers in your schedules. For example I
> have a backup schedule:
>
> Schedule {
>   Name = "YearlyCycle"
>   run = fullpool="Full" Full Sun w01,w13,w26,w39 at 18:05
>   run = fullpool="Full" differentialpool="Full" Differential Sun w03,w06,w09,w12 at 18:05
>   run = fullpool="Full" differentialpool="Full" Differential Sun w15,w18,w21,w24 at 18:05
>   run = fullpool="Full" differentialpool="Full" Differential Sun w27,w30,w33,w36 at 18:05
>   run = fullpool="Full" differentialpool="Full" Differential Sun w42,w45,w48,w51 at 18:05
>   run = fullpool="Full" incrementalpool="Incr" differentialpool="Incr" Incremental Tue,Thu,Sat at 18:05
> }
>
> To help me work out the numbers I created a one-line 'php' script, which
> you can run via 'php -a':
>
> # Full 3 monthly, Diff every 4 wks, Incr otherwise.
> foreach (range(0, 51) as $k) { printf("%sw%02d\n", (($k % 13 === 0) ? "  F: " : (($k % 4 === 0) ? "     D: " : "        I: ")), $k + 1); }
>
> # Full w01 w14 w27 w40
> # Diff w05 w09 w13 w17 w21 w25 w29 w33 w37 w41 w45 w49
> # Incr w02 w03 w04 w06 w07 w08 w10 w11 w12 w15 w16 w18 w19 w20 w22 w23 w24 w26 w28 w30 w31 w32 w34 w35 w36 w38 w39 w42 w43 w44 w46 w47 w48 w50 w51 w52
>
> [[Note that in addition to the schedule aspect of this, my schedule also
> defines which pools the various levels go to, because for me different
> levels get written to different tapes (so that shorter-lived jobs are
> grouped on one tape, and the tape can be expired sooner than if it also had
> longer-lived jobs on it). Note that if you do this, you must specify pools
> for all levels equal to or higher than the one you want, or sometimes the
> wrong thing happens!]]
>
> As for translating this into your scheme: jobs have a lifetime, so they
> can auto-expire after a specific time anyway. The bigger problem is that
> you presumably want to reuse the tape the job was stored on soon after it
> expires, which means that not only does that job have to expire, but all
> other jobs too. Unless of course you use the CopyJob action to move (copy +
> forget) a job to another tape. So I expect your solution will be a
> combination of using tapes as I have (keep expiry times similar), plus copy
> jobs to consolidate older jobs that need to be saved on a different tape.
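>
> As a rough illustration (this is not my exact config; the pool names and
> retention periods here are made up), the pool side of that idea looks
> something like:
>
> Pool {
>   Name = "Incr"
>   Pool Type = Backup
>   Volume Retention = 3 weeks   # short-lived jobs; tape recycles soon
>   AutoPrune = yes
>   Recycle = yes
> }
> Pool {
>   Name = "Full"
>   Pool Type = Backup
>   Volume Retention = 1 year    # long-lived jobs kept on their own tapes
>   AutoPrune = yes
>   Recycle = yes
> }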
>
> From experience, I strongly suggest you do not rely on all jobs being
> successful! Stuff happens, and a backup regime that assumes otherwise is a
> failure :-)
>
> As for the 'erase' part of your request, I'm not sure what the purpose is.
> Is there some legal reason to delete tapes at their time of expiry? If not,
> bareos will auto-prune expired tapes itself and as required. If you need to
> actually delete the contents of the tape, you could do it with a
> pre-command on an otherwise trivial job (e.g. a backup of an empty
> directory) that used 'dd' or similar to overwrite the tape, though that is
> going to wear out your tapes faster.  The various 'delete' commands in
> bconsole only manipulate the database.
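>
> As an untested sketch (the device path /dev/nst0, the job name, and the
> use of 'mt erase' instead of 'dd' are all assumptions to check against
> your setup):
>
> Job {
>   Name = "EraseTape"
>   # ... the usual Type/Client/FileSet/Storage/Pool/Messages directives,
>   # pointed at a trivial FileSet such as an empty directory ...
>   Run Before Job = "/usr/bin/mt -f /dev/nst0 erase"
> }
>
> A full erase pass over a modern tape can take hours, so schedule it
> accordingly.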
>
> Hope this helps,
>
> Ruth
>
>
> On 18/10/2021 13:18, Máté Zsólya wrote:
>
> Hello,
>
> The basic schedule for the week should be: Sunday: Full,
> Monday-Saturday: Differential.
> I would like to create a schedule that works like this:
>
>    1. For the past 2 weeks, I would store all of the backups.
>    2. After that, for the past 2 months, I would store all Full backups.
>    3. And for the past 2 years, I would keep 1 Full backup per month.
>
>
> Sorry if it's an easy question, but I didn't find any solution for this
> in the documentation.
> It would be best if the data could be erased automatically after it
> "expires", and not just deleted from the catalog.
>
> It would be more storage-efficient if some of the backups from points 2
> and 3 could be converted to Differential. (I don't think that's possible;
> I'm just asking in case anyone knows a solution.)
>
> Could this be a viable solution?
> Thanks for any help you can provide!
> Mate
> --
> You received this message because you are subscribed to the Google Groups
> "bareos-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/bareos-users/CAGV-Qj%3DFaCqFDp0NghWPrdw1As1UimK1SKrvs9_qeAFYypn7Yg%40mail.gmail.com
> .
>
> --
> Tel: 01223 414180
> Blog: http://www.ivimey.org/blog
> LinkedIn: http://uk.linkedin.com/in/ruthivimeycook/
>
>

