Can I set the Volume Retention period for the Consolidated pool to something 
reasonable? :) 14 days, for example? :)
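I.e., something like this (just a sketch, with the pool name from the examples 
below; whether 14 days is enough presumably depends on how often consolidation 
runs):

```
Pool {
  Name = AI-Consolidated
  Pool Type = Backup
  # Hypothetical value from the question above; a volume becomes
  # recyclable 14 days after it was last written.
  Volume Retention = 14 days
}
```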

Monday, January 11, 2021 at 19:51:40 UTC+2, [email protected]: 

> You leave Storage in your Pool config as the Storage name; it's telling 
> the Pool "all the media for this pool can be accessed on Storage File", 
> and the Storage {} resource says you can access any of that pool's 
> volumes on these three Devices: FileStorage, File1, File2. 
>
> Again, think of the Device as a virtual tape drive: if you had a tape 
> library with 2 drives, you would have 2 devices, and each drive could 
> read tapes from that pool.
>
> Pool {
>   Name = AI-Incremental
>   Pool Type = Backup
>   Recycle = yes                    # Bareos can automatically recycle Volumes
>   Auto Prune = yes                 # Prune expired volumes
>   Volume Retention = 2 months      # How long should jobs be kept?
>   Maximum Volume Bytes = 50G       # Limit Volume size to something reasonable
>   Label Format = "AI-Incremental-"
>   Volume Use Duration = 7d
>   Storage = File
>   Next Pool = AI-Consolidated      # consolidated jobs go to this pool
>   Action On Purge = Truncate
> }
>
> Get this working first and then ping me back about my migrations. I do 
> them for two reasons: I have limited disk space and cannot hold all 
> backups on it, and I also take a long-term archive job that I take off site.
>
>
> Brock Palen
> [email protected]
> www.mlds-networks.com
> Websites, Linux, Hosting, Joomla, Consulting
>
>
>
> > On Jan 11, 2021, at 12:15 PM, [email protected] <[email protected]> 
> wrote:
> > 
> > In Pool definitions, should I use my SD name as "Storage"?
> > 
> > My director storage config:
> > 
> > Storage {
> >   Name = File
> >   Address = bar
> >   Password = "BB"
> >   Media Type = File
> >   Device = FileStorage
> >   Device = File1
> >   Device = File2
> >   Maximum Concurrent Jobs = 3
> > }
> > 
> > But when I try to set the "Storage" option in the Pool definition to 
> "File1", I get an error:
> > 
> > Config error: Could not find config Resource "Storage" referenced on 
> line 10 : Storage = File1
> > 
> > So, in the pools "AI-Incremental", "AI-Consolidated", and "AI-Longterm" 
> do I need to use "File" as "Storage"?
> > 
> > 
> > 
> > Monday, January 11, 2021 at 17:40:22 UTC+2, [email protected]: 
> > (I also have one for an LTO that jobs migrate to over time). 
> > 
> > Do you save full (Virtual Full?) backups to tape? I would also like to 
> save old full backups (stored on disk) to tape for long-term storage. 
> > Right now I take Full backups (every week) to HDD and incremental 
> backups (every day) to HDD; old Full backups are then migrated to tape. 
> > But I want to try always-incremental backups :)
> > 
> > 
> > Monday, January 11, 2021 at 15:51:43 UTC+2, 
> [email protected]: 
> > Yeah, you can do it with one SD, but you need multiple devices, one for 
> each pool, so that when the consolidation happens it can read from one 
> device and write to the other. 
> > 
> > In my case I have one storage with multiple devices that can read both 
> the AI-Incremental and AI-Consolidated pools. (I also have one for an LTO 
> that jobs migrate to over time.) 
> > 
> > E.g., in my sd.conf: 
> > Device { 
> >   Name = FileStorage 
> >   Media Type = File 
> >   Archive Device = /mnt/bacula 
> >   LabelMedia = yes               # lets Bareos label unlabeled media 
> >   Random Access = yes 
> >   AutomaticMount = yes           # when device opened, read it 
> >   RemovableMedia = no 
> >   AlwaysOpen = no 
> >   Spool Directory = /mnt/spool/FileStorage 
> >   Maximum Job Spool Size = 80000000000 
> >   Maximum Spool Size = 160000000000 
> >   Maximum Concurrent Jobs = 1 
> > } 
> > 
> > Device { 
> >   Name = FileStorage2 
> >   Media Type = File 
> >   Archive Device = /mnt/bacula 
> >   LabelMedia = yes               # lets Bareos label unlabeled media 
> >   Random Access = yes 
> >   AutomaticMount = yes           # when device opened, read it 
> >   RemovableMedia = no 
> >   AlwaysOpen = no 
> >   Spool Directory = /mnt/spool/server-vfull 
> >   Maximum Job Spool Size = 80000000000 
> >   Maximum Spool Size = 160000000000 
> >   Maximum Concurrent Jobs = 1 
> > } 
> > ….. 
> > 
> > But my Storage {} config in dir.conf has multiple devices: 
> > 
> > Storage { 
> >   Name = File 
> >   # Do not use "localhost" here 
> >   Address = <snip>               # N.B. Use a fully qualified name here 
> >   Password = "<snip>" 
> >   Device = FileStorage 
> >   Device = FileStorage2 
> >   Device = FileStorage3 
> >   Device = FileStorage4 
> >   Device = FileStorage5 
> >   # number of devices = Maximum Concurrent Jobs 
> >   Maximum Concurrent Jobs = 5 
> >   Media Type = File 
> >   Heartbeat Interval = 60 
> > } 
> > 
> > 
> > So my on-disk volumes can be managed by five "devices"; think of them as 
> virtual tape drives. One can read from a volume while another writes to a 
> different one. 
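> > You can watch this from bconsole: `status storage=File` (a standard 
> bconsole command, using the Storage name from the config above) asks the 
> SD to report each configured device and the volume, if any, it currently 
> has mounted: 
> > 
> > ```
> > *status storage=File
> > ```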
> > 
> > Brock Palen 
> > [email protected] 
> > www.mlds-networks.com 
> > Websites, Linux, Hosting, Joomla, Consulting 
> > 
> > 
> > 
> > > On Jan 11, 2021, at 4:19 AM, [email protected] <[email protected]> 
> wrote: 
> > > 
> > > As stated in the docs, "For the Always Incremental Backup Scheme at 
> least two storages are needed." 
> > > Does this mean 2+ storage daemons? 
> > > 
> > > May I use this scheme with 1 SD? 
> > > 
> > > 
> > > -- 
> > > You received this message because you are subscribed to the Google 
> Groups "bareos-users" group. 
> > > To unsubscribe from this group and stop receiving emails from it, send 
> an email to [email protected]. 
> > > To view this discussion on the web visit 
> https://groups.google.com/d/msgid/bareos-users/0555aac1-aa00-4842-abcd-cd2537c7f200n%40googlegroups.com.
>  
>
> > 
> > 
>
>

