In the Pool definitions, should I use my SD name as "Storage"?
My Director Storage config:
Storage {
  Name = File
  Address = bar
  Password = "BB"
  Media Type = File
  Device = FileStorage
  Device = File1
  Device = File2
  Maximum Concurrent Jobs = 3
}
But when I try to set the "Storage" option in the Pool definition to
"File1", I get an error:
Config error: Could not find config Resource "Storage" referenced on line 10 : Storage = File1
So, in the pools "AI-Incremental", "AI-Consolidated" and "AI-Longterm", do I
need to use "File" as "Storage"?
On Monday, January 11, 2021 at 17:40:22 UTC+2, [email protected] wrote:
>
> (I also have one for an LTO that jobs migrate to over time).
> Do you save full (Virtual Full?) backups to tape? I would also like to
> save old full backups (stored on disk) to tape for long term storage.
> Now I use Full backups (every week) to HDD and incremental (every day)
> backups to HDD. Old Full backups are then migrated to tape.
> But I want to try always incremental backups :)
>
>
> On Monday, January 11, 2021 at 15:51:43 UTC+2, [email protected] wrote:
>
>
>> Yeah, you can do it with 1 SD, but you need multiple devices, one for each
>> pool, so when the consolidation happens it can read from one device and
>> write to the other.
>>
>> In my case I have one storage with multiple devices that can read both
>> the AI-Incremental and AI-Consolidated pool. (I also have one for an LTO
>> that jobs migrate to over time).
>>
>> Eg in my sd.conf
>> Device {
>> Name = FileStorage
>> Media Type = File
>> Archive Device = /mnt/bacula
>> LabelMedia = yes; # lets Bareos label unlabeled media
>> Random Access = yes;
>> AutomaticMount = yes; # when device opened, read it
>> RemovableMedia = no;
>> AlwaysOpen = no;
>> Spool Directory = /mnt/spool/FileStorage
>> Maximum Job Spool Size = 80000000000
>> Maximum Spool Size = 160000000000
>> Maximum Concurrent Jobs = 1
>> }
>>
>> Device {
>> Name = FileStorage2
>> Media Type = File
>> Archive Device = /mnt/bacula
>> LabelMedia = yes; # lets Bareos label unlabeled media
>> Random Access = yes;
>> AutomaticMount = yes; # when device opened, read it
>> RemovableMedia = no;
>> AlwaysOpen = no;
>> Spool Directory = /mnt/spool/server-vfull
>> Maximum Job Spool Size = 80000000000
>> Maximum Spool Size = 160000000000
>> Maximum Concurrent Jobs = 1
>> }
>> …..
>>
>> but my Storage {} config in dir.conf has multiple devices
>>
>> Storage {
>> Name = File
>> # Do not use "localhost" here
>> Address = <snip> # N.B. Use a fully qualified name here
>> Password = "<snip>"
>> Device = FileStorage
>> Device = FileStorage2
>> Device = FileStorage3
>> Device = FileStorage4
>> Device = FileStorage5
>> # number of devices = Maximum Concurrent Jobs
>> Maximum Concurrent Jobs = 5
>> Media Type = File
>> Heartbeat Interval = 60
>> }
>>
>>
>> So my on-disk volumes can be managed by 5 'devices'; think of them as
>> virtual tape drives. One can read from a volume while another writes to a
>> different one.
>>
>> Brock Palen
>> [email protected]
>> www.mlds-networks.com
>> Websites, Linux, Hosting, Joomla, Consulting
>>
>>
>>
>> > On Jan 11, 2021, at 4:19 AM, [email protected] <[email protected]>
>> wrote:
>> >
>> > As stated in the docs, "For the Always Incremental Backup Scheme at
>> least two storages are needed."
>> > Does this mean 2+ storage-daemons?
>> >
>> > May I use this scheme with 1 SD?
>> >
>> >
>>
>>
>>
>>