After the reboot I created a volume manually using the label command, and the job 
seems to be working.
Does this mean the issue is resolved and further volumes will be created 
automatically as needed?

bareos-dir JobId 664: Max Volume jobs=1 exceeded. Marking Volume "NewVol1" 
as Used.
bareos-sd JobId 664: Wrote label to prelabeled Volume "NewVol1" on device 
"FileStorage" (/var/lib/bareos/storage)

On Thursday, November 9, 2023 at 2:13:30 PM UTC+2 Yariv Hazan wrote:

> This is what I get; it does not seem relevant.
> I will reboot the system and check, thanks.
>
> There are configuration warnings:
>  * Device FileStorage: unlimited (0) 'Maximum Concurrent Jobs' (the 
> default) reduces the restore peformance.
>  * Device FileStorage: the default value for 'Maximum Concurrent Jobs' 
> will change from 0 (unlimited) to 1 in Bareos 23.
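>
> Noting it here in case it matters later: I suppose the warning could be 
> silenced by setting the directive explicitly in the FileStorage device 
> resource (the file path here is only assumed from a default package install):
>
>   # /etc/bareos/bareos-sd.d/device/FileStorage.conf
>   Device {
>     Name = FileStorage
>     Media Type = File
>     Archive Device = /var/lib/bareos/storage
>     LabelMedia = yes
>     Random Access = yes
>     Automatic Mount = yes
>     Removable Media = no
>     Always Open = no
>     Maximum Concurrent Jobs = 1   # make the future Bareos 23 default explicit
>   }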
>
>
> On Thursday, November 9, 2023 at 1:13:55 PM UTC+2 Miguel Santos wrote:
>
>> Something is inconsistent between memory, the filesystem and the database.
>>
>> Run bareos-sd -t to see any potential issues with the configuration. 
>> Hopefully it will tell you what's wrong.
>>
>> Restart all daemons after you have fixed the issue.
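>>
>> Something along these lines (the systemd unit names are a guess and may 
>> differ by distribution):
>>
>>   # check both daemons' configuration for errors
>>   bareos-sd -t
>>   bareos-dir -t
>>
>>   # then restart everything (adjust unit names to your distribution,
>>   # e.g. bareos-director / bareos-storage / bareos-filedaemon)
>>   systemctl restart bareos-dir bareos-sd bareos-fd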
>>
>> On Thursday, November 9, 2023 at 11:43:59 AM UTC+1 Yariv Hazan wrote:
>>
>>> Please see below some of the output I get when running "status storage"; in 
>>> any case I just get the "*" prompt, so there is nothing to choose from:
>>>
>>>
>>> There are WARNINGS for this storagedaemon's configuration!
>>> See output of 'bareos-sd -t' for details.
>>>
>>> Running Jobs:
>>> Writing: Full Backup job wpcoreopswat01_job_D JobId=663 Volume=""
>>>     pool="DailyFullCyclePool" device="FileStorage" 
>>> (/var/lib/bareos/storage)
>>>     spooling=0 despooling=0 despool_wait=0
>>>     Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
>>>     FDReadSeqNo=6 in_msg=6 out_msg=4 fd=6
>>>
>>>
>>> Device status:
>>>
>>> Device "FileStorage" (/var/lib/bareos/storage) is not open.
>>>     Device is BLOCKED waiting to create a volume for:
>>>
>>>        Pool:        DailyFullCyclePool
>>>        Media type:  File
>>>
>>>
>>> On Thursday, November 9, 2023 at 12:25:05 PM UTC+2 Miguel Santos wrote:
>>>
>>>> Not related, but what most likely happened is that it cannot find the 
>>>> volume you deleted while Bareos still had plans to write into it. Even 
>>>> though the volume was still unused, the record of that media was in the 
>>>> DB. (I am speculating here based on what you describe.)
>>>>
>>>> The easiest option is to create a volume with the same name.
>>>>
>>>> Do a "status storage" in bconsole, then select the storage name and it 
>>>> most likely will tell you what the storage daemon is expecting to write 
>>>> to. 
>>>> Create a volume in that pool with the same name.
>>>>
>>>> If you deleted volumes, please make sure you also delete volumes from 
>>>> your DB by issuing:
>>>>
>>>> "delete volume" in bconsole.
>>>>
>>>> On Thursday, November 9, 2023 at 10:54:43 AM UTC+1 Yariv Hazan wrote:
>>>>
>>>>> I'm not sure it's related, but using bconsole I deleted an empty, unused 
>>>>> volume, restarted the director, and now I get this error when running jobs:
>>>>>
>>>>> bareos-sd JobId 663: Job wpcoreopswat01_job_D.2023-11-09_11.42.35_06 
>>>>> is waiting. Cannot find any appendable volumes.
>>>>> Please use the "label" command to create a new Volume for:
>>>>> Storage: "FileStorage" (/var/lib/bareos/storage)
>>>>> Pool: DailyFullCyclePool
>>>>> Media type: File
>>>>>
>>>>> Running the label command in bconsole and creating a new volume in 
>>>>> DailyFullCyclePool does not seem to solve the problem.
>>>>>
>>>>> On Thursday, November 9, 2023 at 11:51:37 AM UTC+2 Yariv Hazan wrote:
>>>>>
>>>>>> Great, so it's fixed; I'll let it run and see how it goes.
>>>>>>
>>>>>>
>>>>>> On Thursday, November 9, 2023 at 10:50:42 AM UTC+2 Miguel Santos 
>>>>>> wrote:
>>>>>>
>>>>>>> Sorry about that, yes, the names of the pools have been mixed up since 
>>>>>>> the first message I sent.
>>>>>>>
>>>>>>> But the fix should be self-explanatory.
>>>>>>>
>>>>>>>   Run = Level=Full Pool=DailyFullCyclePool 
>>>>>>> FullPool=DailyFullCyclePool  mon-sat at 22:00
>>>>>>>
>>>>>>>   Run = Level=Full Pool=WeeklyFullCyclePool 
>>>>>>> FullPool=WeeklyFullCyclePool  2nd-5th sun at 22:00
>>>>>>>
>>>>>>>   Run = Level=Full Pool=MonthlyFullCyclePool 
>>>>>>> FullPool=MonthlyFullCyclePool  1st sun at 22:00
>>>>>>>
>>>>>>> On Thursday, November 9, 2023 at 9:04:48 AM UTC+1 Yariv Hazan wrote:
>>>>>>>
>>>>>>>> This looks OK, but maybe the naming is the issue.
>>>>>>>> Which pool should hold which backups?
>>>>>>>> DailyFullCyclePool should hold mon-sat at 22:00
>>>>>>>> MonthlyFullCyclePool should hold 1st sun at 22:00
>>>>>>>> Right?
>>>>>>>>
>>>>>>>> On Tuesday, November 7, 2023 at 4:15:16 PM UTC+2 Miguel Santos 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> I think I have seen this before and it may be a bug, but I have not 
>>>>>>>>> had the initiative to look into it further.
>>>>>>>>>
>>>>>>>>> What I believe is happening is that the Pool is being taken from the 
>>>>>>>>> defaults instead of from the schedule.
>>>>>>>>>
>>>>>>>>> Can you try to change the schedule so it ends up like this?
>>>>>>>>>
>>>>>>>>> Schedule {
>>>>>>>>>
>>>>>>>>>   Name = CustomCycle
>>>>>>>>>
>>>>>>>>>   Run = Level=Full Pool=DailyFullCyclePool 
>>>>>>>>> *FullPool=DailyFullCyclePool*  1st sun at 22:00
>>>>>>>>>
>>>>>>>>>   Run = Level=Full Pool=WeeklyFullCyclePool 
>>>>>>>>> *FullPool=WeeklyFullCyclePool*  2nd-5th sun at 22:00
>>>>>>>>>
>>>>>>>>>   Run = Level=Full Pool=MonthlyFullCyclePool 
>>>>>>>>> *FullPool=MonthlyFullCyclePool*  mon-sat at 22:00
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> Just adding FullPool to the schedule.
>>>>>>>>>
>>>>>>>>> Good luck.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tuesday, November 7, 2023 at 3:01:29 PM UTC+1 Yariv Hazan wrote:
>>>>>>>>>
>>>>>>>>>> Hi Guys,
>>>>>>>>>>
>>>>>>>>>> I followed your advice and created the files per your examples, 
>>>>>>>>>> thank you so much.
>>>>>>>>>>
>>>>>>>>>> I can see the bold lettering and also updated the volumes, thanks! 
>>>>>>>>>>
>>>>>>>>>> It seems backups are being saved to an old volume, Monthly-0006, 
>>>>>>>>>> meaning they are not written to a new Daily-XXXX volume that is 
>>>>>>>>>> supposed to be created.
>>>>>>>>>>
>>>>>>>>>> Please see my configuration:
>>>>>>>>>>
>>>>>>>>>>  
>>>>>>>>>>
>>>>>>>>>> Job {
>>>>>>>>>>
>>>>>>>>>>   Name = "lpsoar01_job_D"
>>>>>>>>>>
>>>>>>>>>>   JobDefs = "DailyJobDefs"
>>>>>>>>>>
>>>>>>>>>>   FileSet = "lpsoar01_fileset"
>>>>>>>>>>
>>>>>>>>>>   Schedule = "CustomCycle"
>>>>>>>>>>
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>>  
>>>>>>>>>>
>>>>>>>>>> Job {
>>>>>>>>>>
>>>>>>>>>>    Name = "lpsyslog01_job_D"
>>>>>>>>>>
>>>>>>>>>>    JobDefs = "DailyJobDefs"
>>>>>>>>>>
>>>>>>>>>>    FileSet = "lpsyslog01_fileset"
>>>>>>>>>>
>>>>>>>>>>    Schedule = "CustomCycle"
>>>>>>>>>>
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>>  
>>>>>>>>>>
>>>>>>>>>>  
>>>>>>>>>>
>>>>>>>>>> JobDefs {
>>>>>>>>>>
>>>>>>>>>>   Name = "DailyJobDefs"
>>>>>>>>>>
>>>>>>>>>>   Type = Backup
>>>>>>>>>>
>>>>>>>>>>   Level = Full
>>>>>>>>>>
>>>>>>>>>>   Client = bareos-fd
>>>>>>>>>>
>>>>>>>>>>   Schedule = "CustomCycle"   ß--------------I’ve changed this 
>>>>>>>>>> line
>>>>>>>>>>
>>>>>>>>>>   Storage = File
>>>>>>>>>>
>>>>>>>>>>   Messages = Standard
>>>>>>>>>>
>>>>>>>>>>   Pool = DailyFullCyclePool
>>>>>>>>>>
>>>>>>>>>>   Priority = 10
>>>>>>>>>>
>>>>>>>>>>   Write Bootstrap = "/var/lib/bareos/%c.bsr"
>>>>>>>>>>
>>>>>>>>>>   Full Backup Pool = DailyFullCyclePool    # write Full Backups 
>>>>>>>>>> into "Full-Pool" Pool
>>>>>>>>>>
>>>>>>>>>>   Differential Backup Pool = Differential  # write Diff Backups 
>>>>>>>>>> into "Differential" Pool
>>>>>>>>>>
>>>>>>>>>>   Incremental Backup Pool = Incremental    # write Incr Backups 
>>>>>>>>>> into "Incremental" Pool
>>>>>>>>>>
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>>  
>>>>>>>>>>
>>>>>>>>>> Pool {
>>>>>>>>>>
>>>>>>>>>>   Name = DailyFullCyclePool
>>>>>>>>>>
>>>>>>>>>>   Pool Type = Backup
>>>>>>>>>>
>>>>>>>>>>   Recycle = yes
>>>>>>>>>>
>>>>>>>>>>   AutoPrune = yes
>>>>>>>>>>
>>>>>>>>>>   Volume Retention = 7 days
>>>>>>>>>>
>>>>>>>>>>   Maximum Volume Jobs = 1
>>>>>>>>>>
>>>>>>>>>>   Label Format = Daily-
>>>>>>>>>>
>>>>>>>>>>   Maximum Volumes = 7
>>>>>>>>>>
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>>  
>>>>>>>>>>
>>>>>>>>>> Pool {
>>>>>>>>>>
>>>>>>>>>>   Name = WeeklyFullCyclePool
>>>>>>>>>>
>>>>>>>>>>   Pool Type = Backup
>>>>>>>>>>
>>>>>>>>>>   Recycle = yes
>>>>>>>>>>
>>>>>>>>>>   AutoPrune = yes
>>>>>>>>>>
>>>>>>>>>>   Volume Retention = 31 days
>>>>>>>>>>
>>>>>>>>>>   Maximum Volume Jobs = 100
>>>>>>>>>>
>>>>>>>>>>   Label Format = Weekly-
>>>>>>>>>>
>>>>>>>>>>   Maximum Volumes = 5
>>>>>>>>>>
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>>  
>>>>>>>>>>
>>>>>>>>>> Pool {
>>>>>>>>>>
>>>>>>>>>>   Name = MonthlyFullCyclePool
>>>>>>>>>>
>>>>>>>>>>   Pool Type = Backup
>>>>>>>>>>
>>>>>>>>>>   Recycle = yes
>>>>>>>>>>
>>>>>>>>>>   AutoPrune = yes
>>>>>>>>>>
>>>>>>>>>>   Volume Retention = 181 days
>>>>>>>>>>
>>>>>>>>>>   Maximum Volume Jobs = 100
>>>>>>>>>>
>>>>>>>>>>   Label Format = Monthly-
>>>>>>>>>>
>>>>>>>>>>   Maximum Volumes = 6
>>>>>>>>>>
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>>  
>>>>>>>>>>
>>>>>>>>>> Schedule {
>>>>>>>>>>
>>>>>>>>>>   Name = CustomCycle
>>>>>>>>>>
>>>>>>>>>>   Run = Level=Full Pool=DailyFullCyclePool 1st sun at 22:00      # <-- Changed this line according to my existing pool
>>>>>>>>>>
>>>>>>>>>>   Run = Level=Full Pool=WeeklyFullCyclePool 2nd-5th sun at 22:00  # <-- Changed this line according to my existing pool
>>>>>>>>>>
>>>>>>>>>>   Run = Level=Full Pool=MonthlyFullCyclePool mon-sat at 22:00     # <-- Changed this line according to my existing pool
>>>>>>>>>>
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>> On Sunday, October 29, 2023 at 12:25:36 PM UTC+2 Yariv Hazan 
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hello,
>>>>>>>>>>> My retention is pretty simple(?). I have only full backups and I 
>>>>>>>>>>> need to keep:
>>>>>>>>>>> the last 6 daily backups for a week,
>>>>>>>>>>> the last 4 weekly backups for a month,
>>>>>>>>>>> the last 6 monthly backups for 6 months.
>>>>>>>>>>>
>>>>>>>>>>> But:
>>>>>>>>>>> 1. All backups are kept for much longer without being pruned.
>>>>>>>>>>> 2. A new daily backup volume is created every day, but older daily 
>>>>>>>>>>> backup volumes are used instead.
>>>>>>>>>>>
>>>>>>>>>>> I run version 22.1.1~pre26.eeec2501e without any changes to 
>>>>>>>>>>> defaults.
>>>>>>>>>>>
>>>>>>>>>>> Here is an example of my configuration:
>>>>>>>>>>>
>>>>>>>>>>> Job {
>>>>>>>>>>>   Name = "lpsoar01_job_D"
>>>>>>>>>>>   JobDefs = "DailyJobDefs"
>>>>>>>>>>>   FileSet = "lpsoar01_fileset"
>>>>>>>>>>>   Schedule = "DailyFullCycle"
>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>> JobDefs {
>>>>>>>>>>>   Name = "DailyJobDefs"
>>>>>>>>>>>   Type = Backup
>>>>>>>>>>>   Level = Full
>>>>>>>>>>>   Client = bareos-fd
>>>>>>>>>>>   Schedule = "DailyFullCycle"
>>>>>>>>>>>   Storage = File
>>>>>>>>>>>   Messages = Standard
>>>>>>>>>>>   Pool = DailyFullCyclePool
>>>>>>>>>>>   Priority = 10
>>>>>>>>>>>   Write Bootstrap = "/var/lib/bareos/%c.bsr"
>>>>>>>>>>>   Full Backup Pool = DailyFullCyclePool                 # write 
>>>>>>>>>>> Full Backups into "Full-Pool" Pool
>>>>>>>>>>>   Differential Backup Pool = Differential  # write Diff Backups 
>>>>>>>>>>> into "Differential" Pool
>>>>>>>>>>>   Incremental Backup Pool = Incremental    # write Incr Backups 
>>>>>>>>>>> into "Incremental" Pool
>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>> Pool {
>>>>>>>>>>>   Name = DailyFullCyclePool
>>>>>>>>>>>   Pool Type = Backup
>>>>>>>>>>>   Recycle = yes
>>>>>>>>>>>   AutoPrune = yes
>>>>>>>>>>>   Volume Retention = 7 days
>>>>>>>>>>>   Maximum Volume Jobs = 100
>>>>>>>>>>>   Label Format = Daily-
>>>>>>>>>>>   Maximum Volumes = 40
>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>> What am I doing wrong here?
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Yariv
>>>>>>>>>>>
