[bareos-users] Re: LTO-6 inconsistent write performance (from 12M/s to 153M/s)

2023-09-13 Thread Bruno Friedmann (bruno-at-bareos)
A few remarks:

- Don't forget that tmpfs is fundamentally a (virtual) memory-based file 
system: its contents live in memory only, but since tmpfs pages are 
swappable, the kernel can push them to swap under memory pressure. So what 
precaution are you taking to ensure no swap is involved in your case?

- The CPU usage you see from the sd looks like the few seconds needed to 
send the spooled file attributes to the database.

- A minimum block size of 0 with the maximum set to 1M is good.
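
For reference, the matching device resource looks roughly like this (a 
sketch only; check your bareos-sd device configuration for the exact 
resource and placement):

```
Device {
  ...
  Maximum Block Size = 1048576   # 1M tape blocks
  Minimum Block Size = 0         # 0 lets the final block of a job be short
}
```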

You didn't explain your numbers: is everything happening at the same time, 
or sequentially?
You also didn't explain where the data comes from. Over the network? If so, 
what type, speed, and hardware?
Did you let jobs interleave on the same tape device?

You didn't specify the unit; is it right to assume M/s means MB/s?

So what is the despooling speed with only memory-resident data and only one 
job writing to the tape drive? Is it constant from job to job, across the 
whole capacity of the tape?

On Sunday, 10 September 2023 at 15:35:11 UTC+2, TDL wrote:

> Hi
> I encounter very variable write speeds - example during one backup:
> Writing spool 1: *25.38M/s* (7:03)
> Writing spool 2: 149.1M/s (1:12)
> Writing spool 3: 153.3M/s (1:10)
> Writing spool 4: *12.66M/s* (14:08)
> Writing spool 5: 120.6M/s (1:29)
> Writing spool 6: 153.3M/s (1:10)
> Writing spool 6: 44.37M/s (4:02)
>
> I have an IBM LTO-6 drive - SAS.
> I am using a tmpfs as spool disk with a spool size of 10G (so no disk 
> involved)
> The CPU is at 0% most of the time (I see the storage daemon taking 3% 
> CPU for a few seconds, then going back to sleep).
> The max block size of the media is 1048576 (1M)
> The min block size is set to 0 (Do I need to change it to 1M too? If yes, 
> can I change it without problems?) - I don't believe so: as it has to 
> write 10G, I suppose it will always fill each block to the max size.
>
> I encounter this with all the tapes I have (so likely not a defective 
> tape).
>
> Data written to tape is encrypted, so it is likely not (highly) 
> compressible by the drive; that rules out bursts of write speed from 
> writing highly compressible data.
>
> Any idea what could cause those random slow downs (and how to fix it ;-) )?
>
> Thanks a lot!
>
> Thierry
>

-- 
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/1573cfc1-fdd2-4394-ba3f-6e2e74a646a2n%40googlegroups.com.


Re: [bareos-users] Help with retention

2023-11-06 Thread Bruno Friedmann (bruno-at-bareos)
And if you already have volumes, don't forget to update them all.

bconsole -> update -> volume parameter -> 
13: All Volumes from Pool
or
14: All Volumes from all Pools

On Friday, 3 November 2023 at 08:49:26 UTC+1 Miguel Santos wrote:

> Forgot to change Max Volume Jobs, I will set it to 1, not 100.
>
> On Friday, November 3, 2023 at 8:47:53 AM UTC+1 Miguel Santos wrote:
>
>> This is what I would do.
>>
>> Job {
>>   Name = "lpsoar01_job_D"
>>   JobDefs = "DailyJobDefs"
>>   FileSet = "lpsoar01_fileset"
>>   Schedule = "CustomCycle"
>>
>> }
>>
>> JobDefs {
>>   Name = "DailyJobDefs"
>>   Type = Backup
>>   Level = Full
>>   Client = bareos-fd
>>   Schedule = "DailyFullCycle"
>>   Storage = File
>>   Messages = Standard
>>   Pool = DailyFullCyclePool
>>   Priority = 10
>>   Write Bootstrap = "/var/lib/bareos/%c.bsr"
>>   Full Backup Pool = DailyFullCyclePool    # write Full Backups into "Full-Pool" Pool
>>   Differential Backup Pool = Differential  # write Diff Backups into "Differential" Pool
>>   Incremental Backup Pool = Incremental    # write Incr Backups into "Incremental" Pool
>> }
>>
>> Pool {
>>   Name = Daily
>>   Pool Type = Backup
>>   Recycle = yes
>>   AutoPrune = yes
>>   Volume Retention = 7 days
>>   Maximum Volume Jobs = 100
>>   Label Format = Daily-
>>   Maximum Volumes = 7
>> }
>>
>>
>> Pool {
>>   Name = Weekly
>>   Pool Type = Backup
>>   Recycle = yes
>>   AutoPrune = yes
>>   Volume Retention = 31 days
>>   Maximum Volume Jobs = 100
>>   Label Format = Weekly-
>>   Maximum Volumes = 5
>> }
>>
>> Pool {
>>   Name = Monthly
>>   Pool Type = Backup
>>   Recycle = yes
>>   AutoPrune = yes
>>   Volume Retention = 181 days
>>   Maximum Volume Jobs = 100
>>   Label Format = Monthly-
>>   Maximum Volumes = 6
>> }
>>
>> Schedule {
>>   Name = CustomCycle
>>   Run = Level=Full Pool=Daily 1st sun at 22:00
>>   Run = Level=Full Pool=Weekly 2nd-5th sun at 22:00
>>   Run = Level=Full Pool=Monthly mon-sat at 22:00
>> }
>>
>>
>> Basically you create 3 pools and a custom schedule that will take care 
>> of writing to the specific pool you need. I have not tested this 
>> configuration, so use it at your own discretion.
>>
>> Good luck.
>>
>> On Thursday, November 2, 2023 at 2:43:20 PM UTC+1 Yariv Hazan wrote:
>>
>>> Thank you!
>>>
>>> To simplify (for me, at least), please look at the daily cycle only.
>>>
>>> Per the configuration below I do 6 backups per week.
>>>
>>> My retention is at the pool level (Volume Retention = 7 days), right? At 
>>> what level should it be? Or how should I define it at the pool level, 
>>> since I'm apparently not doing it correctly now?
>>>
>>> Thanks,
>>>
>>> Yariv
>>>
>>>  
>>>
>>> On Sunday, October 29, 2023 at 1:46:16 PM UTC+2 Miguel Santos wrote:
>>>
 You need 3 different pools to do this.

 One that keeps your:
 - 6 daily backups for a week
 - 4 weekly backups for a month
 - 6 monthly backups for 6 months.

 From there you can decide to either:
 * create different jobs to write to different pools (simpler)
 * make a migration/copy of the data to the pool (a bit more complicated)

 It will not work with your current configuration because the retention 
 works at the pool level.

 Good luck.

 On Sun, Oct 29, 2023 at 11:25 AM Yariv Hazan  wrote:

> Hello,
> My retention is pretty simple(?) I have only full backups and I need 
> to keep backups for
> Last 6 daily backups for a week
> Last 4 weekly backups for a month
> Last 6 monthly backups for 6 months.
>
> But:
> 1. All backups are kept for much longer without being pruned.
> 2. A daily backup volume is created every day, but older daily backup 
> volumes are used instead.
>
> I run version 22.1.1~pre26.eeec2501e without any changes to defaults.
>
> Here is an examples of my configuration:
>
> Job {
>   Name = "lpsoar01_job_D"
>   JobDefs = "DailyJobDefs"
>   FileSet = "lpsoar01_fileset"
>   Schedule = "DailyFullCycle"
> }
>
> JobDefs {
>   Name = "DailyJobDefs"
>   Type = Backup
>   Level = Full
>   Client = bareos-fd
>   Schedule = "DailyFullCycle"
>   Storage = File
>   Messages = Standard
>   Pool = DailyFullCyclePool
>   Priority = 10
>   Write Bootstrap = "/var/lib/bareos/%c.bsr"
>   Full Backup Pool = DailyFullCyclePool # write Full 
> Backups into "Full-Pool" Pool
>   Differential Backup Pool = Differential  # write Diff Backups into 
> "Differential" Pool
>   Incremental Backup Pool = Incremental# write Incr Backups into 
> "Incremental" Pool
> }
>
> Pool {
>   Name = DailyFullCyclePool
>   Pool Type = Backup
>   Recycle = yes
>   AutoPrune = yes
>   Volume Retention = 7 days
>   Maximum Volume Jobs = 100
>   

Re: [bareos-users] New PostgreSQL plugin search volunteers for testing

2023-09-18 Thread Bruno Friedmann (bruno-at-bareos)
Hello Dennis, 

Nice that you already tried the fresh PostgreSQL 16 meat. After checking 
the new documentation about pg_subtrans, we can see that upstream has added 
that directory to the list of those that shouldn't be saved:

```The contents of the directories pg_dynshmem/, pg_notify/, pg_serial/, 
pg_snapshots/, pg_stat_tmp/, and pg_subtrans/ (but not the directories 
themselves) can be omitted from the backup as they will be initialized on 
postmaster startup.```

We will apply that to the plugin by default before the release.

On Monday, 18 September 2023 at 15:26:24 UTC+2, Dennis Benndorf wrote:

> Hi Bruno,
>
> we tested your new plugin with a quite large database with a running 
> application using it.
> When doing a full backup we ran into:
>
> JobId 5116734: Fatal error: bareosfd: Traceback (most recent call last):
> File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 61, in 
> start_backup_file
> return bareos_fd_plugin_object.start_backup_file(savepkt)
> File "/usr/lib64/bareos/plugins/bareos-fd-postgresql.py", line 703, in 
> start_backup_file
> return super().start_backup_file(savepkt)
> File "/usr/lib64/bareos/plugins/BareosFdPluginLocalFilesBaseclass.py", 
> line 118, in start_backup_file
> mystatp.st_mode = statp.st_mode
> UnboundLocalError: local variable 'statp' referenced before assignment
>
> JobId 5116734: Error: python3-fd-mod: Could net get stat-info for file 
> /db/pgsql/pgsql-15/pg_subtrans/00BF: "[Errno 2] No such file or directory: 
> '/db/pgsql/pgsql-15/pg_subtrans/00BF'" 
>
> It seems that the plugin fails when it wants to back up a subtransaction 
> file that has been committed and removed from the directory in the meantime.
>
> Is it safe to exclude the pg_subtrans directory?
>
> With kind regards,
> Dennis
>
>  Original Message 
> *From*: Bruno Friedmann 
> *To*: bareos-users 
> *Subject*: [bareos-users] New PostgreSQL plugin search volunteers for 
> testing
> *Date*: 13.09.2023 09:46:08
>
> Hi community, users and customers,
>
> As we're moving towards Bareos 23, we could use some help testing the 
> all-new PostgreSQL plugin, which is needed because of changes introduced 
> in PostgreSQL 15.
>
> Main features compared to the old one:
> - uses non-exclusive backup mode
> - supports all clusters starting from version 10
> - supports clusters with tablespaces
> - supports non-standard ports
> - supports a password parameter
> - better debugging
> - renamed parameters and variables
> - renewed documentation
>
> We're looking for a few early adopters to work with directly. If you'd 
> like to participate, please contact https://bareos.com/contact with a 
> small description of your environment, so that we have multiple 
> environments as test scenarios.
>
> For all others who'd like to test outside the early adopters program, the 
> testing packages are now available at 
> https://download.bareos.org/experimental/PR-1541/
> The corresponding documentation is also available on 
>
> https://download.bareos.org/experimental/PR-1541/BareosMainReference/TasksAndConcepts/Plugins.html#postgresql-plugin
>
> The ongoing development PR is available here 
> https://github.com/bareos/bareos/pull/1541
>
> We would really appreciate comments, remarks, and tests. 
> Please use GitHub comments if you want to participate and help.
>
> The new plugin aims to replace the old one, and will be delivered in the 
> same package, 
> bareos-filedaemon-postgresql-python-plugin.
> Notice: the old (deprecated) plugin will still be delivered in 23, but we 
> recommend (not sure yet if we will enforce this) using it only to restore 
> previous backups made with it, until you migrate to the new one.
>
> Thanks and all the best
>



Re: [bareos-users] New PostgreSQL plugin search volunteers for testing

2023-09-18 Thread Bruno Friedmann (bruno-at-bareos)
Would you mind running the job with debug level 150 plus timestamps, and 
attaching the job log?

I was trying to fix that case by just emitting a warning when the database 
has removed or changed a file during the backup.
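
For illustration, the UnboundLocalError in the traceback (statp never 
assigned because os.stat() raised for the vanished file) can be handled 
along these lines. This is a hedged sketch, not the plugin's actual code; 
safe_stat is an invented name:

```python
import os


def safe_stat(path):
    """Stat a file, tolerating PostgreSQL removing it mid-backup.

    Returns the os.stat_result, or None if the file vanished between the
    directory scan and the stat() call, so the caller can log a warning
    and skip the file instead of aborting the whole job.
    """
    try:
        return os.stat(path)
    except FileNotFoundError:
        # A busy cluster can recycle pg_subtrans segments and base
        # relation files at any moment during the backup.
        return None
```

With such a guard, a vanished pg_subtrans/00BF would yield a warning and a 
skipped file rather than a fatal job error.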

On Monday, 18 September 2023 at 17:02:23 UTC+2, Dennis Benndorf wrote:

> Hi Bruno,
>
> it's me once again. The next run, with the pg_subtrans dir excluded, 
> fails on the base directory:
>
> JobId 5116772: Fatal error: bareosfd: Traceback (most recent call last):
>
> File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 61, in 
> start_backup_file
> return bareos_fd_plugin_object.start_backup_file(savepkt)
> File "/usr/lib64/bareos/plugins/bareos-fd-postgresql.py", line 703, in 
> start_backup_file
> return super().start_backup_file(savepkt)
> File "/usr/lib64/bareos/plugins/BareosFdPluginLocalFilesBaseclass.py", 
> line 118, in start_backup_file
> mystatp.st_mode = statp.st_mode
> UnboundLocalError: local variable 'statp' referenced before assignment
>
> JobId 5116772: Error: python3-fd-mod: Could net get stat-info for file 
> /db/pgsql/pgsql-15/base/16386/2117113: "[Errno 2] No such file or 
> directory: '/db/pgsql/pgsql-15/base/16386/2117113'"
>
> It is an active database. Any thoughts on this?
>
> With kind regards,
> Dennis
>
>  Original Message 
> *From*: Bruno Friedmann (bruno-at-bareos) 
> *To*: bareos-users 
> *Subject*: Re: [bareos-users] New PostgreSQL plugin search volunteers for 
> testing
> *Date*: 18.09.2023 16:04:42
>
> Sorry, in fact it is in the 15 docs too.
>
> On Monday, 18 September 2023 at 15:59:34 UTC+2, Bruno Friedmann 
> (bruno-at-bareos) wrote:
>
> Hello Dennis, 
>
> Nice that you already tried the fresh PostgreSQL 16 meat. After checking 
> the new documentation about pg_subtrans, we can see that upstream has 
> added that directory to the list of those that shouldn't be saved:
>
> ```The contents of the directories pg_dynshmem/, pg_notify/, pg_serial/, 
> pg_snapshots/, pg_stat_tmp/, and pg_subtrans/ (but not the directories 
> themselves) can be omitted from the backup as they will be initialized on 
> postmaster startup.```
>
> We will apply that to the plugin by default before the release.
>
> On Monday, 18 September 2023 at 15:26:24 UTC+2, Dennis Benndorf wrote:
>
> Hi Bruno,
>
> we tested your new plugin with a quite large database with a running 
> application using it.
> When doing a full backup we ran into:
>
> JobId 5116734: Fatal error: bareosfd: Traceback (most recent call last):
> File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 61, in 
> start_backup_file
> return bareos_fd_plugin_object.start_backup_file(savepkt)
> File "/usr/lib64/bareos/plugins/bareos-fd-postgresql.py", line 703, in 
> start_backup_file
> return super().start_backup_file(savepkt)
> File "/usr/lib64/bareos/plugins/BareosFdPluginLocalFilesBaseclass.py", 
> line 118, in start_backup_file
> mystatp.st_mode = statp.st_mode
> UnboundLocalError: local variable 'statp' referenced before assignment
>
> JobId 5116734: Error: python3-fd-mod: Could net get stat-info for file 
> /db/pgsql/pgsql-15/pg_subtrans/00BF: "[Errno 2] No such file or directory: 
> '/db/pgsql/pgsql-15/pg_subtrans/00BF'" 
>
> It seems that the plugin fails when it wants to back up a subtransaction 
> file that has been committed and removed from the directory in the meantime.
>
> Is it safe to exclude the pg_subtrans directory?
>
> With kind regards,
> Dennis
>
>  Original Message 
> *From*: Bruno Friedmann 
> *To*: bareos-users 
> *Subject*: [bareos-users] New PostgreSQL plugin search volunteers for 
> testing
> *Date*: 13.09.2023 09:46:08
>
> Hi community, users and customers,
>
> As we're moving towards Bareos 23, we could use some help testing the 
> all-new PostgreSQL plugin, which is needed because of changes introduced 
> in PostgreSQL 15.
>
> Main features compared to the old one:
> - uses non-exclusive backup mode
> - supports all clusters starting from version 10
> - supports clusters with tablespaces
> - supports non-standard ports
> - supports a password parameter
> - better debugging
> - renamed parameters and variables
> - renewed documentation
>
> We're looking for a few early adopters to work with directly. If you'd 
> like to participate, please contact https://bareos.com/contact with a 
> small description of your environment, so that we have multiple 
> environments as test scenarios.
>
> For all others who'd like to test outside the early adopters program, the 
> testing packages are now available
> at this address https://download.bareos.org/experimental/PR-1541/
> The correspondi

Re: [bareos-users] New PostgreSQL plugin search volunteers for testing

2023-09-18 Thread Bruno Friedmann (bruno-at-bareos)
Sorry, in fact it is in the 15 docs too.

On Monday, 18 September 2023 at 15:59:34 UTC+2, Bruno Friedmann 
(bruno-at-bareos) wrote:

> Hello Dennis, 
>
> Nice that you already tried the fresh PostgreSQL 16 meat. After checking 
> the new documentation about pg_subtrans, we can see that upstream has 
> added that directory to the list of those that shouldn't be saved:
>
> ```The contents of the directories pg_dynshmem/, pg_notify/, pg_serial/, 
> pg_snapshots/, pg_stat_tmp/, and pg_subtrans/ (but not the directories 
> themselves) can be omitted from the backup as they will be initialized on 
> postmaster startup.```
>
> Will apply that to the plugin by default before release.
>
> On Monday, 18 September 2023 at 15:26:24 UTC+2, Dennis Benndorf wrote:
>
>> Hi Bruno,
>>
>> we tested your new plugin with a quite large database with a running 
>> application using it.
>> When doing a full backup we ran into:
>>
>> JobId 5116734: Fatal error: bareosfd: Traceback (most recent call last):
>> File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 61, in 
>> start_backup_file
>> return bareos_fd_plugin_object.start_backup_file(savepkt)
>> File "/usr/lib64/bareos/plugins/bareos-fd-postgresql.py", line 703, in 
>> start_backup_file
>> return super().start_backup_file(savepkt)
>> File "/usr/lib64/bareos/plugins/BareosFdPluginLocalFilesBaseclass.py", 
>> line 118, in start_backup_file
>> mystatp.st_mode = statp.st_mode
>> UnboundLocalError: local variable 'statp' referenced before assignment
>>
>> JobId 5116734: Error: python3-fd-mod: Could net get stat-info for file 
>> /db/pgsql/pgsql-15/pg_subtrans/00BF: "[Errno 2] No such file or directory: 
>> '/db/pgsql/pgsql-15/pg_subtrans/00BF'" 
>>
>> It seems that the plugin fails when it wants to back up a subtransaction 
>> file that has been committed and removed from the directory in the meantime.
>>
>> Is it safe to exclude the pg_subtrans directory?
>>
>> With kind regards,
>> Dennis
>>
>>  Original Message 
>> *From*: Bruno Friedmann 
>> *To*: bareos-users 
>> *Subject*: [bareos-users] New PostgreSQL plugin search volunteers for 
>> testing
>> *Date*: 13.09.2023 09:46:08
>>
>> Hi community, users and customers,
>>
>> As we're moving towards Bareos 23, we could use some help testing the 
>> all-new PostgreSQL plugin, which is needed because of changes introduced 
>> in PostgreSQL 15.
>>
>> Main features compared to the old one:
>> - uses non-exclusive backup mode
>> - supports all clusters starting from version 10
>> - supports clusters with tablespaces
>> - supports non-standard ports
>> - supports a password parameter
>> - better debugging
>> - renamed parameters and variables
>> - renewed documentation
>>
>> We're looking for a few early adopters to work with directly. If you'd 
>> like to participate, please contact https://bareos.com/contact with a 
>> small description of your environment, so that we have multiple 
>> environments as test scenarios.
>>
>> For all others who'd like to test outside the early adopters program, the 
>> testing packages are now available
>> at this address https://download.bareos.org/experimental/PR-1541/
>> The corresponding documentation is also available on 
>>
>> https://download.bareos.org/experimental/PR-1541/BareosMainReference/TasksAndConcepts/Plugins.html#postgresql-plugin
>>
>> The ongoing development PR is available here 
>> https://github.com/bareos/bareos/pull/1541
>>
>> We would really appreciate comments, remarks, and tests. 
>> Please use GitHub comments if you want to participate and help.
>>
>> The new plugin aims to replace the old one, and will be delivered in the 
>> same package, 
>> bareos-filedaemon-postgresql-python-plugin.
>> Notice: the old (deprecated) plugin will still be delivered in 23, but 
>> we recommend (not sure yet if we will enforce this) using it only to 
>> restore previous backups made with it, until you migrate to the new one.
>>
>> Thanks and all the best
>>
>



[bareos-users] Re: Always Incremental example?

2023-09-20 Thread Bruno Friedmann (bruno-at-bareos)
Hello Giorgio, while our documentation isn't yet as perfect as it could be 
(PRs are welcome at any time :-) ), it will certainly be more up to date 
than a 5-year-old thread.
If you're looking for inspiration for your configuration, you may want to 
have a look at how the system tests work: 
https://github.com/bareos/bareos/tree/master/systemtests/tests/always-incremental-consolidate
Have a look at the test itself (testrunner); the associated configuration 
will maybe create that "Aha!" moment.
Regards.
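
For orientation, the job-level directives the scheme revolves around look 
roughly like this (a sketch based on the documentation, not a tested 
configuration; the values are examples):

```
Job {
  Name = "ai-example"
  ...
  Accurate = yes                             # required for Always Incremental
  Always Incremental = yes
  Always Incremental Job Retention = 7 days  # incrementals younger than this stay
  Always Incremental Keep Number = 7         # never consolidate below this many
  Always Incremental Max Full Age = 14 days  # fold the full in once it is older
}
```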
On Friday, 15 September 2023 at 12:33:00 UTC+2, Giorgio Bartoccioni 
wrote:

> Hi, 
> I'm revisiting this old post because I've now started using Always 
> Incremental. 
> I followed your setup, but I can't get the concurrent jobs to run. 
> I have FileStorageCons1 mounted and in use for the first backup of a (very 
> large) client, and FileStorageCons 2, 3, 4 and 5 not open.
> But I also have all the other clients waiting for incrementals. What can 
> I check?
>
>
> On Wednesday, 31 January 2018 at 21:50:06 UTC+1, Dan wrote:
>
>> On Thursday, November 17, 2016 at 11:59:16 AM UTC-5, darkadept wrote:
>> > I'm new to bareos and I'm trying to implement the "Always Incremental" 
>> backup strategy.
>> > 
>>
>> > My assumption is that this configuration will:
>>
>>
>> > * Do a full backup initially
>> > * Do incremental backups every weekday at 21:00
>> > * Consolidate (every weekday) the incrementals older than 7 days
>> > * Always keep at least 7 incrementals
>> > * Consolidate job only consolidates with the Full backup every 14 days, 
>> otherwise it consolidates just the incrementals (not sure if this is the 
>> correct assumption of how "Always Incremental Max Full Age" works.)
>> > * Only consolidates 1 per consolidate job (if multiple clients).
>> > * Creates a virtual full backup on the 1st saturday of every month.
>>
>> --
>> You are mostly correct in your configuration. The documentation almost 
>> gets you to a working AI implementation, but there are a few gotchas that 
>> have to be worked through. It's hard to keep my answer clear by marking 
>> up each of your conf files, so I'll post a configuration that I've had in 
>> production for a while as a complete working example. I'll use the 
>> notations BOF and EOF for beginning of file and end of file, respectively. 
>> Lines containing those notations should not be included in your conf files.
>>
>> This is a LONG POST. Bear with me.
>>
>> The below configuration will ...
>> * Do incremental backups at 20:00 every day. By definition the 1st will 
>> be a full backup.
>> * Consolidate at 21:00 every 4th day starting on the 3rd day of the month.
>> * Only consolidate incrementals older than 7 days
>> * Keep at least 7 incrementals
>> * Consolidate with the Full backup every 14 days. (I don't consolidate 
>> every day, so my full gets picked up by the first consolidate job on or 
>> after 14 days.)
>>
>>
>> * Only consolidates 1 per consolidate job (if multiple clients).
>>
>> * Creates a virtual full backup at noon on the 1st day of the month.
>>
>> -
>> On my Storage Daemon I have the following:
>>
>> STORAGE:
>> BOF - storage/bareos-sd.conf
>> Storage {
>> Name = bareos-sd
>> Maximum Concurrent Jobs = 5
>> Device Reserve By Media Type = yes # This entry is REQUIRED for 
>> successful consolidation. See FileStorageCons.conf. 
>> }
>> EOF
>>
>> DEVICES:
>> BOF - device/FileStorage.conf
>> Device {
>> Name = FileStorage
>> Media Type = File
>> Archive Device = /bareosFiles/BareosBackups/storage
>>
>>
>> LabelMedia = yes
>> Random Access = yes
>> AutomaticMount = yes
>> RemovableMedia = no
>> AlwaysOpen = no
>>
>> Description = "File device of for general use."
>> Maximum Concurrent Jobs = 5
>> }
>> EOF
>>
>> BOF - device/FileStorageCons.conf
>> Device {
>> Name = FileStorageCons1 # Multiple devices, different name and same media 
>> type, are configured to support simultaneous use.
>> Media Type = FileCons
>> Archive Device = /bareosFiles/BareosBackups/storage
>>
>>
>> LabelMedia = yes
>> Random Access = yes
>> AutomaticMount = yes
>> RemovableMedia = no
>> AlwaysOpen = no
>> }
>>
>> Device {
>> Name = FileStorageCons2
>> Media Type = FileCons
>> Archive Device = /bareosFiles/BareosBackups/storage
>>
>>
>> LabelMedia = yes
>> Random Access = yes
>> AutomaticMount = yes
>> RemovableMedia = no
>> AlwaysOpen = no
>> }
>>
>> Device {
>> Name = FileStorageCons3
>> Media Type = FileCons
>> Archive Device = /bareosFiles/BareosBackups/storage
>>
>>
>> LabelMedia = yes
>> Random Access = yes
>> AutomaticMount = yes
>> RemovableMedia = no
>> AlwaysOpen = no
>> }
>>
>> Device {
>> Name = FileStorageCons4
>> Media Type = FileCons
>> Archive Device = /bareosFiles/BareosBackups/storage
>>
>>
>> LabelMedia = yes
>> Random Access = yes
>> AutomaticMount = yes
>> RemovableMedia = no
>> AlwaysOpen = no
>> }
>>
>> Device {
>> Name = FileStorageCons5
>> Media Type = FileCons

[bareos-users] Re: Issues with getting bareos to see the slots in autochanger

2023-10-02 Thread Bruno Friedmann (bruno-at-bareos)
Usually the bareos sd user is also a member of the "tape" and "disk" 
groups, which greatly simplifies access to that kind of device.
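
If sudo is kept in the Changer Command instead, the rule would look 
roughly like this (a sketch; adjust the user and script path to your 
installation):

```
# /etc/sudoers.d/bareos
bareos ALL=(root) NOPASSWD: /usr/lib/bareos/scripts/mtx-changer
```

Alternatively, usermod -aG tape,disk bareos plus correct device 
permissions usually removes the need for sudo entirely.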

On Friday, 29 September 2023 at 21:30:03 UTC+2, Philip Dalrymple wrote:

> OK, THAT WAS IT.
>
> A line needs to be added to the /etc/sudoers file to allow the user (or 
> group) bareos to run the mtx-changer script.
>
>
>
> On Friday, September 29, 2023 at 3:25:12 PM UTC-4 Philip Dalrymple wrote:
>
>> I enabled logging on the mtx-changer script; if I call it from the 
>> command line I see entries in the log, but not when Bareos runs it. 
>> Wait, I may need to add the user bareos to the sudoers.
>>
>> Let me try that
>>
>> On Friday, September 29, 2023 at 3:20:31 PM UTC-4 Philip Dalrymple wrote:
>>
>>> I used Bareos at my last company and just installed a Qualstar Q40. 
>>>
>>> The mtx-changer command works as I expect, but the status slots storage 
>>> command does not 
>>> (see below) 
>>>
>>> Any ideas on troubleshooting?
>>>
>>> my autochanger record is 
>>>
>>> Autochanger {
>>>   Name = QSTAR
>>>   Changer Device = /dev/sch0
>>>   Device = LTO8
>>>   Changer Command = "/usr/bin/sudo /usr/lib/bareos/scripts/mtx-changer 
>>> %c %o %S %a %d"
>>>
>>> }
>>>
>>> my device record is
>>>
>>> Device {
>>>   Name = LTO8
>>>   Media Type = LTO8
>>>   Device Type = Tape
>>>   Archive Device = /dev/st0
>>>   Label Media = no;   # yes for disk
>>>   Random Access = No;
>>>   Automatic Mount = yes;   # when device opened, read it
>>>   Removable Media = yes;
>>>   Always Open = yes;
>>>   Autochanger = yes;
>>>   Maximum Concurrent Jobs = 1;
>>>   Maximum Block Size = 1048576; 
>>> }
>>>
>>>
>>> -mtx-changer 
>>> ➜  bareos-config git:(main) sudo /usr/lib/bareos/scripts/mtx-changer 
>>> /dev/sch0 slots  
>>> 40
>>> ➜  bareos-config git:(main) sudo /usr/lib/bareos/scripts/mtx-changer 
>>> /dev/sch0 listall
>>> D:0:E
>>> S:1:F:1E0001L8
>>> S:2:F:1E0002L8
>>> S:3:F:1E0003L8
>>> S:4:F:1E0004L8
>>> S:5:F:1E0005L8
>>> S:6:F:1E0006L8
>>> S:7:F:1E0007L8
>>> S:8:F:1E0008L8
>>> S:9:F:1E0009L8
>>> S:10:F:1E0010L8
>>> S:11:F:1E0011L8
>>> S:12:F:1E0012L8
>>> S:13:F:1E0013L8
>>> S:14:F:1E0014L8
>>> S:15:F:1E0015L8
>>> S:16:F:1E0016L8
>>> S:17:F:1E0017L8
>>> S:18:F:1E0018L8
>>> S:19:F:1E0019L8
>>> S:20:F:1E0020L8
>>> S:21:F:CLN001L1
>>> S:22:F:CLN002L1
>>> S:23:E
>>> S:24:E
>>> S:25:E
>>> S:26:E
>>> S:27:E
>>> S:28:E
>>> S:29:E
>>> S:30:E
>>> S:31:E
>>> S:32:E
>>> S:33:E
>>> S:34:E
>>> S:35:E
>>> I:36:E
>>> I:37:E
>>> I:38:E
>>> I:39:E
>>> I:40:E
>>>
>>>  bconsol 
>>>
>>> status slots storage
>>> Automatically selected Storage: bstore-eds1-sd
>>> Automatically selected Catalog: MyCatalog
>>> Using Catalog "MyCatalog"
>>> Connecting to Storage daemon bstore-eds1-sd at bstore-eds1.rhsys.co:9103 
>>> 
>>>  
>>> ...
>>> 3306 Issuing autochanger "slots" command.
>>> Device "LTO8" has 0 slots.
>>> No slots in changer to scan.
>>> *
>>>
>>



[bareos-users] Re: bareos-sd device count usage

2023-09-27 Thread Bruno Friedmann (bruno-at-bareos)
The trick is to use a virtual autochanger for that kind of device, so only 
the changer is declared on the director.

If you want some inspiration, have a look at the following PR, which 
proposes to introduce this: 
https://github.com/bareos/bareos/pull/1467
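
A sketch of what that can look like on the storage daemon side (the names 
and the exact naming of the generated devices are assumptions; check the 
PR and your Bareos version):

```
Device {
  Name = MultiFileStorage
  Media Type = File
  Device Type = File
  Archive Device = /var/lib/bareos/storage
  Count = 4                      # generates four devices from this template
  Maximum Concurrent Jobs = 1
}

Autochanger {
  Name = virtual-changer         # the director only references this one name
  Device = MultiFileStorage
  Changer Device = /dev/null
  Changer Command = ""
}
```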


On Tuesday, 26 September 2023 at 15:25:48 UTC+2, wizh...@gmail.com wrote:

> I see the sd now has a "Count" parameter that looks handy for multiple 
> devices.
>
> In the director I see no similar way to specify these devices, except 
> listing them out manually as in the example:
>
> https://docs.bareos.org/TasksAndConcepts/VolumeManagement.html#example-use-four-storage-devices-pointing-to-the-same-directory
>
> It would be nice if the director config had a similar directive, so you 
> don't have to list all the auto-generated devices.
>
> Perhaps I am missing something and it already does?



[bareos-users] Re: Restore catalog from bootstrap

2023-09-27 Thread Bruno Friedmann (bruno-at-bareos)
Thank you for uncovering a part of the documentation that needs to be 
rewritten.
Oleg's how-to is quite right: using bextract with the bsr is the easiest 
way to get your catalog back, and you may also want to extract the 
previous configuration.
Please don't be shy and propose a PR to update the documentation.
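
A sketch of the bextract route (the volume name, device name, and dump 
path here are placeholders for your setup):

```
# Extract the catalog dump straight from the volume, without a catalog:
bextract -b bareos-dir.bsr -V Catalog-0001 FileStorage /tmp/catalog-restore

# Then re-create the database and load the dump, for example:
#   psql bareos < /tmp/catalog-restore/var/lib/bareos/bareos.sql
```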

On Tuesday, 26 September 2023 at 08:18:10 UTC+2, Oleg Volkov wrote:

> http://www.voleg.info/bareos-disaster-recovery.html
> Quite old, but still works in principle.
>
> On Friday, September 22, 2023 at 11:20:10 PM UTC+3 wizh...@gmail.com 
> wrote:
>
>> I have followed the documentation and cannot get past the restore command 
>> requiring a jobid
>>
>> The docs state
>>
>> " After re-initializing the database, you should be able to run Bareos. 
>> If you now try to use the restore command, it will not work because the 
>> database will be empty. However, you can manually run a restore job and 
>> specify your bootstrap file. You do so by entering the run command in the 
>> console and selecting the restore job. If you are using the default Bareos 
>> Director configuration, this Job will be named RestoreFiles. Most likely it 
>> will prompt you with something such as:"
>>
>> So I run in bconsole
>>
>> run
>>
>> then select 14 for RestoreFiles
>>
>> It then requires a jobid to continue, but there are none yet in the 
>> database as I am trying to get the catalog using a bootstrap file.
>>
>> What am I missing?
>>
>>
>>
>>
>>
>>



[bareos-users] Re: bareos-sd device count usage

2023-09-27 Thread Bruno Friedmann (bruno-at-bareos)
Well, not really. I've implemented that with 4 devices in my autochanger, 
which means I can run 4 concurrent write jobs, or 2 read / 2 write for 
copy/AI jobs, etc., with a single storage definition on my director.
Of course, that means I will write 4 distinct volumes at the same time, 
not 4 jobs on one volume.
On Wednesday, September 27, 2023 at 14:49:28 UTC+2, wizh...@gmail.com wrote:

> I saw the autochanger, but that does not allow for parallel jobs correct?
>
> Currently to have parallel jobs you have to define multiple devices and 
> specify them all in both the sd and dir.
>
> using count simplifies the sd config, but the director still needs each 
> device listed.
>
> On Wednesday, September 27, 2023 at 4:55:39 a.m. UTC-4 Bruno Friedmann 
> (bruno-at-bareos) wrote:
>
>> The trick is to use a virtual autochanger for those kind of device, so 
>> only the changer is declared on the director.
>>
>> If you want to get inspired, you can have a look at the following PR 
>> which will propose to introduce this 
>> https://github.com/bareos/bareos/pull/1467
>>
>>
>> On Tuesday, September 26, 2023 at 15:25:48 UTC+2, wizh...@gmail.com wrote:
>>
>>> I see the sd now has a "Count" parameter that looks handy for multiple 
>>> devices.
>>>
>>> in the director I see no similar way to specify these devices except to 
>>> list out manually like in the example
>>>
>>>
>>> https://docs.bareos.org/TasksAndConcepts/VolumeManagement.html#example-use-four-storage-devices-pointing-to-the-same-directory
>>>
>>> Would be nice if the sd config a a similar directive to not have to list 
>>> all the auto generated devices.
>>>
>>> Perhaps I am missing something and it already does?
>>
>>



[bareos-users] Re: Changelog for upgraded packages

2023-09-27 Thread Bruno Friedmann (bruno-at-bareos)
The link proposed takes you to the master documentation (see the little 
drop-down list at the bottom left); there you can choose the version 22 
documentation:
https://docs.bareos.org/bareos-22/Appendix/ReleaseNotes.html

There you have the unreleased (so 22.1.1~pre) changes.

On Wednesday, September 27, 2023 at 09:17:03 UTC+2, Silvio Schloeffel wrote:

> Hi,
>
> since last week I can see some connection problems to newer systems.
> Sometimes the backup is working, sometimes not, and the error is an SSL 
> error.
>
> "bareos-sd JobId 8245: Fatal error: Connect failure: 
> ERR=error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake 
> failure"
>
> This happens only to servers with the latest packages.
> First idea was to check the changelogs.
>
> We use rpm based systems and normally i would do a "rpm -q --changelog 
> rpm_name.rpm" :
>
> If I do this with the bareos rpms i get this:
>
> [root@repo ~]# rpm -q --changelog bareos-filedaemon
> * Tue Sep 26 2023 Bareos Jenkins  - 
> 22.1.1~pre119.1cb7ef4a1-96
> - See https://docs.bareos.org/release-notes/
>
> -> The Release Notes are not up to date because they are for the paid 
> releases. If I follow the link to the GitHub branch, I do not know where I 
> have to look for the changelog info. I see a lot of branches, with 
> different pull requests.
> My first thought was that 1cb7ef4a1 from the rpm name is a GitHub pull 
> request number, but I did not find it. My second try was to look into the 
> actions, but I can see a lot of macOS-based actions and no rpm builds.
>
> Can someone give me a hint how to get these infos?
>
> Best
>
> Silvio
>
>



[bareos-users] Re: Two auto-changers sharing jobs

2023-10-05 Thread Bruno Friedmann (bruno-at-bareos)
Maybe your sentence is not clear enough: *"The same media is readable and 
writeable by either drive from either library."*
To me it sounds like the two autochangers are seen as one big library on 
the system, with 2 tape drives and a lot of slots.

If they are not, then you will have to play with media types, e.g. LTO-8a 
and LTO-8b, to make them unique (per storage/mediatype), and as such Bareos 
will not get confused.
Of course, that also means you will always have to export and re-import 
used tapes into the same library.
One piece of advice in that case: a dedicated scratch pool per library may 
be useful to avoid mixing them.
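
A sketch of the "one media type per library" idea (resource names and 
device paths are placeholders; the omitted Address/Password details stay as 
in your existing setup):

```
# bareos-sd: each library's drive carries its own Media Type.
Device {
  Name = Drive-1
  Media Type = LTO-8a
  Archive Device = /dev/nst0
  Autochanger = yes
}
Device {
  Name = Drive-2
  Media Type = LTO-8b
  Archive Device = /dev/nst1
  Autochanger = yes
}

# bareos-dir: the matching Storage resources use the same types, so a
# volume labeled LTO-8a can never be requested in the second library.
Storage {
  Name = Tape-1
  Device = Changer-1
  Media Type = LTO-8a
  # Address, Password, Autochanger = yes as in your existing setup
}
Storage {
  Name = Tape-2
  Device = Changer-2
  Media Type = LTO-8b
  # Address, Password, Autochanger = yes as in your existing setup
}
```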

On Wednesday, October 4, 2023 at 23:50:28 UTC+2, Chris Boot wrote:

> Hi all,
>
> I have a system with two Quantum SuperLoader 3 auto-changers, both with 
> an LTO-8 drive each. The same media is readable and writeable by either 
> drive from either library. Bareos has been working with one library and 
> drive for some time, but we now want to configure the 2nd library.
>
> The intention is to have jobs able to write to whatever tapes are 
> available in either library; basically so that two jobs can write at the 
> same time in order to double throughput. How can we achieve that?
>
> I've configured the SD with two Autochanger and two Device resources, 
> and the director with two Storage resources that refer to each 
> autochanger. This allows commands such as "list slots storage=Tape-1", 
> "update slots storage=Tape-2", and so on.
>
> What's not working is starting a job with storage=Tape-1 vs 
> storage=Tape-2. What seems to happen is Bareos picks a tape from the 
> pool that's available in either changer, mounts it in whatever drive can 
> access it even if it's the other drive/changer to the one requested. 
> When it's in the other drive/changer, the storage on the volume is 
> updated to the storage on the job, and things start to fall apart.
>
> For example, I started a job with "run job=MyJob storage=Tape-1". Bareos 
> selected a tape in the Tape-2 Storage/Autochanger/Device and mounted it 
> correctly in the wrong Device, then updated the volume record to change 
> its storage from "Tape-2" to "Tape-1" even though it's actually writing 
> to the tape via "Tape-2".
>
> Running "update slots storage=Tape-2" fixes this, but only for a while; 
> Bareos soon forces it back to the wrong storage.
>
> So as far as I can tell the SD is configured correctly and doing what 
> the Director is telling it to do, but the Director doesn't seem to be 
> able to tell the two sets of devices apart.
>
> Is there some way around this issue? Can I achieve what I'm trying to do?
>
> Thanks,
> Chris
>
> -- 
> Chris Boot
> bo...@boo.tc
>



Re: [bareos-users] New PostgreSQL plugin search volunteers for testing

2023-10-10 Thread Bruno Friedmann (bruno-at-bareos)
Dear community, it has been a long time since we were able to release a new 
set of binaries to test the new bareos-fd-postgresql plugin.
The long wait is now over, and you can get the updated binaries directly 
from the experimental project:
https://download.bareos.org/experimental/PR-1541/

For those who have already installed and tested it, you should be able to 
simply update your test system with your operating system's package 
command, like dnf, apt, zypper, etc.

The plugin is now able to back up any PostgreSQL cluster from version 10 to 
16, and it will no longer complain if a file or directory changes during 
the backup (as these changes are recorded in the WAL and saved at the end).

We improved our system tests to cover clusters with and without activity, 
using tablespaces, etc.

We had to rename some configuration parameters to make the whole more 
coherent; check the documentation or the code if you need to adapt.

Reminder: the whole new documentation chapter about the plugin is available 
at 
https://download.bareos.org/experimental/PR-1541/BareosMainReference/TasksAndConcepts/Plugins.html#postgresql-plugin

Thanks for any test you can do in your own environment. Please report 
failure/success here on the mailing list, and any code improvements in the 
GitHub comments here:

https://github.com/bareos/bareos/pull/1541/

Happy testing.

On Tuesday, September 19, 2023 at 13:53:51 UTC+2, bareos-users wrote:

> Thanks Dennis, this will help with the fixes; the new LSN has to be 
> reformatted, and I will create the fix for that.
>
> In the meantime, I'm working on improving our systemtests to create load 
> and activity in the cluster during the backup, to be able to reproduce 
> and fix all those cases.
>
> Stay tuned, I will post when the patches are in.
>
> On Tuesday, September 19, 2023 at 11:45:24 UTC+2, Dennis Benndorf wrote:
>
>> Hi Bruno,
>>
>> I think we found the cause:
>> In the function start_backup_job there is an if-statement
>> if self.switch_wal and current_lsn > self.lastLSN:
>>
>> We added some additional information to the else-clause to come closer to 
>> that problem:
>> else:
>>     # Nothing has changed since last backup - only send ROP this time
>>     bareosfd.JobMessage(
>>         bareosfd.M_INFO,
>>         f"Same LSN {current_lsn} as last time - nothing to do; Switch_wal: {self.switch_wal} LastLSN: {self.lastLSN} \n",
>>     )
>>     return bareosfd.bRC_OK
>>
>> And found out that current_lsn and lastLSN are different in their format 
>> which leads the plugin to choose the else condition.
>>
>> root@bareos:~# python3
>> Python 3.8.10 (default, May 26 2023, 14:05:08)  
>> [GCC 9.4.0] on linux 
>> Type "help", "copyright", "credits" or "license" for more information. 
>> >>> "0135/B5371B08" > "135/A42EF7F0" 
>> False 
>> >>> "135/B5371B08" > "135/A42EF7F0" 
>> True 
>> >>> 
>>
>> So the fix should be to remove leading zeros or to fill it with leading 
>> zeros on the other var.
>>
>> With kind regards,
>> Dennis
>>
>>
>>
>>
>>  Original Message 
>> *From*: Dennis Benndorf 
>> *To*: Bruno Friedmann (bruno-at-bareos) , 
>> bareos-users 
>> *Subject*: Re: [bareos-users] New PostgreSQL plugin search volunteers 
>> for testing
>> *Date*: 19.09.2023 10:33:33
>>
>> Hi Bruno,
>>
>> we increased the debug level, but the load on the database decreased, so 
>> it ran successfully.
>> Will come back to this when there is more load and the problem happens 
>> again. 
>>
>> Another thing I found is that Increments seem to not work as I would 
>> expect them:
>>
>> The Full which ended at 2023-09-19 09:32:39 contains:
>> /backupserver/postgresdumps/wal_archive/
>> /backupserver/postgresdumps/wal_archive/0001013500A1
>> /backupserver/postgresdumps/wal_archive/0001013500A2
>>
>> /backupserver/postgresdumps/wal_archive/0001013500A2.0028.backup
>> /backupserver/postgresdumps/wal_archive/0001013500A3
>> /backupserver/postgresdumps/wal_archive/0001013500A4
>>
>> In those archive dir at 10:08 a new file 0001013500A5 was 
>> created.
>> But the increment at 10:18:41, after that timestamp, does not include the 
>> file:
>> *list files jobid=5118974
>> *
>>
>> The log says:
>> JobId 5118974: python3-fd-mod: Same LSN 0
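
The string-versus-numeric comparison pitfall Dennis describes can be 
sketched as follows (illustrative only; `lsn_to_int` is a hypothetical 
helper, not the plugin's actual code):

```python
def lsn_to_int(lsn: str) -> int:
    """Convert a PostgreSQL LSN like '135/A42EF7F0' to an integer.

    An LSN is two hex fields separated by '/': the high and low 32 bits
    of a 64-bit WAL position. Comparing the integers is width-independent,
    unlike comparing the raw strings.
    """
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)


# Lexicographic comparison gives the wrong answer when one side has a
# leading zero ('0' sorts before '1'):
assert ("0135/B5371B08" > "135/A42EF7F0") is False
# Numeric comparison is correct regardless of leading zeros:
assert lsn_to_int("0135/B5371B08") > lsn_to_int("135/A42EF7F0")
```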

Re: [bareos-users] Re: Two auto-changers sharing jobs

2023-10-05 Thread Bruno Friedmann (bruno-at-bareos)
If you read the documentation carefully, I fear you can't achieve what you 
expect!

You will have to have the red-pill and blue-pill autochangers and make one 
media type per unit, and then of course reload tapes into the right 
autochanger (except blank or expired ones, which can be removed and loaded 
into the other one).

On Thursday, October 5, 2023 at 13:41:33 UTC+2, Chris Boot wrote:

> Hi Bruno,
>
> The two libraries are independent units; Autochanger-1 can't load tapes 
> into Tape-1 and vice versa. But all the tapes are LTO-8 and can be 
> loaded in either library, and written/read by either tape drive. We 
> expect to be able to load spare tapes into either library, and have 
> Bareos be able to pick any free tape to write to.
>
> I really want to avoid having different pools or media types as we 
> really don't want to have to put tapes back in the same library they 
> came from.
>
> Is there any way I can achieve that?
>
> Best regards,
> Chris
>
> On 05/10/2023 09:41, Bruno Friedmann (bruno-at-bareos) wrote:
> > Maybe your sentence is not enough clear: /"The same media is readable 
> > and writeable by either drive from either library."/
> > To me it sound like the two autochanger are seen as one big library on 
> > the system to 2 tapes drives and a lot of slots.
> > 
> > It they are not, then you will have to play with media type so have 
> > LTO-8a LTO-8b to make them unique (storage/mediatype) and as such Bareos 
> > will not get confused.
> > Of course that mean also you will have always export and reimport into 
> > the same library the used tapes.
> > Advise in that case, maybe a dedicated scratch pool is useful to not mix 
> > them.
> > 
> > On Wednesday, October 4, 2023 at 23:50:28 UTC+2, Chris Boot wrote:
> > 
> > Hi all,
> > 
> > I have a system with two Quantum SuperLoader 3 auto-changers, both with
> > an LTO-8 drive each. The same media is readable and writeable by either
> > drive from either library. Bareos has been working with one library and
> > drive for some time, but we now want to configure the 2nd library.
> > 
> > The intention is to have jobs able to write to whatever tapes are
> > available in either library; basically so that two jobs can write at
> > the
> > same time in order to double throughput. How can we achieve that?
> > 
> > I've configured the SD with two Autochanger and two Device resources,
> > and the director with two Storage resources that refer to each
> > autochanger. This allows commands such as "list slots storage=Tape-1",
> > "update slots storage=Tape-2", and so on.
> > 
> > What's not working is starting a job with storage=Tape-1 vs
> > storage=Tape-2. What seems to happen is Bareos picks a tape from the
> > pool that's available in either changer, mounts it in whatever drive
> > can
> > access it even if it's the other drive/changer to the one requested.
> > When it's in the other drive/changer, the storage on the volume is
> > updated to the storage on the job, and things start to fall apart.
> > 
> > For example, I started a job with "run job=MyJob storage=Tape-1".
> > Bareos
> > selected a tape in the Tape-2 Storage/Autochanger/Device and mounted it
> > correctly in the wrong Device, then updated the volume record to change
> > its storage from "Tape-2" to "Tape-1" even though it's actually writing
> > to the tape via "Tape-2".
> > 
> > Running "update slots storage=Tape-2" fixes this, but only for a while;
> > Bareos soon forces it back to the wrong storage.
> > 
> > So as far as I can tell the SD is configured correctly and doing what
> > the Director is telling it to do, but the Director doesn't seem to be
> > able to tell the two sets of devices apart.
> > 
> > Is there some way around this issue? Can I achieve what I'm trying
> > to do?
> > 
> > Thanks,
> > Chris
> > 
> > -- 
> > Chris Boot
> > bo...@boo.tc
> > 
>
> -- 
> Chris Boot
> bo...@boo.tc
>
>



[bareos-users] Re: Authorization problem in Webui with Pam authentication

2024-02-08 Thread Bruno Friedmann (bruno-at-bareos)
This is certainly due to an upgrade without having run:
su postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges

something described in the fine manual ;-)
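
As a sketch of the usual post-upgrade sequence (assuming a PostgreSQL 
catalog and the default package script locations; service names may differ 
on your distribution):

```
systemctl stop bareos-dir
su postgres -c /usr/lib/bareos/scripts/update_bareos_tables
su postgres -c /usr/lib/bareos/scripts/grant_bareos_privileges
systemctl start bareos-dir
```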
On Thursday, February 8, 2024 at 11:53:16 UTC+1, Thomas Messner wrote:

> When I go to Director -> Subscription, I get the following error message:
>
> Detailed backup unit report:
> Query failed: ERROR: Authorization denied for view backup_unit_overview
>
> My question now is, which ACL authorization must be allowed for this to 
> work?
>
> Is there a complete overview of all ACLs?
>
> Detailed backup unit report:
> Query failed: ERROR: Permission denied for view backup_unit_overview
>
> Yours sincerely,
> Thomas
>



[bareos-users] Re: bareos web ui with https

2024-02-15 Thread Bruno Friedmann (bruno-at-bareos)
Hello Markus,

I hope you noticed in Bareos documentation, that connection between php-fpm 
(webui) and the director had to be unencrypted (due to missing tls-psk 
support in upstream php-curl module).

But to have https and as such tls connection between your browser and the 
webui you will have to find how to activate ssl/tls module for your 
favorite httpd server on your distribution.
Globally it means activate the encryption module on the webserver and 
install a certificate.

Hope this will drive you to a solution.
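
As an illustration only (assuming Apache httpd; the server name, certificate 
paths and include file are placeholders, so adapt them to your 
distribution's layout):

```
# Hypothetical Apache vhost serving the Bareos webui over HTTPS.
<VirtualHost *:443>
    ServerName backup.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/backup.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/backup.example.com.key
    # The stock webui snippet (alias + php-fpm handler) stays unchanged;
    # its location depends on how your distribution packaged the webui.
    Include conf-available/bareos-webui.conf
</VirtualHost>
```

On Debian-style systems this typically also needs `a2enmod ssl` and a 
reload of the web server; for nginx or other servers the equivalent TLS 
directives apply.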
On Tuesday, February 13, 2024 at 14:28:00 UTC+1, Markus Dubois wrote:

> Hi,
>
> i've enabled TLS between dir-sd-client. I've also configured the web-ui 
> according to the docs (directors.ini)
> tls between sd-dir-client is working,
> but i've been struggling to get the web ui response to https, it is still 
> http
> any hints? 
>
> best regards
>
>



[bareos-users] Re: First Full Backup is not consolidated

2024-02-20 Thread Bruno Friedmann (bruno-at-bareos)
Hard to say without any configuration sample, but you will certainly have 
to have a look at
https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html#always-incremental-max-full-age
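
For reference, the directives involved typically look like this (a sketch 
only; the job name and durations are placeholders, and the right values 
depend on your retention needs):

```
Job {
  Name = "ai-client1"
  # Client, FileSet, Storage, Pool, Messages as usual ...
  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 7 days
  Always Incremental Keep Number = 7
  # Without this, the initial full can age forever; setting it makes the
  # Consolidate job include the full once it is older than the given age:
  Always Incremental Max Full Age = 30 days
}
```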

On Sunday, February 11, 2024 at 18:25:14 UTC+1, Mike wrote:

> Hi,
>
> I'm using Bareos 23 set to always incremental. The issue I'm encountering 
> is that the first backup (Full backup), does not consolidate over time, 
> whereas all subsequent incremental backups and their consolidations are 
> properly consolidated together as expected.
>
> Have any of you encountered this issue, and if so, do you know which part 
> of the configuration might change this?
>
> Thank you.
>



[bareos-users] Re: Trigger copy job just after a backup job

2024-02-21 Thread Bruno Friedmann (bruno-at-bareos)
Hello Łukasz,

just a big warning: *don't do that! :-)*

The reason is that while you are in the post-run-script state, the initial 
job is still not finished from Bareos' point of view, and changes still 
happen afterwards.
You really don't want to try hard to confuse your database and Bareos 
installation. Some have tried, and it badly backfired ;-)

Typically for that kind of thing, you may want to check whether you can run 
a copy job with a selection pattern:
https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_SelectionPattern

This should be modular enough to cover all your cases.
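
A sketch of such a copy job (resource names are placeholders; directives 
not relevant to the selection are abbreviated in comments):

```
Job {
  Name = "copy-to-offsite"
  Type = Copy
  # Copies every job in the source pool that has not been copied yet;
  # run it on a schedule shortly after your backup window.
  Selection Type = PoolUncopiedJobs
  Pool = Full                 # source pool; its "Next Pool" is the destination
  Schedule = "after-backups"
  # Client, FileSet, Messages, etc. as required by your setup
}
```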

On Wednesday 21 February 2024 at 11:54:41 UTC+1 Łukasz Szczepanik wrote:

> Hi Guys,
>
>
> Do you know how to trigger copy job just after corresponding main backup 
> finishes with success ?
>
> I look into Bareos doc regarding copy/migration but I could not find any 
> solution. All "Selection Types" do not met my requirements.
> My goal is simple make a copy of backup just after the backup finishes 
> with success.
>
> Thanks!
>   
>
>
>



[bareos-users] Re: bareos-sd dummy device help

2023-12-19 Thread Bruno Friedmann (bruno-at-bareos)
Did you also install the bareos-storage-fifo package?
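
If not, installing it should make the fifo backend loadable (package name 
taken from the error below; pick the line matching your distribution):

```
apt install bareos-storage-fifo      # Debian/Ubuntu
dnf install bareos-storage-fifo      # RHEL/Fedora
zypper install bareos-storage-fifo   # SUSE

# then re-run the configuration check:
bareos-sd -f -v -t
```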

On Monday, December 18, 2023 at 11:33:28 UTC+1, emme wrote:

> good morning,
> I tried this but it seems not to be working; am I doing something wrong?
>
> (
> https://docs.bareos.org/Appendix/Howtos.html#use-a-dummy-device-to-test-the-backup
> )
>
> cat /etc/bareos/bareos-sd.d/device/DummyStorage.conf
> "
> Device {
>   Name = NULL
>   Media Type = NULL
>   Device Type = Fifo
>   Archive Device = /dev/null
>   LabelMedia = yes
>   Random Access = no
>   AutomaticMount = no
>   RemovableMedia = no
>   MaximumOpenWait = 60
>   AlwaysOpen = no
> }
> "
> OUTPUT:
> user@bareos-srv:~$ sudo bareos-sd -fvt
> bareos-sd ERROR TERMINATION
> stored/stored_conf.cc:567 Could not load storage backend fifo for device 
> NULL.
>
> thank you.
>



[bareos-users] Re: Adding / Installing a windows client - v23 vs documentation

2023-12-29 Thread Bruno Friedmann (bruno-at-bareos)
Maybe this will just create the "oooh" moment.

On the director side you have client resources defined in
/etc/bareos/bareos-dir.d/client/client2-fd.conf
Each of them contains a passphrase that you then report to the client's
/etc/bareos/bareos-fd.d/director/director-dir.conf

So I can confirm that starting with no knowledge and no idea how the 
product works can be a challenge, but once it is set up it usually works 
like a charm ;-)
We often see that buying consulting and training at the beginning can boost 
the time to production.
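
To make the pairing concrete, here is a hypothetical matching pair (names, 
address and password are placeholders):

```
# On the director, /etc/bareos/bareos-dir.d/client/client2-fd.conf:
Client {
  Name = client2-fd
  Address = client2.example.com
  Password = "ChangeMeStrongSecret"
}

# On the Windows client,
# %ProgramData%/bareos/bareos-fd.d/director/bareos-dir.conf:
Director {
  Name = bareos-dir
  Password = "ChangeMeStrongSecret"   # must match the Client password above
}
```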

On Friday, December 29, 2023 at 05:12:06 UTC+1, Jason Chard wrote:

> Hey Team,
> I'm new to Bareos, and the documentation either doesn't align or my brain 
> needs to bend more to see how the design team was thinking.
> I've been trying for over a week to add a client. the folder structures on 
> windows are not intuitive and seems to have director and client in multiple 
> folders so it is hard to work out what is what and where is the actual 
> configuration held.
>
> I've got a windows client installed, but i don't know how to collect the 
> secret to add it to the director.
> Additionally , the documentation to add a client seems to be misaligned 
> with v23?
>
> https://docs.bareos.org/IntroductionAndTutorial/Tutorial.html#id25
> 'add a client'
>
> [image: AddClient.jpg]
>
> Is there some sort of structure where i can see, oh the client or fd 
> config is here, i just need to copy that and insert it into the director 
> like this. Simplified?
> Being new to Bareos with no handover seems like an impossibility as the 
> documentation is not for someone that doesn't already know, more for 
> someone that just needs a refresher and is already trained in the Bareos 
> ways.
>
> I haven't even got to the adding a S3 bucket or having multiple businesses 
> / devices yet. I don't know if it is suitable for this. Anyone that can 
> throw me a snippet would be greatly appreciated. Also, giving me a URL will 
> not help, i've been to them all and they are all not in the current 
> programing mindset that i have. Show me the Bareos Mind Bending ways.
>   
>
>



[bareos-users] Re: ARM64 Client

2023-12-21 Thread Bruno Friedmann (bruno-at-bareos)
There was a thread not so long ago here in the ML
see https://groups.google.com/u/1/g/bareos-users/c/rQ4i5oHoKGQ

On Wednesday, December 20, 2023 at 15:55:39 UTC+1, Chris Dos wrote:

> I'm running an ARM64 Orange Pi, and the 16.2 client was backing up fine 
> until I upgraded the server yesterday.  I'm seeing packet size too big 
> errors on the client:
> bareos-fd: bsock_tcp.c:570 Packet size too big from "client:
> 192.168.9.1:9102. Terminating connection.
>
> I've been looking for a newer ARM64 Bareos client package to no avail.  
> The version 19.2 client compiled by Karl Cunningham back on November 30, 
> 2020 is 32 bit.
>
> Are there any docs for compiling ARM64 Debian client?  I've been searching 
> and have not been successful.
>
> Thanks much.
>
> Chris
>
> --
> Chris Dos
> Senior Engineer
> Land Line: 303-688-9922  Mobile: 303-520-1821
>
>
>



Re: [bareos-users] LTO capacity management

2023-12-21 Thread Bruno Friedmann (bruno-at-bareos)
You will certainly gain more by spending a bit of time in the 
documentation than by trying to play with scripting :-)

Bareos has a parameter for most (if not all) possible cases:
https://docs.bareos.org/bareos-23/Configuration/Director.html#config-Dir_Job_RescheduleOnError

;-)

Alexander, if a tape is in error, the volume will be placed in the Error 
state. If the tape simply can't write more data, this is similar to end of 
tape, so Bareos will just switch to another tape.

If you fear that the job may not finish but could take a long time, you may 
be interested in the checkpoint feature, which allows you to use a job in 
error for restores.
https://docs.bareos.org/bareos-23/Appendix/Checkpoints.html#checkpoints
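
For completeness, the retry behaviour mentioned above is configured per 
job; a sketch (job name and values are placeholders):

```
Job {
  Name = "tape-backup"
  # usual Client/FileSet/Storage/Pool directives ...
  Reschedule On Error = yes
  Reschedule Interval = 30 minutes
  Reschedule Times = 2
}
```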

Regards
On Thursday, December 21, 2023 at 10:14:06 UTC+1, Go Away wrote:

> If the writing fails the backup job will end with either failed status or 
> "with errors". I'm not sure but I'd expect the former.
> You can of course always re-run any job manually but I don't recall any 
> mechanics like "if the job failed, re-try it immediately with another 
> volume". You could try to script it though using your favourite scripting 
> language.
>
> MK
>
> On Thu, 21 Dec 2023, 10:00 Alexander Horner,  
> wrote:
>
>> Hi there, thanks for the response,
>>
>> In the case that a volume fails whilst adding it, is it possible to retry 
>> a backup with a new volume?
>>
>> On Wednesday, December 20, 2023 at 9:24:38 PM UTC Spadajspadaj wrote:
>>
>>> As far as I remember, unless you explicitly specify a volume size (which 
>>> can be useful if you're using file-based volumes), Bareos will write to the 
>>> volume until it reaches the end. Then it will request another volume.
>>>
>>> Unless you hit an error - then I think the volume status will be set to 
>>> error.
>>> On 20.12.2023 20:46, Alexander Horner wrote:
>>>
>>> Good evening,
>>>
>>> I am looking at Bareos as it seems to be the only option that explicitly 
>>> mentions the use of LTO for incremental backups, so I can keep appending to 
>>> existing tapes with each backup. Is this indeed how Bareos will work?
>>>
>>> I am using LTO-5 tapes which have some age and use to them, and will not 
>>> necessarily reach the full 1.5TB raw capacity. Is Bareos capable of 
>>> handling these tapes and filling them as much as possible before requesting 
>>> an additional tape be added?
>>>
>>> Thanks
>>>
>



[bareos-users] Re: Adding / Installing a windows client - v23 vs documentation

2024-01-04 Thread Bruno Friedmann (bruno-at-bareos)
Hi Jason, it seems you're still confusing the director configuration on the 
FD with the client configuration on the director.

On Windows, /etc/bareos is replaced by %ProgramData%/bareos; the directory 
layout below it is identical.

As mentioned in the documentation, `configure add client` creates two 
resource configuration files:

   - /etc/bareos/bareos-dir.d/client/client2-fd.conf
   - /etc/bareos/bareos-dir-export/client/client2-fd/bareos-fd.d/director/bareos-dir.conf
     (assuming your director resource is named bareos-dir)

So you just have to transfer, from the director,
/etc/bareos/bareos-dir-export/client/client2-fd/bareos-fd.d/director/bareos-dir.conf
to
%ProgramData%/bareos/bareos-fd.d/director/bareos-dir.conf

I have no simpler way to express the how-to.
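The transfer step above can be sketched as follows. This is a minimal 
illustration only: the function name is mine, and the source path is the 
documentation's `client2-fd` example, assumed already copied off the 
director by whatever means you prefer (scp, a share, etc.).

```python
import pathlib
import shutil

def install_fd_director_resource(exported_conf: str, programdata: str) -> str:
    """Place the director-exported bareos-dir.conf into the Windows FD config tree.

    exported_conf: local copy of
      /etc/bareos/bareos-dir-export/client/client2-fd/bareos-fd.d/director/bareos-dir.conf
    programdata:   the Windows %ProgramData% directory.
    """
    dest_dir = pathlib.Path(programdata) / "bareos" / "bareos-fd.d" / "director"
    dest_dir.mkdir(parents=True, exist_ok=True)  # create the tree if missing
    dest = dest_dir / "bareos-dir.conf"
    shutil.copy(exported_conf, dest)
    return str(dest)
```

After copying, restart the filedaemon service on the Windows client so it 
picks up the new Director resource.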

On Tuesday, 2 January 2024 at 05:00:35 UTC+1, Jason Chard wrote:

> Hi Bruno and Jörg ,
>
> First, let me say thank you for taking the time to read and reply, and 
> happy new year!
> I have two results / questions.
> For Bruno: not quite the Aha moment. In the picture attached I have a 
> putty session to the director and a cmd window on the Windows PC. The 
> Windows PC does not align to any of those directories, but I tried to find 
> them.
>
> [image: win-dir.jpg]
>
> With Jörg: yes, I was using bconsole; I thought that was the purpose. 
> I dropped to a putty window and it did perform the command. Now I'm 
> guessing I need to grab a file and drop it onto the Windows client so that 
> they marry, but where and what?
>
> [image: win-dir-2.jpg]
>
> Also, I've just put in a plain-text password; should I convert the password 
> to MD5, or just type a more complicated password in this console?
>
> On Friday 29 December 2023 at 18:22:42 UTC+8 Jörg Steffens wrote:
>
>> On 29.12.23 at 05:12 wrote Jason Chard: 
>> > Hey Team, 
>> > I'm new to Bareos, and the documentation either doesn't align or my 
>> > brain needs to bend more to see how the design team was thinking. 
>> > I've been trying for over a week to add a client. the folder structures 
>> > on windows are not intuitive and seems to have director and client in 
>> > multiple folders so it is hard to work out what is what and where is 
>> the 
>> > actual configuration held. 
>> > 
>> > I've got a windows client installed, but i don't know how to collect 
>> the 
>> > secret to add it to the director. 
>> > Additionally , the documentation to add a client seems to be misaligned 
>> > with v23? 
>> > 
>> > https://docs.bareos.org/IntroductionAndTutorial/Tutorial.html#id25 
>> > 'add a client' 
>> > 
>> > AddClient.jpg 
>>
>> A look at this image first confused me, but later on I think I understood 
>> where the trouble comes from. 
>> You used the bconsole in the bareos-webui? 
>> The admin user in the webui does not have permission to run the 
>> configure command. You may have seen this in its profile ACL. 
>>
>> When the documentation speaks about bconsole, it means the command-line 
>> bconsole, running in a (Linux) shell. 
>> Please rerun the configure command there. 
>> Of course, it is also possible to run the configure command using the 
>> bareos-webui console interface by changing the ACLs. However, the webui 
>> console also has other limitations, so I recommend executing the 
>> bconsole command in a shell (as root, so it has full permissions). 
>>
>> Regards, 
>> Jörg 
>>
>> -- 
>> Jörg Steffens joerg.s...@bareos.com 
>> Bareos GmbH & Co. KG Phone: +49 221 630693-91 <+49%20221%2063069391> 
>> https://www.bareos.com Fax: +49 221 630693-10 <+49%20221%2063069310> 
>>
>> Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646 
>> Komplementär: Bareos Verwaltungs-GmbH 
>> Geschäftsführer: Stephan Dühr, Jörg Steffens, Philipp Storz 
>>
>>
>>



[bareos-users] Re: Error when listing jobs in bconsole: Fatal error: cats/sql_list.cc:593

2024-01-05 Thread Bruno Friedmann (bruno-at-bareos)
Hi Jens,

I guess you already verified that you didn't forget to update the database 
schema and the privileges, either with dbconfig or manually by running the 
update_bareos_tables and grant_bareos_privileges scripts?

```
Enter SQL query: select * from version;
+---+
| versionid |
+---+
| 2,230 |
+---+
```

Otherwise, I wonder what kind of output you are expecting from
list jobid= hours=24

Aren't you looking for "show me the jobs run in the last 24 hours",
which is
list jobs hours=24

For example, here with a version 23 install:

Using your command gives me the right result:
*list jobid=14970 hours=24
+---+-+-+-+--+--+---+--+-+---+
| jobid | name| client  | starttime   | duration | type | level 
| jobfiles | jobbytes| jobstatus |
+---+-+-+-+--+--+---+--+-+---+
| 14970 | catalog | yoda-fd | 2024-01-05 07:02:01 | 00:00:06 | B| F 
|  272 | 770,112,797 | T |
+---+-+-+-+--+--+---+--+-+---+

Show me the jobs for a specific jobname during the last 48 hours:

*list jobs jobname=catalog hours=48
+---+-+-+-+--+--+---+--+-+---+
| jobid | name| client  | starttime   | duration | type | level 
| jobfiles | jobbytes| jobstatus |
+---+-+-+-+--+--+---+--+-+---+
| 14961 | catalog | yoda-fd | 2024-01-04 07:02:02 | 00:00:08 | B| F 
|  272 | 767,798,259 | T |
| 14970 | catalog | yoda-fd | 2024-01-05 07:02:01 | 00:00:06 | B| F 
|  272 | 770,112,797 | T |
+---+-+-+-+--+--+---+--+-+---+

Meanwhile, asking for a specific jobid will always return none or one 
result:

*list jobid=14970 hours=48
+---+-+-+-+--+--+---+--+-+---+
| jobid | name| client  | starttime   | duration | type | level 
| jobfiles | jobbytes| jobstatus |
+---+-+-+-+--+--+---+--+-+---+
| 14970 | catalog | yoda-fd | 2024-01-05 07:02:01 | 00:00:06 | B| F 
|  272 | 770,112,797 | T |
+---+-+-+-+--+--+---+--+-+---+

Regards
On Wednesday, 3 January 2024 at 09:30:52 UTC+1, jens.gr...@gmail.com wrote:

> Hi guys,
>
> I'm using Version: 23.0.1~pre7.606b211eb (19 December 2023)
>
> The following command does not work after I upgraded my server to Debian 
> 12 and the database from postgres 13 to 15:
>
> *list jobid=36197 hours=24
> You have messages.
> *m
> 03-Jan 08:54 bareos-dir JobId 0: Fatal error: cats/sql_list.cc:593 
> cats/sql_list.cc:593 query SELECT DISTINCT Job.JobId,Job.Name, Client.Name 
> as Client, Job.StartTime, CASE WHEN Job.endtime IS NOT NULL AND Job.endtime 
> >= Job.starttime THEN Job.endtime - Job.starttime ELSE CURRENT_TIMESTAMP(0) 
> - Job.starttime END as Duration, 
> Job.Type,Job.Level,Job.JobFiles,Job.JobBytes,Job.JobStatus FROM Job LEFT 
> JOIN Client ON Client.ClientId=Job.ClientId LEFT JOIN JobMedia ON 
> JobMedia.JobId=Job.JobId LEFT JOIN Media ON JobMedia.MediaId=Media.MediaId 
> LEFT JOIN FileSet ON FileSet.FileSetId=Job.FileSetId WHERE Job.JobId > 0 
> AND Job.JobId=36197AND Job.SchedTime > '2024-01-02 08:54:37'  ORDER BY 
> StartTime;  failed:
> FEHLER:  Müll folgt auf numerische Konstante bei »36197A«
> ZEILE 1: ...d=Job.FileSetId WHERE Job.JobId > 0 AND Job.JobId=36197AND J...
> [ERROR:  trailing junk after numeric constant at »36197A«]
>
> Seems like the missing blank between the JobId and the `AND` in the 
> SQL-command is causing the trouble. Running the command without `hours=24` 
> works as expected:
>
> *list jobid=36197
>
> +---+---++-+--+--+---+--+--+---+
> | jobid | name  | client   
>   | starttime   | duration | type | level | jobfiles | 
> jobbytes | jobstatus |
>
> +---+---++-+--+--+---+--+--+---+
> | 36197 | backup-laptop--fd | laptop--fd | 2024-01-02 09:07:08 | 
> 00:00:02 | C| I |1 |   81 | T |
>
> +---+---++-+--+--+---+--+--+---+
>
> Does anyone know what could be the 
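The cause Jens points at above (two query fragments concatenated without a 
separating space, yielding `Job.JobId=36197AND ...`) can be sketched in a 
few lines. This is a hypothetical illustration of the bug pattern, not the 
actual cats/sql_list.cc code:

```python
def build_job_filter(jobid: int, sched_clause: str, fixed: bool = True) -> str:
    """Build the WHERE clause; the buggy variant fuses the fragments together."""
    base = f"WHERE Job.JobId > 0 AND Job.JobId={jobid}"
    if fixed:
        return base + " " + sched_clause   # separating space restored
    return base + sched_clause             # buggy: "...=36197AND Job.SchedTime..."

clause = "AND Job.SchedTime > '2024-01-02 08:54:37'"
print(build_job_filter(36197, clause, fixed=False))  # reproduces the broken SQL
print(build_job_filter(36197, clause))               # parses fine
```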

[bareos-users] Re: Error when listing jobs in bconsole: Fatal error: cats/sql_list.cc:593

2024-01-05 Thread Bruno Friedmann (bruno-at-bareos)
Hi, thanks for your report.

I've now been able to reproduce it in the master and 23 branches; we will 
certainly get more information and a fix from our beloved dev team soon.

On Friday, 5 January 2024 at 12:39:50 UTC+1, jens.gr...@gmail.com wrote:

> Hi Bruno,
>
> thanks for your reply.
>
> The db-scheme seems correct.
>
> bareos@blabla:~$ psql -U bareos bareos
> psql (15.5 (Debian 15.5-0+deb12u1))
> Geben Sie »help« für Hilfe ein.
>
> bareos=> \c
> Sie sind jetzt verbunden mit der Datenbank »bareos« als Benutzer »bareos«.
> bareos=> select * from version;
>  versionid
> ---
>   2230
> (1 Zeile)
>
> I've had some trouble when I upgraded from bareos v22 to v23 but that was 
> some time ago.
>
> The intention of the command `list jobid=36197 hours=24` is this: I want 
> to check whether the given job ran in the last 24 hours. If the result is 
> not empty I don't need a backup; if it is empty I do need a new one. I'm 
> calling my scripts every 30 minutes to back up remote clients incrementally 
> once every 24 hours and fully once every 7 days.
>
> But with your hints I think that I could do my check more correctly like 
> this:
>
> list jobs client=laptop--fd hours=24
>
> The script code is now a little clearer than it was before so that's 
> already a benefit from our conversation.
>
> But nevertheless, if I type in bconsole:
>
> * list jobid=36197 hours=48
>
> I get the error as mentioned above. With the solution the error doesn't 
> bother me but it seems a little strange to me.
>
> Greetings, Jens
>
>
>
>
> Bruno Friedmann (bruno-at-bareos) wrote on Friday, 5 January 2024 at 
> 11:41:11 UTC+1:
>
> [...]
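Jens's check ("only back up if nothing ran in the last 24 hours") can be 
scripted by parsing the output of `list jobs client=... hours=24`. A sketch 
under stated assumptions: the table line below is stubbed from the listings 
above, and treating jobstatus `T` (terminated OK) as "a good run happened" 
is my assumption, not a Bareos guarantee:

```python
def needs_backup(bconsole_output: str) -> bool:
    """True when no job with status 'T' (terminated OK) appears in the listing."""
    for line in bconsole_output.splitlines():
        cols = [c.strip() for c in line.strip().strip("|").split("|")]
        if cols and cols[-1] == "T":      # last column is jobstatus
            return False
    return True

sample = ("| 36197 | backup-laptop--fd | laptop--fd | 2024-01-02 09:07:08 "
          "| 00:00:02 | C | I | 1 | 81 | T |")
print(needs_backup(sample))  # False: a job ran recently, skip the backup
print(needs_backup(""))      # True: nothing ran, start one
```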

[bareos-users] Re: Bareos 23.0.1~pre7.606b211eb-33 error: Volume Header Id bad: Bacula 1.0 immortal

2024-01-10 Thread Bruno Friedmann (bruno-at-bareos)
Hi,

The `compatible` option had been deprecated for a few releases and was 
finally removed in 23.

It looks like you're recycling a very old volume here. The best advice 
would be to try a manual relabel (when the volume is in Purged status) and 
check whether the header is then rewritten correctly; otherwise retire the 
volume (purge it, then delete it, and remove it from the filesystem).

Regards.

On Tuesday, 9 January 2024 at 13:45:48 UTC+1, Heidi van Niekerk wrote:

> Hi
> Since the upgrade to bareos 23.0.1~pre7.606b211eb-33, we are receiving 
> this error when attempting to do some backups (Virtual Full, Incremental) 
> and restores:
>
> 04-Jan 08:22 bareos_server-sd JobId : Warning: stored/acquire.cc:325 
> Read acquire: Volume Header Id bad: Bacula 1.0 immortal
> 04-Jan 04:14 bareos_server-sd JobId : Please mount read Volume "FullVol" for:
> Job:  servername_BackupJob.2024-01-04_03.13.00_36
> Storage:  "dev0010_volume" (/backups/dev0010)
> Pool: servernameDefault
> Media type:   File
>
> We have established that this has to do with the ID and version number in 
> the header of the actual volume.  When checking the header with bls, the 
> volumes where we see the error show:
>
> Volume Label:
> Id: Bacula 1.0 immortal
> VerNo : 11
> VolName   : IncrVol
> PrevVolName   :
> VolFile   : 0
> LabelType : VOL_LABEL
> LabelSize : 214
> PoolName  : servername_incremental
> MediaType : File
> PoolType  : Backup
> HostName  : bareos_server
> Date label written: 31-Jul-2013 20:27
> 08-Jan 09:06 bls JobId 0: Releasing device "dev0001_volume" 
> (/backups/dev0001).
>
> Before 2016, we used Bacula, but migrated to bareos in that year.  The 
> above server was one of the servers that would have had backups on bacula 
> prior to changing to Bareos.  We can therefore only assume that that is the 
> reason why the Id mentions Bacula.
>
> Our newer servers, ones built after the migration to Bareos, have headers 
> similar to the one below - they don't have this issue:
>
> Volume Label:
> Id: Bareos 2.0 immortal
> VerNo : 20
> VolName   : FullVol
> PrevVolName   :
> VolFile   : 0
> LabelType : VOL_LABEL
> LabelSize : 218
> PoolName  : servername_full
> MediaType : File
> PoolType  : Backup
> HostName  : bareos_server
> Date label written: 25-Jun-2023 18:23
> 08-Jan 09:04 bls JobId 0: Releasing device "dev0001_volume" 
> (/backups/dev0001).
>
> We have looked at the changelogs for Bareos 23.0.1 pre7, but could not 
> find related information.  Has this version stopped support for volumes 
> with Id Bacula 1.0 immortal?
> If so, is there a safe way to change the Id and version number on the 
> volumes so that they are compatible with versions 23.0.1~pre7.606b211eb-33 
> and higher?
>
> We run Virtual Full backups every 2 weeks with daily Incrementals.
>
> Our host_template looks like this:
>
> FileSet {
>   Name = SHORTHOSTNAME_FileSet
>   Ignore File Set Changes = yes
>   Include  {
> Options {
>   compression=GZIP
>   signature=MD5
>   noatime=yes
> }
> INCLUDES
> Exclude Dir Containing = .backup_exclude
>   }
>   Exclude {
> EXCLUDES
>   }
> }
>
> Job {
>   Name = "SHORTHOSTNAME_BackupJob"
>   Type = Backup
>   Accurate = yes
>   Allow Duplicate Jobs = no
>   Cancel Lower Level Duplicates = yes
>   Cancel Running Duplicates = no
>   Client = SHORTHOSTNAME
>   FileSet= "SHORTHOSTNAME_FileSet"
>   Full Backup Pool = "SHORTHOSTNAME_full"
>   Incremental Backup Pool = "SHORTHOSTNAME_incremental"
>   Messages = Standard
>   Pool = SHORTHOSTNAMEDefault
>   Priority = 10
>   Allow Mixed Priority = yes
>   Schedule = SHORTHOSTNAME_Schedule
>   Storage = SHORTHOSTNAME_FStorage
>   ClientRunBeforeJob = /usr/local/bin/prebackup
>   ClientRunAfterJob = /usr/local/bin/postbackup
>   RunAfterJob = "/usr/local/bin/poller.rb %c"
>   RunScript {
> Command = "/usr/local/bin/full_backup_if_sane %c"
> FailJobOnError = yes
> RunsWhen   = After
> RunsOnClient   = no
> RunsOnFailure  = no
>  }
> }
>
> Job {
>   Name = "SHORTHOSTNAME_RestoreJob"
>   Type = Restore
>   Client = SHORTHOSTNAME
>   FileSet= "SHORTHOSTNAME_FileSet"
>   Messages = Standard
>   Pool = SHORTHOSTNAMEDefault
>   Priority = 1
>   Allow Mixed Priority = yes
>   Storage = SHORTHOSTNAME_FStorage
>   ClientRunBeforeJob = "/usr/local/bin/prerestore %n"
>   RunScript {
> Command = "/usr/local/bin/postrestore %n"
> RunsWhen = After
> RunsOnFailure = yes
> RunsOnClient  = yes
> RunsOnSuccess = yes  # default, you can drop this line
>   }
> }
>
> Client {
>   Name = SHORTHOSTNAME
>   Address = FQDN
>   Catalog = MyCatalog
>   Password = ""
>   Maximum Bandwidth Per Job = 9 Mb/s
> }
>
> Pool {
>   Name = "SHORTHOSTNAME_full"
>   Pool Type = Backup
>   Recycle = yes   # Bareos can automatically 

Re: [bareos-users] VSS ERR=The process cannot access the file because it is being used by another process

2024-01-16 Thread Bruno Friedmann (bruno-at-bareos)
Hello all, normally the issues described here should already have been 
fixed, and everything should work without any trouble with Bareos >= 23.

The code fixing the problem was submitted and merged some time ago with 
PR1452:
https://github.com/bareos/bareos/pull/1452

Please refresh your installation, and of course report successes and failures.

On Thursday, 11 January 2024 at 23:17:26 UTC+1, Spadajspadaj wrote:

> As I understand, this is the interesting excerpt from the debug trace.
>
> win10test-fd (50): findlib/find.cc:169-0 Verify= Accurate= 
> BaseJob= flags=<724185736>
> win10test-fd (50): findlib/find.cc:169-0 Verify= Accurate= 
> BaseJob= flags=<724185736>
> win10test-fd (450): findlib/find.cc:175-0 F C:/Users/test/NTUSER.DAT
> win10test-fd (500): compat/compat.cc:278-0 Enter convert_unix_to_win32_path
> win10test-fd (500): compat/compat.cc:322-0 path = 
> \\?\C:\Users\test\NTUSER.DAT
> win10test-fd (500): compat/compat.cc:328-0 Leave cvt_u_to_win32_path 
> path=\\?\C:\Users\test\NTUSER.DAT
> win10test-fd (500): compat/compat.cc:234-0 Win32ConvInitCache: Setup of 
> thread specific cache at address 1532b2b76d0
> win10test-fd (500): compat/compat.cc:531-0 Enter make_wchar_win32_path
> win10test-fd (500): compat/compat.cc:555-0 Leave 
> make_wchar_win32_path=\\?\C:\Users\test\NTUSER.DAT
> win10test-fd (500): compat/compat.cc:1609-0 sizino=8 ino=0 
> filename=C:/Users/test/NTUSER.DAT
> win10test-fd (300): findlib/find_one.cc:911-0 File : 
> C:/Users/test/NTUSER.DAT
> win10test-fd (130): filed/backup.cc:535-0 FT_REG saving: 
> C:/Users/test/NTUSER.DAT
> win10test-fd (130): filed/backup.cc:646-0 filed: sending 
> C:/Users/test/NTUSER.DAT to stored
> win10test-fd (150): lib/crypto_openssl.cc:641-0 crypto_digest_new 
> jcr=1532b228410
> win10test-fd (300): filed/backup.cc:1575-0 encode_and_send_attrs 
> fname=C:/Users/test/NTUSER.DAT
> win10test-fd (500): compat/compat.cc:278-0 Enter convert_unix_to_win32_path
> win10test-fd (500): compat/compat.cc:322-0 path = 
> \\?\C:\Users\test\NTUSER.DAT
> win10test-fd (500): compat/compat.cc:328-0 Leave cvt_u_to_win32_path 
> path=\\?\C:\Users\test\NTUSER.DAT
> win10test-fd (300): filed/backup.cc:1594-0 File C:/Users/test/NTUSER.DAT
> attribs=A A IH/ B A A CAi FAAA A A BlnTaz BlnTaz BlnTaz A A L
> attribsEx=CAi HaQkZzohoU HaQvQ8Q7yY HaQvQ8Q7yY A FAAA
> win10test-fd (300): filed/backup.cc:1620-0 >stored: attrhdr 1 5 
> 0win10test-fd (200): filed/backup.cc:1772-0 No strip for 
> C:/Users/test/NTUSER.DAT
> win10test-fd (300): filed/backup.cc:1715-0 >stored: attr len=130: 1 3 
> C:/Users/test/NTUSER.DAT
> win10test-fd (150): filed/backup.cc:737-0 type=3 do_read=1
> win10test-fd (100): findlib/bfile.cc:710-0 bopen: fname 
> C:/Users/test/NTUSER.DAT, flags 0010, mode , rdev 8226
> win10test-fd (50): findlib/bfile.cc:565-0 === NO plugin
> win10test-fd (100): findlib/bfile.cc:664-0 Read 
> CreateFileW=\\?\C:\Users\test\NTUSER.DAT
> win10test-fd (850): lib/message.cc:1216-0 Enter Jmsg type=8
> win10test-fd (850): lib/message.cc:613-0 Enter DispatchMessage type=8 
> msg=win10test-fd JobId 16959:  Cannot open "C:/Users/test/NTUSER.DAT": 
> ERR=The process cannot access the file because it is being used by another 
> process.
> .
> win10test-fd (850): lib/message.cc:820-0 DIRECTOR for following msg: 
> win10test-fd JobId 16959:  Cannot open "C:/Users/test/NTUSER.DAT": 
> ERR=The process cannot access the file because it is being used by another 
> process.
> .
> win10test-fd (400): findlib/find_one.cc:493-0 FT_REG FI=1 linked=0 
> file=C:/Users/test/NTUSER.DAT
>
> From what I understand from the description of the make_wchar_win32_path() 
> method in win32/compat/compat.cc, we should leave the function with the 
> filename properly converted to a VSS-based one. But apparently we're stuck 
> with the local name.
>
> MK
> On 11.01.2024 22:03, Spadajspadaj wrote:
>
> The more I dig into it, the more it seems it is bareos after all.
>
> Unfortunately, building Windows bareos-fd is no small feat so I cannot 
> directly debug it but.
>
> I ran a procmon against bareos-fd.exe and it seems that while the FD 
> process does create a VSS snapshot... it doesn't read from it. It reads the 
> files straight from the main device. I did a small test fileset consisting 
> of just my user's registry file. And both procmon's dump as well as 
> bareos-fd own debug trace shows that it's trying to read simply a 
> c:\users\test\ntuser.dat instead of properly going for 
> \\?\GLOBALROOT\Device\HardDiskVolumeShadowCopyXX\Users\test\ntuser.dat. 
> That would explain why the job even though it's supposed to use VSS, fails 
> on copying open files.
>
> When I created a very small example VS project just creating a VSS 
> snapshot and copying said file out of the VSS snapshot (using the proper 
> shadow copy volume path), it works OK.
>
> MK
> On 9.01.2024 15:11, spadaj...@gmail.com wrote:
>
> OK. There is more to this than just bareos.
> 1. I used the script from this thread and it 

Re: [bareos-users] VSS ERR=The process cannot access the file because it is being used by another process

2024-01-17 Thread Bruno Friedmann (bruno-at-bareos)
Hi Spadajspadaj,

When we examine what happens on a test machine here, we see it working 
correctly; in particular, the VSS path is present.

You may think we don't back up the snapshot because of the presence of 
*C:/Users/user/NTUSER.dat* in the log, but you need to understand that this 
is the resulting file name after its conversion in compat.cc:

compat/compat.cc:555-307 Leave 
make_wchar_win32_path=\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Users\user\NTUSER.dat
compat/compat.cc:1609-307 sizino=8 ino=0 filename=C:/Users/user/NTUSER.dat
desktop-vcf7e32-fd (300): findlib/find_one.cc:911-307 File : C:/Users/user/NTUSER.dat
desktop-vcf7e32-fd (130): filed/backup.cc:535-307 FT_REG saving: C:/Users/user/NTUSER.dat
desktop-vcf7e32-fd (130): filed/backup.cc:646-307 filed: sending C:/Users/user/NTUSER.dat to stored

So, to be sure we use the same code, could you please install and run the 
following bareos-fd:
https://download.bareos.org/current/windows/winbareos-23.0.1~pre57.8e89bfe0a-release-64-bit.exe

On your director, set the debug level to 500 for that client (changing the 
client name in the following example):

setdebug level=500 trace=1 timestamp=1 client=windows-fd

This will indicate where the trace log will be written (often directly on 
C:\).
Run one backup job, then disable the debugging.
To extract the joblog in text mode in bconsole:
@out /tmp/backup-win-fd.joblog
list joblog jobid=X
q

Then you can zip and attach both the trace and the joblog here.
Add the fileset used too.

BTW, please ensure your fileset contains
  Enable VSS = yes
as stated in the documentation and the Windows fileset example.
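The filename conversion described above (a drive-letter path rewritten to a 
GLOBALROOT shadow-copy path while the log still prints the original name) 
can be illustrated with a tiny sketch. This only mimics the observable 
effect of make_wchar_win32_path; the helper name and the snapshot device 
string are my own, not Bareos code:

```python
def to_shadow_copy_path(win_path: str, snapshot_device: str) -> str:
    """Map a drive-letter path onto its VSS snapshot device path.

    Illustration only; the real conversion happens in win32/compat/compat.cc.
    """
    _drive, _, rest = win_path.partition(":")  # "C", ":", "/Users/..."
    return rf"\\?\GLOBALROOT\Device\{snapshot_device}" + rest.replace("/", "\\")

print(to_shadow_copy_path("C:/Users/user/NTUSER.dat", "HarddiskVolumeShadowCopy1"))
```

The printed path matches the `make_wchar_win32_path` line in the trace above.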

On Tuesday, 16 January 2024 at 15:38:37 UTC+1, Spadajspadaj wrote:

I used the relatively new available client. (23-pre-something, downloaded 
around 2 weeks ago). Yes, I should have written that explicitly.
On 16.01.2024 15:23, Bruno Friedmann (bruno-at-bareos) wrote:

Hello, all normally the issues that are described here, should have been 
already fixed, and should work without any trouble with bareos >= 23 

The code fixing the problem has been submitted and merge a certain time ago 
with PR1452
https://github.com/bareos/bareos/pull/1452

Please refresh your installation, and of course report success and failures.

On Thursday, 11 January 2024 at 23:17:26 UTC+1, Spadajspadaj wrote:

[...]

Re: [bareos-users] VSS ERR=The process cannot access the file because it is being used by another process

2024-01-17 Thread Bruno Friedmann (bruno-at-bareos)

Please, may I ask you to provide the trace and job log as requested?
I can only say that we are running Windows 2019 to 2022 here with none of 
your troubles.

This kind of back-and-forth can be endless.

If we have the traces, joblog, fileset, etc., we may have a chance to 
reproduce the problem.
If it is reproducible, then there's a chance to have a fix.
If we can elaborate that fix, you may have a chance to solve your trouble.


On Wednesday, 17 January 2024 at 09:58:19 UTC+1, jo.go...@hosted-power.com 
wrote:

> I just tried 
> https://download.bareos.org/current/windows/winbareos-23.0.1~pre57.8e89bfe0a-release-64-bit.exe
> on Windows 2022, but I still have hundreds of failed in-use files :/
>
> On Wednesday 17 January 2024 at 09:49:10 UTC+1 Spadajspadaj wrote:
>
>> Hi Bruno.
>>
>> Yes, I understand how it's supposed to work. :-)
>>
>> OK, I will re-run the tests when I have a spare minute and dump the trace 
>> somewhere (the excerpt I posted earlier was with debug level 999 so all 
>> messages should have been captured).
>>
>> And it's not that I _thought_ it wasn't backed up. I know it wasn't 
>> backed up. It was throwing errors of being unable to access the file and if 
>> I wanted to restore the files I got a zero-length content. I know it should 
>> have been converted to the shadowcopy-based filename but wasn't.
>>
>> I'll check the latest package version first though.
>>
>> MK
>> On 17.01.2024 09:43, Bruno Friedmann (bruno-at-bareos) wrote:
>>
>> To add an illustration to the fact that Bareos works as documented and 
>> expected: we ran a backup job while the registry hive was open in regedit, 
>> and we tried to delete the file so you get the expected error that the 
>> file is in use.
>>
>> In the background you can see the job succeed in backing up that file 
>> without any error.
>> See the details about the status of VSS BackupComplete.
>>
>> [image: thumb-Screenshots_43.png]
>>
>> Le mercredi 17 janvier 2024 à 09:34:37 UTC+1, Bruno Friedmann 
>> (bruno-at-bareos) a écrit :
>>
>>> [...]

[bareos-users] Re: replicating bareos database and environment

2024-01-18 Thread Bruno Friedmann (bruno-at-bareos)
An unconventional setup, I must admit. I presume you know that any write to
the db will fail, and as such it is 100% failing.

A restore will need to create a temp table, so I'm not sure you are on the
right path.
You may want to debug this by starting your director with
bareos-dir -f -u bareos -g bareos -d250
and observing the error ;-)

It may be possible to set up a pooler in front of the database that would
redirect all writes to the main instance and reads to the replica; to be
tested.

BTW, you mention that remote browsing and the database are slow: this sounds
like a mis(un)configured database server. Quite often people keep the
PostgreSQL defaults, which allow only 128 MB of shared memory, forcing
PostgreSQL to do a lot of sorting, joins and temp operations on disk instead
of in memory. The rule of thumb here is to allow 1/4 of the total RAM, or to
have at least enough to keep the Bareos indexes in RAM.

psql
bareos=# show shared_buffers;
 shared_buffers
----------------
 8GB
(1 row)

Using the OLTP profile on https://pgtune.leopard.in.ua/ will normally give
you a quite reasonable configuration for your Bareos database cluster.

Also, if you use the webui, it is recommended to preload the bvfs_cache (see
the documentation), which also needs a correctly configured database.
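
As a rough illustration of the 1/4-of-RAM rule of thumb, here is a small
shell sketch (an assumption-laden example, not from the thread: it assumes a
Linux host with /proc/meminfo; the resulting value would then be applied with
`ALTER SYSTEM SET shared_buffers = '...'` and a server restart):

```shell
#!/bin/sh
# Sketch: suggest a shared_buffers value of ~1/4 of total RAM (Linux only).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
suggest_mb=$(( total_kb / 4 / 1024 ))
echo "suggested shared_buffers = ${suggest_mb}MB"
```

pgtune performs essentially the same computation, plus related settings such
as work_mem and effective_cache_size.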

Le mardi 16 janvier 2024 à 16:17:28 UTC+1, Markus Dubois a écrit :

> Hi,
>
> i was thinking about a solution to replicate a remote running bareos 
> environment to the local site.
> Cause was to restore faster, as browsing the remote director and database 
> is slow.
>
> So i've setup a postgres hot standby stream replica, a bareos director, a 
> web gui on the local site
>
> but i failed as soon as i tried to connect the bconsole on the local site 
> console of the (replicated) bareos-director.
>
> i got:
> Connecting to Director 127.0.0.1:9101
> Failed to connect to Director. Giving up.
>
> the bareos-director seems to run though
>
> PID TTY  STAT   TIME COMMAND
>   1 ?Ss 0:00 /usr/sbin/bareos-dir -u bareos -f
>  47 pts/0Ss+0:00 bash
>  57 pts/1Ss 0:00 bash
>  67 pts/1R+ 0:00 ps -ax
>
> Is this use case not possible or how could this be achieved?
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/8a2bca9f-a0b6-4991-8832-7ec84400f093n%40googlegroups.com.


[bareos-users] Re: error:1409E10F:SSL (backup failed)

2024-01-09 Thread Bruno Friedmann (bruno-at-bareos)
You certainly want to get rid of this out-of-date version: Bareos Version:
20.0.0-1 (16 December 2020).


Le mardi 9 janvier 2024 à 10:52:53 UTC+1, cwa...@gmail.com a écrit :

> In addition, there is also this error :
>
> 16-oct. 18:48 scribe-dir JobId 6: Fatal error: TLS read/write failure.: 
> ERR=error:1408F119:SSL routines:ssl3_get_record:decryption failed or bad 
> record mac
>
> To summarize, I get these errors :
> ERR=error:1408F119:SSL
> ERR=error:14094438:SSL
> ERR=error:1409E10F:SSL
>
> I've searched for these errors but I didn't find a solution.
>
>
>
> P.S. : I don't get back my own email sent to this mailing list, in 
> Thunderbird the thread is missing my messages. Is there an option 
> somewhere to get own messages like in Sympa ? (this time I went to 
> "Sent", selected my own email and clicked "Answer", I hope mailing list 
> will see the same message ID)
>
>
> Le 08/01/2024 à 16:11, cwa...@gmail.com a écrit :
> > Hello.
> > 
> > On several servers running :
> > * Ubuntu 20.04.6 LTS (focal)
> > * Bareos Version: 20.0.0-1 (16 December 2020)
> > 
> > I get following error. BTW these logs were generated with "-dt -d 99 -f" 
> > options, I didn't add "-v".
> > 
> > b-dir.log:20-déc.-2023 15:06:41.648828 scribe-dir (50): 
> > lib/crypto_openssl.cc:1602-20 jcr=7fd7980156e0 TLS read/write failure.: 
> > ERR=error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal 
> error
> > 
> > b-fd.log:20-déc.-2023 15:06:41.648813 scribe_fd (50): 
> > lib/crypto_openssl.cc:1602-0 jcr=7fdfd0016500 TLS read/write failure.: 
> > ERR=error:1409E10F:SSL routines:ssl3_write_bytes:bad length
> > 
> > b-sd.log:20-déc.-2023 15:06:42.605671 scribe-sd (50): 
> > stored/askdir.cc:103-20 getvolname error BnetRecv
> > b-sd.log:20-déc.-2023 15:06:42.605751 scribe-sd (50): 
> > stored/askdir.cc:335-20 Didn't get vol info vol=scribe-dir-inc-0109: 
> > ERR=Network error on BnetRecv in req_vol_info.
> > b-sd.log:20-déc.-2023 15:06:48.574337 scribe-sd (50): 
> > stored/job.cc:211-21 Auth=1 canceled=0 errstat=0
> > 
> > The configuration looks like this :
> > bareos-dir.conf https://pastebin.com/PHiqT5z3
> > bareos-sd.conf https://pastebin.com/HT1G2F49
> > bareos-fd.conf https://pastebin.com/pWtsNGX2
> > 
> > I get this error on servers that are not on the same location. Actually 
> > some of them are approximately at the other side of the planet from 
> others.
> > 
> > What is surprising is that this only seems to happen with incremental 
> > and differential backups. Full backups run fine.
> > 
> > What do you think about that ?
> > 
> > I can provide more informations if needed.
> > 
> > Thank you. Regards.
> >  Klaas
> > 
>
>



Re: [bareos-users] Re: Upgrade to Bareos 23 on FreeBSD Failed

2024-01-08 Thread Bruno Friedmann (bruno-at-bareos)
Hi, there are a number of pull requests concerning FreeBSD installation and
update improvements that are in the process of being merged.
They will bring the missing update script, which you can download directly
from the source code repository if needed:
https://github.com/bareos/bareos/blob/master/core/src/cats/ddl/updates/postgresql.2210_2230.sql

So the goods are coming ;-)
Le dimanche 7 janvier 2024 à 04:48:01 UTC+1, Meggie Hallenbach a écrit :

> Alexander Frankenhauser posted the same problem a few weeks ago 
> (17.12.2023)
>
> "Fatal error: Version error for database "bareos". Wanted 2230, got 2210"
>
> Peter Kropf schrieb am Samstag, 6. Januar 2024 um 00:09:17 UTC+1:
>
>> Digging a bit more into the packages, it looks like the
>> postgresql.2210_2230.sql file wasn't included sometime in the 22.1.x
>> release cycle. In my FreeBSD jail as root:
>>
>> cp postgresql.2210_2230.sql /usr/local/lib/bareos/scripts/ddl/updates/
>> su - postgres
>> psql bareos 
>> >
>> Once the database was upgraded to the correct version, my installation 
>> seems to be working again. I don't know if the original lua error is 
>> anything to be concerned about but I'm in the process of testing various 
>> workflows and will update this if I find any issues.
>>
>>
>> On Friday, January 5, 2024 at 1:29:25 PM UTC-8 Peter Kropf wrote:
>>
>>> I don't see an update_bareos_tables script on my system. I suspect that
>>> the Lua script that's throwing errors is preventing the database update
>>> from running. Unfortunately I'm not sure how to debug this.
>>>
>>>
>>>
>>>
>>> On Friday, January 5, 2024 at 11:33:59 AM UTC-8 aeron...@gmail.com 
>>> wrote:
>>>
 On my ubuntu system I had to run

 /usr/bareos/lib/scripts/update_bareos_tables

 Hope that helps
 On 1/5/24 2:18 PM, Peter Kropf wrote:

 In /var/log/bareos/bareos.log: 


 05-Jan 11:05 bareos-dir JobId 0: Fatal error: Version error for 
 database "bareos". Wanted 2230, got 2210
 05-Jan 11:05 bareos-dir JobId 0: Fatal error: Could not open Catalog 
 "MyCatalog", database "bareos".
 05-Jan 11:05 bareos-dir JobId 0: Fatal error: Version error for 
 database "bareos". Wanted 2230, got 2210
 05-Jan 11:05 bareos-dir ERROR TERMINATION
 Please correct the configuration in 
 /usr/local/etc/bareos/bareos-dir.d/*/*.conf


 Seems the upgrade of bareos-dir failed as it was expecting the database 
 to be version 2230 but it's at 2210. 

 Anyone know how to upgrade the database to 2230?


 On Friday, January 5, 2024 at 10:58:35 AM UTC-8 Peter Kropf wrote:

> I'm trying to upgrade my Bareos installation on a FreeBSD server from 
> version 22 to 23. After updating the repository via 
> add_bareos_repositories.sh from my subscription, I ran pkg upgrade. That 
> resulted in  
>
> [bare2] [9/9] Extracting bareos.com-director-23.0.0: 100%
> pkg: Failed to execute lua script: [string "-- args: 
> "etc/bareos/bareos-dir.d/fileset/Win..."]:3: ')' expected near 'etc'
> pkg: lua script failed
> pkg: Failed to execute lua script: [string "-- args: 
> "etc/bareos/bareos-dir.d/fileset/Win..."]:3: ')' expected near 'etc'
> pkg: lua script failed
>
> Anyone successfully upgraded from 22 to 23? Any insights into what 
> needs to happen to fix this?
>
> Full output of upgrade is attached.
>
> - Peter
>
> -- 





Re: [bareos-users] LTO capacity management

2023-12-21 Thread Bruno Friedmann (bruno-at-bareos)
As said previously, if the volume is defective, the job tries to use another
one. But in some cases the drive can be defective, become unresponsive or
whatever, and the job finishes with a job status of error.

BTW, Bareos knows where it was on tape ;-)

Le jeudi 21 décembre 2023 à 13:56:12 UTC+1, Alexander Horner a écrit :

>
> Thanks MK, Bruno,
>
> "Checkpoints also happen on volume changes. This means that when a volume 
> is full, or the backup has to switch to writing to another volume for some 
> other reason, a checkpoint is triggered saving what has been written to 
> that volume."
>
> This sounds like what I want to achieve, so I will want to enable this.
>
> "Based on the Checkpoints Feature, a resume of cancelled or broken 
> Backupjobs is conceivable in the future"
>
> A bit of a pain that one cannot yet resume a job which has failed from the 
> point at which the checkpoint was created... Presumably this means a full 
> new backup will need to be taken across all tapes. If this happened in the 
> middle of a tape I would understand, as trying to get back to the correct 
> position on the same tape could be a pain, but if it happens when adding a 
> new tape (such as the new tape being bad overall and failing immediately) 
> this could be a bit of a pain.
>
> Thanks
> On Thursday, December 21, 2023 at 10:26:53 AM UTC Spadajspadaj wrote:
>
>> Ha!
>>
>> Another set of eyes is always useful.
>>
>> Thanks for correcting my (false, this time) assumptions!
>>
>> MK
>> On 21.12.2023 10:41, Bruno Friedmann (bruno-at-bareos) wrote:
>>
>> You will certainly gain more from spending a bit of time in the
>> documentation than from trying to play with scripting :-)
>>
>> Bareos has parameters for most (if not all) possible cases:
>> https://docs.bareos.org/bareos-23/Configuration/Director.html#config-Dir_Job_RescheduleOnError
>>  
>>
>> ;-)
>>
>> Alexander, if a tape is in error, the volume will be placed in state
>> error. If the tape simply can't write more data, this is similar to end of
>> tape, so Bareos will just switch to another tape.
>>
>> If you fear that the job may not finish, but take a long time, you may
>> be interested in the checkpoint feature, which will allow you to use a
>> job in error for restores.
>> https://docs.bareos.org/bareos-23/Appendix/Checkpoints.html#checkpoints
>>
>> Regards
>> Le jeudi 21 décembre 2023 à 10:14:06 UTC+1, Go Away a écrit :
>>
>>> If the writing fails the backup job will end with either failed status 
>>> or "with errors". I'm not sure but I'd expect the former. 
>>> You can of course always re-run any job manually but I don't recall any 
>>> mechanics like "if the job failed, re-try it immediately with another 
>>> volume". You could try to script it though using your favourite scripting 
>>> language.
>>>
>>> MK
>>>
>>> On Thu, 21 Dec 2023, 10:00 Alexander Horner,  
>>> wrote:
>>>
>>>> Hi there, thanks for the response,
>>>>
>>>> In the case that a volume fails whilst adding it, is it possible to 
>>>> retry a backup with a new volume?
>>>>
>>>> On Wednesday, December 20, 2023 at 9:24:38 PM UTC Spadajspadaj wrote:
>>>>
>>>>> As far as I remember unless you explicitly specify volume size (which 
>>>>> can be useful if you're using file-based volumes, bareos will write to 
>>>>> the 
>>>>> volume until it reaches the end. Then it will request another volume.
>>>>>
>>>>> Unless you hit an error - then I think the volume status will be set 
>>>>> to error.
>>>>> On 20.12.2023 20:46, Alexander Horner wrote:
>>>>>
>>>>> Good evening,
>>>>>
>>>>> I am looking at Bareos as it seems to be the only option that 
>>>>> explicitly mentions the use of LTO for incremental backups, so I can keep 
>>>>> appending to existing tapes with each backup. Is this indeed how Bareos 
>>>>> will work?
>>>>>
>>>>> I am using LTO-5 tapes which have some age and use to them, and will 
>>>>> not necessarily reach the full 1.5TB raw capacity. Is Bareos capable of 
>>>>> handling these tapes and filling them as much as possible before 
>>>>> requesting 
>>>>> an additional tape be added?
>>>>>
>>>>> Thanks
>>>>>

[bareos-users] Re: bareos-fd-postgresql plugin connection

2023-11-23 Thread Bruno Friedmann (bruno-at-bareos)
Hi Sathis,

I guess your best bet will be to use the newer postgresql plugin coming in
23, where you will have all sorts of new possibilities.
You can already check the new documentation at
https://docs.bareos.org/master/TasksAndConcepts/Plugins.html#postgresql-plugin

The latest iteration of the rewrite will also offer extended support for
*deb* specifics such as the pg_*_cluster tools and configuration.
If you want to test the code before the switch to 23, the experimental
repository allows you to do so.

Regards.

Le jeudi 23 novembre 2023 à 10:40:12 UTC+1, Sathis Anarudhan a écrit :

> Hi Community,
>
> I have configured the bareos-fd-postgresql plugin on my client machine 
> (which running the db) and created a FileSet configuration on Dir machine 
> to run the DB backups.
>
> below is the FileSet config
>
> FileSet {
> Name = "Postgres"
> Include  {
> Options {
> Compression = LZ4
> Signature = XXH128
> }
> Plugin = "python3"
>  ":module_name=bareos-fd-postgres"
>  ":postgresDataDir=/var/lib/postgresql/11/main"
>  ":walArchive=/var/lib/pgsql/wal_archive/"
>  ":ignoreSubdirs=pg_wal,pg_log,pg_xlog"
>  ":dbHost=/run/postgresql/"
>  ":dbuser=username"
>  ":dbname=dbname"
> }
> }
>
> but i am getting the below error when running the job
>
> client-fd JobId 18: Fatal error: python3-fd-mod: Could not connect to 
> database dbname, user username , host: /run/postgresql/.s.PGSQL.5432: 
> server requesting MD5 password authentication, but no password was provided 
>
> i have modified the pg_hba.conf file in client machine to user peer auth 
> as well, and getting the same error
>
> Can't we hard code the db password in FileSet conf ? or any suggestion to 
> fix this ?
>
> Thanks,
> Sathis Anarudhan
>
>



[bareos-users] Re: A problem with storage after upgrading to version 23

2023-11-29 Thread Bruno Friedmann (bruno-at-bareos)
As indicated in the changelog of the release:

- daemons: remove deprecated Pid Directory config option, and update
  Maximum Concurrent Jobs default value to 1 (PR #1426)

Le mercredi 29 novembre 2023 à 07:24:30 UTC+1, Serhii Zahuba a écrit :

> I have three bareos servers; last weekend I updated them to version 23,
> and only on one of them do I see the problem. Some tasks hang at this
> stage: [image: 1_29.png]
>
> in the console I see the following
>
> [image: 2_29.png]
>
> my storage config 
> [image: 3_29.png]
>
> Where am I making a mistake?
> help please
>



[bareos-users] Re: /var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "bareos" does not exist

2023-11-30 Thread Bruno Friedmann (bruno-at-bareos)
Hello Jens, that would tend to prove the following.

When the upgraded packages are installed, dbconfig should take care of
upgrading the database, and only then should the bareos-director be
restarted.

What you describe seems to indicate that, for some unexpected reason, the
postgresql cluster instance is down when the director restarts, which is, as
said, unexpected.

It would be great if someone else upgrading could check what state the
postgresql cluster is in after the dbconfig step runs.

Le mercredi 29 novembre 2023 à 21:46:05 UTC+1, jens.gr...@gmail.com a 
écrit :

> Hi Bruno,
>
> I had the same issue as Bruce had. I'm working on a Debian 11.8 system and 
> the reboot was the only way to get bareos up and running again.
>
> Greetings, Jens
>
> Bruno Friedmann (bruno-at-bareos) schrieb am Montag, 27. November 2023 um 
> 10:30:32 UTC+1:
>
>> Hi Bruce, It look really weird to me, to have to reboot a server due to 
>> Bareos update or upgrade.
>> Mostly update just need to restart services which all can be done in one 
>> go with `systemctl restart bareos-dir bareos-sd bareos-fd` 
>>
>> For upgrade, it has always been a good idea (beside reading the 
>> documentation and changelog) to run the 
>> /usr/lib/bareos/script/update_bareos_tables
>> /usr/lib/bareos/script/grant_bareos_privileges
>> Which update the database schema, and associated right 
>> Then restart the service.
>>
>> I would always in my case check each daemon before restart to handle 
>> configuration warning deprecation etc.
>> `
>> bareos-dir -t -u bareos -g bareos
>> bareos-sd -t -u bareos -g bareos
>> bareos-fd -t -u root -g bareos
>> `
>>
>> If you still have to reboot, would be interesting to know which platform 
>> requires that.
>> Le dimanche 26 novembre 2023 à 17:17:01 UTC+1, Bruce Eckstein a écrit :
>>
>>> If this is the upgrade to version 23, I had a similar issue. I needed to 
>>> run /usr/lib/bareos/script/update_bareos_tables to update the tables in 
>>> bareos. Then I had to reboot the server. I tried to just reboot the 
>>> postgresql but that did not work.
>>> best of luck.
>>>
>>> On Friday, May 5, 2023 at 4:39:51 AM UTC-4 DUCARROZ Birgit wrote:
>>>
>>>> Hi, 
>>>>
>>>> Please can anyone help me with this issue? 
>>>> -- 
>>>> sudo -u bareos -s 
>>>>
>>>> bareos@:/home/superuser$ psql 
>>>>
>>>> *psql: error: connection to server on socket 
>>>> "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "bareos" does 
>>>> not exist* 
>>>>
>>>>
>>>>
>>>> LOG- AND CONFIGURATION FILES: 
>>>> -- 
>>>> netstat -nlp | grep 5432 
>>>> tcp 0 0 127.0.0.1:5432 0.0.0.0:* 
>>>> LISTEN 1218/postgres 
>>>> unix 2 [ ACC ] STREAM LISTENING 59051 1218/postgres 
>>>> /var/run/postgresql/.s.PGSQL.5432 
>>>> -- 
>>>> ps axf | grep postgres 
>>>> 8021 pts/1 S+ 0:00 \_ grep 
>>>> --color=auto postgres 
>>>> 1218 ? Ss 0:01 /usr/lib/postgresql/15/bin/postgres -D 
>>>> /var/lib/postgresql/15/main -c 
>>>> config_file=/etc/postgresql/15/main/postgresql.conf 
>>>> 1246 ? Ss 0:00 \_ postgres: 15/main: checkpointer 
>>>> 1247 ? Ss 0:00 \_ postgres: 15/main: background writer 
>>>> 1322 ? Ss 0:00 \_ postgres: 15/main: walwriter 
>>>> 1323 ? Ss 0:00 \_ postgres: 15/main: autovacuum launcher 
>>>> 1324 ? Ss 0:00 \_ postgres: 15/main: logical replication 
>>>> launcher 
>>>> -- 
>>>> /usr/sbin/bareos-dir -t 
>>>> bareos-dir: dird/check_catalog.cc:64-0 Could not open Catalog 
>>>> "MyCatalog", database "bareos". 
>>>> bareos-dir: dird/check_catalog.cc:71-0 cats/postgresql.cc:230 Unable to 
>>>> connect to PostgreSQL server. Database=bareos User=bareos 
>>>> Possible causes: SQL server not running; password incorrect; 
>>>> max_connections exceeded. 
>>>> (connection to server at "localhost" (127.0.0.1), port 5432 failed: 
>>>> FATAL: password authentication failed for user "bareos" 
>>>> connection to server at "localhost" (127.0.0.1), port 5432 failed: 
>>>> FATAL: password authen

[bareos-users] Re: /var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "bareos" does not exist

2023-11-27 Thread Bruno Friedmann (bruno-at-bareos)
Hi Bruce, it looks really weird to me to have to reboot a server due to a
Bareos update or upgrade.
Most updates just need the services restarted, which can all be done in one
go with `systemctl restart bareos-dir bareos-sd bareos-fd`.

For an upgrade, it has always been a good idea (besides reading the
documentation and changelog) to run
/usr/lib/bareos/scripts/update_bareos_tables
/usr/lib/bareos/scripts/grant_bareos_privileges
which update the database schema and the associated rights,
then restart the services.

In my case I would always check each daemon before the restart, to handle
configuration warnings, deprecations, etc.:
`
bareos-dir -t -u bareos -g bareos
bareos-sd -t -u bareos -g bareos
bareos-fd -t -u root -g bareos
`

If you still have to reboot, it would be interesting to know which platform
requires that.
Le dimanche 26 novembre 2023 à 17:17:01 UTC+1, Bruce Eckstein a écrit :

> If this is the upgrade to version 23, I had a similar issue. I needed to 
> run /usr/lib/bareos/script/update_bareos_tables to update the tables in 
> bareos. Then I had to reboot the server. I tried to just reboot the 
> postgresql but that did not work.
> best of luck.
>
> On Friday, May 5, 2023 at 4:39:51 AM UTC-4 DUCARROZ Birgit wrote:
>
>> Hi, 
>>
>> Please can anyone help me with this issue? 
>> -- 
>> sudo -u bareos -s 
>>
>> bareos@:/home/superuser$ psql 
>>
>> *psql: error: connection to server on socket 
>> "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "bareos" does 
>> not exist* 
>>
>>
>>
>> LOG- AND CONFIGURATION FILES: 
>> -- 
>> netstat -nlp | grep 5432 
>> tcp 0 0 127.0.0.1:5432 0.0.0.0:* 
>> LISTEN 1218/postgres 
>> unix 2 [ ACC ] STREAM LISTENING 59051 1218/postgres 
>> /var/run/postgresql/.s.PGSQL.5432 
>> -- 
>> ps axf | grep postgres 
>> 8021 pts/1 S+ 0:00 \_ grep 
>> --color=auto postgres 
>> 1218 ? Ss 0:01 /usr/lib/postgresql/15/bin/postgres -D 
>> /var/lib/postgresql/15/main -c 
>> config_file=/etc/postgresql/15/main/postgresql.conf 
>> 1246 ? Ss 0:00 \_ postgres: 15/main: checkpointer 
>> 1247 ? Ss 0:00 \_ postgres: 15/main: background writer 
>> 1322 ? Ss 0:00 \_ postgres: 15/main: walwriter 
>> 1323 ? Ss 0:00 \_ postgres: 15/main: autovacuum launcher 
>> 1324 ? Ss 0:00 \_ postgres: 15/main: logical replication 
>> launcher 
>> -- 
>> /usr/sbin/bareos-dir -t 
>> bareos-dir: dird/check_catalog.cc:64-0 Could not open Catalog 
>> "MyCatalog", database "bareos". 
>> bareos-dir: dird/check_catalog.cc:71-0 cats/postgresql.cc:230 Unable to 
>> connect to PostgreSQL server. Database=bareos User=bareos 
>> Possible causes: SQL server not running; password incorrect; 
>> max_connections exceeded. 
>> (connection to server at "localhost" (127.0.0.1), port 5432 failed: 
>> FATAL: password authentication failed for user "bareos" 
>> connection to server at "localhost" (127.0.0.1), port 5432 failed: 
>> FATAL: password authentication failed for user "bareos" 
>> ) 
>> bareos-dir ERROR TERMINATION 
>>
>> -- 
>>
>> /usr/sbin/bareos-dir --xc Catalog MyCatalog 
>> Catalog { 
>> Name = "MyCatalog" 
>> DbAddress = "localhost" 
>> DbPort = 5432 
>> DbPassword = "test" 
>> DbUser = "bareos" 
>> DbName = "bareos" 
>> } 
>> -- 
>> cat /etc/bareos/bareos-dir.d/catalog/MyCatalog.conf 
>> Catalog { 
>> Name = MyCatalog 
>> dbname = bareos 
>> dbaddress = 127.0.0.1 
>> #dbaddress = localhost 
>> dbport = 5432 
>> dbuser = bareos 
>> dbpassword = test 
>>
>> } 
>> -- 
>> cat /etc/dbconfig-common/bareos-database-common.conf 
>> # automatically generated by the maintainer scripts of 
>> bareos-database-common 
>> # any changes you make will be preserved, though your comments 
>> # will be lost! to change your settings you should edit this 
>> # file and then run "dpkg-reconfigure bareos-database-common" 
>>
>> # dbc_install: configure database with dbconfig-common? 
>> # set to anything but "true" to opt out of assistance 
>> dbc_install='true' 
>>
>> # dbc_upgrade: upgrade database with dbconfig-common? 
>> # set to anything but "true" to opt out of assistance 
>> dbc_upgrade='true' 
>>
>> # dbc_remove: deconfigure database with dbconfig-common? 
>> # set to anything but "true" to opt out of assistance 
>> dbc_remove='true' 
>>
>> # dbc_dbtype: type of underlying database to use 
>> # this exists primarily to let dbconfig-common know what database 
>> # type to use when a package supports multiple database types. 
>> # don't change this value unless you know for certain that this 
>> # package supports multiple database types 
>> dbc_dbtype='pgsql' 
>>
>> # dbc_dbuser: database user 
>> # the name of the user who we will use to connect to 

Re: [bareos-users] Bareos refuses to backup container subvolumes

2023-12-04 Thread Bruno Friedmann (bruno-at-bareos)
With containers, you may need to add overlay as an fs type in your fileset;
container layers show up in the mount output like:

```
overlay on / type overlay 
(rw,relatime,context="system_u:object_r:container_file_t:s0:...
```
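
A hedged sketch of what that could look like in a fileset (the fileset name,
the path, and the btrfs entry are assumptions based on this thread, not a
verified configuration):

```
FileSet {
  Name = "container-host"     # hypothetical name
  Include {
    Options {
      Signature = XXH128
      One FS = no             # descend into other filesystems...
      FS Type = btrfs         # ...but only the types explicitly listed
      FS Type = overlay       # allow container overlay layers
    }
    File = /srv               # hypothetical path from the thread
  }
}
```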

Le samedi 2 décembre 2023 à 19:56:00 UTC+1, Jan Hebler a écrit :

> Hi
>
> Am Samstag, 2. Dezember 2023, 12:20:29 CET schrieb Philipp Storz:
> > Hello,
> > 
> > Am 02.12.23 um 07:13 schrieb Jan Hebler:
> > > Hi
> > > 
> > > Bareos does it's job very well, except that it not want to backup my
> > > container subvolumes, which are btrfs-subvolumes as well:
> > > 
> > > 
> > > 02-Dec 06:22 blaster-fd JobId 153: Disallowed filesystem. Will not
> > > descend from / into
> > > 
> /srv/gitlab/.local/share/containers/storage/btrfs/subvolumes/253efabb1b8f
> > > 1d77d6e481a36fbd429c0a077583740fa548dca112d9516670ee
> > The subvolume seems to be of a different type than what you have 
> specified
> > in your fileset.
> > 
> > Probably it makes sense to set the debug level to 200 and enable tracing 
> in
> > the filedaemon.
> > 
> > -> setdebug trace=1 level=200 client=
> > 
> > That should tell you the exact reason in the trace file.
> > 
> > see:
> > 
> https://github.com/bareos/bareos/blob/master/core/src/findlib/find_one.cc#L
> > 111-L121
> > > 
> > > 
>
> I think this points me in the correct direction. According to 
> https://btrfs.readthedocs.io/en/latest/Subvolumes.html, btrfs subvolumes
> can, but do not need to, be mounted
> like a "normal" filesystem, and have their own dev-id anyway. This seems
> to confuse bareos, as (at least this is my understanding of
>
> https://github.com/bareos/bareos/blob/6f35a0436dd9e9428f100f28ea931fb37c2f3199/core/src/lib/mntent_cache.cc#L227-L237)
>  
> bareos reads /proc/mounts into an cache
> and search for an entry for the device id afterwards. As the subvolumes is 
> not explicit mounted, this fails.
>
> Regards, Jan
>



Re: [bareos-users] Using rear.

2024-02-01 Thread Bruno Friedmann (bruno-at-bareos)
a director ressource is mandatory to make bareos-fd able to know who is 
connecting.
rear is also using bconsole, which also need a director ressource to 
connect to.

but of course that doesn't mean bareos-director component has to be 
installed, this is 2 separated things.

Le jeudi 1 février 2024 à 11:14:33 UTC+1, Yariv Hazan a écrit :

> No, I was not sure director is needed on the client end, is it needed?
>
> On Wednesday, January 31, 2024 at 11:42:55 AM UTC+2 Jürgen Echter wrote:
>
>> Hi,
>>
>> you have to configure ther director in bconsole.conf.
>>
>> Like this:
>>
>> #
>> # Bareos User Agent (or Console) Configuration File
>> #
>> Director {
>> Name = bareos-dir
>> address = xx.xx.xx.xx
>> Password = "secret"
>> Description = "Bareos Console credentials for local Director"
>> }
>>
>> Greetings
>>
>> Juergen
>>
>> Am Mittwoch, Januar 31, 2024 10:39 CET, schrieb Yariv Hazan <
>> yar...@gmail.com>:
>>  
>>
>> Hi,
>>
>> Trying to configure client for DR with rear following this guide 
>> https://docs.bareos.org/Appendix/DisasterRecoveryUsingBareos.html.
>>
>> Client is being backed up with no issues.
>>
>> After completed configuration and running rear -v mkrescue I get this 
>> error: Director not configured in bconsole see complete rear log below.
>>
>>1. Do I need to have director installed/configured on the client?
>>2. Is this maybe related to client be passive?
>>
>>  
>>
>>  
>>
>> 2024-01-30 12:20:53.541037902 Relax-and-Recover 2.4 / Git
>>
>> 2024-01-30 12:20:53.542304107 Command line options: /sbin/rear -v mkbackup
>>
>> 2024-01-30 12:20:53.543545584 Using log file: 
>> /var/log/rear/rear-tlvpuppet2.log
>>
>> 2024-01-30 12:20:53.544885643 Including /etc/rear/os.conf
>>
>> 2024-01-30 12:20:53.547940491 Including conf/Linux-i386.conf
>>
>> 2024-01-30 12:20:53.54927 Including conf/GNU/Linux.conf
>>
>> 2024-01-30 12:20:53.569648826 Including /etc/rear/local.conf
>>
>> 2024-01-30 12:20:53.572259049 ==
>>
>> 2024-01-30 12:20:53.573421298 Running 'init' stage
>>
>> 2024-01-30 12:20:53.574600245 ==
>>
>> 2024-01-30 12:20:53.581319209 Including init/default/005_verify_os_conf.sh
>>
>> 2024-01-30 12:20:53.585144001 Including init/default/010_set_drlm_env.sh
>>
>> 2024-01-30 12:20:53.589185325 Including 
>> init/default/030_update_recovery_system.sh
>>
>> 2024-01-30 12:20:53.593107191 Including 
>> init/default/050_check_rear_recover_mode.sh
>>
>> 2024-01-30 12:20:53.594340342 Finished running 'init' stage in 0 seconds
>>
>> 2024-01-30 12:20:53.603024718 Using build area '/tmp/rear.QZyujxXHGikhCJi'
>>
>> mkdir: created directory '/tmp/rear.QZyujxXHGikhCJi/rootfs'
>>
>> mkdir: created directory '/tmp/rear.QZyujxXHGikhCJi/tmp'
>>
>> 2024-01-30 12:20:53.606905504 Running mkbackup workflow
>>
>> 2024-01-30 12:20:53.610212060 ==
>>
>> 2024-01-30 12:20:53.611317276 Running 'prep' stage
>>
>> 2024-01-30 12:20:53.612487290 ==
>>
>> 2024-01-30 12:20:53.619731228 Including 
>> prep/default/005_remove_workflow_conf.sh
>>
>> mkdir: created directory '/tmp/rear.QZyujxXHGikhCJi/rootfs/etc'
>>
>> mkdir: created directory '/tmp/rear.QZyujxXHGikhCJi/rootfs/etc/rear'
>>
>> 2024-01-30 12:20:53.625882176 Including prep/default/020_translate_url.sh
>>
>> 2024-01-30 12:20:53.629928653 Including prep/default/030_translate_tape.sh
>>
>> 2024-01-30 12:20:53.633784465 Including 
>> prep/default/040_check_backup_and_output_scheme.sh
>>
>> 2024-01-30 12:20:53.639930329 Including 
>> prep/default/050_check_keep_old_output_copy_var.sh
>>
>> 2024-01-30 12:20:53.643913050 Including 
>> prep/default/100_init_workflow_conf.sh
>>
>> 2024-01-30 12:20:53.649463035 Including 
>> prep/GNU/Linux/200_include_getty.sh
>>
>> 2024-01-30 12:20:53.678029083 Including 
>> prep/GNU/Linux/200_include_serial_console.sh
>>
>> /usr/share/rear/lib/_input-output-functions.sh: line 331: type: getty: 
>> not found
>>
>> 2024-01-30 12:20:53.703885018 Including 
>> prep/GNU/Linux/210_include_dhclient.sh
>>
>> /usr/share/rear/lib/_input-output-functions.sh: line 331: type: dhcpcd: 
>> not found
>>
>> /usr/share/rear/lib/_input-output-functions.sh: line 331: type: dhcp6c: 
>> not found
>>
>> /usr/share/rear/lib/_input-output-functions.sh: line 331: type: 
>> dhclient6: not found
>>
>> 2024-01-30 12:20:53.722731365 Including 
>> prep/GNU/Linux/220_include_lvm_tools.sh
>>
>> 2024-01-30 12:20:53.724914746 Device mapper found enabled. Including LVM 
>> tools.
>>
>> 2024-01-30 12:20:53.857209496 Including 
>> prep/GNU/Linux/230_include_md_tools.sh
>>
>> 2024-01-30 12:20:53.863255338 Including 
>> prep/GNU/Linux/240_include_multipath_tools.sh
>>
>> 2024-01-30 12:20:53.867562578 Including 
>> prep/GNU/Linux/280_include_systemd.sh
>>
>> 2024-01-30 12:20:53.890413524 Including systemd (init replacement) 
>> tool-set to bootstrap Relax-and-Recover
>>
>> 2024-01-30 12:20:53.895336644 Including 
>> prep/GNU/Linux/280_include_virtualbox.sh
>>
>> 2024-01-30 

Re: [bareos-users] Using rear.

2024-01-31 Thread Bruno Friedmann (bruno-at-bareos)
Not sure it will be enough, given the following trace that was reported:

2024-01-30 12:20:54.433433584 Including 
prep/BAREOS/default/500_check_BAREOS_bconsole_results.sh

The -xc and -xs options have changed.

Use --xc and --xs as given in the help.

Run with --help for more information.

2024-01-30 12:20:54.441046794 ERROR: Director not configured in bconsole

 Stack trace 

Trace 0: /sbin/rear:547 main

Trace 1: /usr/share/rear/lib/mkbackup-workflow.sh:9 WORKFLOW_mkbackup

Trace 2: /usr/share/rear/lib/framework-functions.sh:101 SourceStage

Trace 3: /usr/share/rear/lib/framework-functions.sh:49 Source

Trace 4: 
/usr/share/rear/prep/BAREOS/default/500_check_BAREOS_bconsole_results.sh:17 
source

Trace 5: /usr/share/rear/lib/_input-output-functions.sh:372 StopIfError

Message: Director not configured in bconsole

== End stack trace ==
rear may need a fix: as of Bareos release 22, with the new cli11 parser, only 
single-letter options can be used with a single dash (-), 
so the previous -xc -xs -dt now become --xc --xs --dt.

This may need to be reported upstream (to rear).
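As a sketch of that kind of downstream adjustment (a hedged example: the helper function name is ours, and it assumes GNU sed; on a real system the same substitution would be applied to rear's 500_check_BAREOS_bconsole_results.sh script from the trace above, after checking what it actually contains):

```shell
# Rewrite the single-dash long options to the double-dash form required
# by the cli11 parser in Bareos >= 22 (-xc/-xs/-dt -> --xc/--xs/--dt).
fix_opts() {
  printf '%s\n' "$1" | sed 's/ -\(xc\|xs\|dt\)\b/ --\1/g'
}
fix_opts "bconsole -xc /etc/bareos/bconsole.conf -xs"
```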


On Wednesday, January 31, 2024 at 10:42:55 UTC+1, Jürgen Echter wrote:

> Hi,
>
> you have to configure the Director in bconsole.conf.
>
> Like this:
>
> #
> # Bareos User Agent (or Console) Configuration File
> #
> Director {
> Name = bareos-dir
> address = xx.xx.xx.xx
> Password = "secret"
> Description = "Bareos Console credentials for local Director"
> }
>
> Greetings
>
> Juergen
>
> On Wednesday, January 31, 2024 10:39 CET, Yariv Hazan <
> yar...@gmail.com> wrote:
>  
>
> Hi,
>
> Trying to configure client for DR with rear following this guide 
> https://docs.bareos.org/Appendix/DisasterRecoveryUsingBareos.html.
>
> Client is being backed up with no issues.
>
> After completing the configuration and running rear -v mkrescue I get this 
> error: "Director not configured in bconsole"; see the complete rear log below.
>
>1. Do I need to have director installed/configured on the client?
>2. Is this maybe related to client be passive?
>
>  
>
>  
>
> [...]

[bareos-users] Re: Again question regarding python-ldap

2023-11-08 Thread Bruno Friedmann (bruno-at-bareos)
python3-ldap will be part of Bareos 23.
You can already test it with the experimental repository (better to go with 
EL9, as its python3 version is more recent).


On Wednesday, 8 November 2023 at 08:11:35 UTC+1 Silvio Schloeffel wrote:

> Hi,
>
> our CentOS7-based backup server is now coming to its EOL (hardware) and we 
> plan to migrate everything to an EL8/EL9-based system because of the near 
> EOL of CentOS7.
> We have a small EL8 system running for tests but we know that we can not 
> backup our LDAP data with this system because of the python2 ldap 
> dependencies.
>
> After looking into the repos I can see that you have the same module in 
> the EL9 repo, but this makes no sense to me. In EL8 you can install 
> python2, but in EL9 there is no python2 anymore.
>
> So I have 2 questions:
> 1. Is a python3-ldap module coming soon?
> 2. How can you build an RPM with its dependencies when nothing exists to 
> resolve those dependencies?
>
>
> -> The problem in our case is not only the backup (I think); after the 
> migration we also cannot recover the saved LDAP data from the old system.
>
> Best
>
> Silvio
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/e405be28-6322-4e1e-9bbe-27d867ef9fa7n%40googlegroups.com.


[bareos-users] Re: LTO Tape migration

2023-11-22 Thread Bruno Friedmann (bruno-at-bareos)
Copying from tape to tape is roughly what bcopy can do, but then, yes, the 
new volume will not be known to Bareos:
https://docs.bareos.org/master/Appendix/BareosPrograms.html#bcopy

The safer way is to use a migration job.
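A minimal sketch of such a migration job (a hedged example: all resource names are placeholders, and the source pool's Next Pool directive must point at the pool holding the LTO-9 volumes):

```
Job {
  Name = "migrate-lto6-to-lto9"
  Type = Migrate
  Pool = LTO6-Pool          # source pool; define Next Pool = LTO9-Pool there
  Selection Type = Volume
  Selection Pattern = ".*"  # migrate every volume of the source pool
  Client = bareos-fd        # nominal; migration reads from the SD, not the client
  FileSet = "SelfTest"      # nominal as well
  Messages = Standard
}
```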

On Wednesday, November 22, 2023 at 14:50:57 UTC+1, Guy Foetz wrote:

> Hi,
>
> I thought about this too, at the moment we do not have different pools for 
> different LTO generations.
>
> Is there a possibility to simply copy the tapes without going over bareos? 
> Or would bareos not recognize the tapes anymore?
>
> Regards,
> Guy
>
> Miguel Santos wrote on Wednesday, November 22, 2023 at 13:34:55 UTC+1:
>
>> Not sure what your specific needs and constraints are, but I would just 
>> create migration jobs from the old pool (with the LTO6 tapes) to a new 
>> pool (with LTO9 tapes).
>>
>> https://docs.bareos.org/TasksAndConcepts/MigrationAndCopy.html
>>
>> Good luck.
>> On Wednesday, November 22, 2023 at 1:30:54 PM UTC+1 Guy Foetz wrote:
>>
>>> Hi,
>>>
>>> we currently have several archive LTO6 Tapes and we want to migrate them 
>>> to LTO9 Tapes, what is the best way to perform this migration?
>>>
>>> Best Regards,
>>> Guy
>>>
>>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/f328fab5-1d1b-431c-8fa8-4062393c7789n%40googlegroups.com.


[bareos-users] Re: Incremental backup has the same size and amount of files

2024-02-22 Thread Bruno Friedmann (bruno-at-bareos)
The culprit is *Command = "sh -c 'rm -rf /mnt/hidden/backups/*"*

Every day mongodump writes a fresh set of files, so the fd picks them all up 
because they are new.

It seems you have to go back to the drawing board ;-)
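A quick illustration of why every run looks new to the fd (a hedged sketch, assuming GNU coreutils stat; the directory and file names are made up):

```shell
# Each mongodump run deletes and recreates the dump files, so they get a
# fresh inode and mtime; incremental/Accurate logic therefore treats them
# all as new files every day, making each "incremental" effectively full.
d=$(mktemp -d)
echo data > "$d/collection.bson"
m1=$(stat -c %Y "$d/collection.bson")
sleep 1
rm -f "$d/collection.bson"
echo data > "$d/collection.bson"   # same content, but a brand-new file
m2=$(stat -c %Y "$d/collection.bson")
rm -rf "$d"
```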
On Thursday 22 February 2024 at 15:39:17 UTC+1 Руслан Пилипюк wrote:

> Hey guys! I was stuck trying to solve a problem with doing an incremental 
> backup: each time I try it, my backup basically looks like a full backup 
> (same size and the same number of files). I was trying to back up MongoDB 
> using RunScript, and it works fine with a full backup; however, with 
> incremental it looks strange.
>
> Here are my configuration files:
>
> Pool {
>   Name = "DB_hidden-pool"
>   Pool Type = Backup
>   Recycle = yes
>   AutoPrune = yes
>   Recycle Current Volume = yes
>   Recycle Oldest Volume = yes
>   Volume Retention = 1 days
>   Label Format = "hidden-"
>   Maximum Volume Jobs = 1
> }
> JobDefs {
>   Name = "DB_hidden-jobdefs-incr"
>   Type = Backup
>   Level = Incremental
>   FileSet = "DB_MongoDB-fileset"
>   # Schedule = "DB_2DayWeeklyCycle"
>   Priority = 10
>   Write Bootstrap = "%c_%n.bsr"
>   Messages = Standard
>   Allow Mixed Priority = yes
>   Allow Duplicate Jobs = yes
>   Allow Higher Duplicates = yes
>   Maximum Concurrent Jobs = 10
>   Storage = hz-hidden
>   RunScript {
>     RunsWhen = Before
>     FailJobOnError = yes
>     Command = "/bin/bash -c 'mongodump --verbose --host localhost --port 27017 --username hidden --password \"`cat /root/.mongodb/pass_mongo`\" --authenticationDatabase admin --out /mnt/hidden/backups'"
>   }
>   RunScript {
>     RunsWhen = After
>     RunsOnFailure = yes
>     Command = "sh -c 'rm -rf /mnt/hidden/backups/*"
>   }
>   Incremental Backup Pool = DB_hidden-pool-incr
>   Full Backup Pool = DB_hidden-pool
>   Always Incremental = yes
>   Accurate = yes
> }
> Job {
>   Name = "DB_hidden_incr-job"
>   JobDefs = DB_hidden-jobdefs-incr
>   Description = "Backup of hz-hidden"
>   Client = DB_hidden-client
>   Maximum Bandwidth = 0
>   Pool = DB_hidden-pool-incr
> }
> FileSet {
>   Name = "DB_MongoDB-fileset"
>   Description = "MongoDB Backup"
>   Include {
>     Options {
>       Signature = MD5
>       Compression = LZ4
>       noatime = yes
>     }
>     File = "/mnt/hidden/backups"
>   }
> }
> Also, in the logs of my backup job, the first time it reported that no full 
> backup was present and it made a full backup; after that it tried to do an 
> incremental backup, no errors are present there, and everything looks 
> fine.
>
> Any tips or advice are greatly appreciated
>
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/7faa549b-0cbc-48c4-9f63-e2c18442aa3en%40googlegroups.com.


Re: [bareos-users] Re: Restore catalog from bootstrap

2024-03-05 Thread Bruno Friedmann (bruno-at-bareos)
bextract operates directly on the volume, so it runs where the storage is; 
if your storage daemon is on a remote machine, you have to run bextract there.

On Monday 4 March 2024 at 18:44:42 UTC+1 Oleg Volkov wrote:

> Very interesting question. I had not tried this in a distributed environment.
> I would assume that bextract works without a director, on the storage host itself.
>
> Oleg Volkov
> T: +972-50-7601914 <+972%2050-760-1914>
> Skype: voleg4u , WhatsApp.
>
>
> On Mon, Mar 4, 2024 at 6:17 PM Michael Kurz  wrote:
>
>> So I tried this using the linked guide on my setup, which has the director 
>> and the storage daemon on different machines, but got stuck at the bextract 
>> step, with bextract not finding the given device:
>>
>> director-server:~# bextract -b /tmp/bs01-bootstrap BD01-BackupDisk 
>> /tmp/back
>> bextract: stored/butil.cc:299-0 Could not find device "BD01-BackupDisk" 
>> in config file .
>> 04-Mar 15:18 bextract JobId 0: Fatal error: stored/butil.cc:173 Cannot 
>> find device "BD01-BackupDisk" in config file .
>>
>> Specifying any configuration file from the director (bareos-dir.conf or 
>> storage.conf, where the storage is defined) complains about an incompatible 
>> configuration file. 
>>
>> Do I understand it correctly that bextract can only work on the storage 
>> daemon server and not on the director server if these are different 
>> machines?
>> The link implies that it would work on the director server too, but maybe 
>> I understood it wrong.
>>
>> So my question is: can bextract be used from a director with the storage 
>> daemon on another machine? And if so, how would I do this?
>>
>> Thanks in advance
>>
>>
>>
>> On Tuesday, September 26, 2023 at 8:18:10 AM UTC+2 Oleg Volkov wrote:
>>
>>> http://www.voleg.info/bareos-disaster-recovery.html
>>> Quite old, but still works in principle.
>>>
>>> On Friday, September 22, 2023 at 11:20:10 PM UTC+3 wizh...@gmail.com 
>>> wrote:
>>>
 I have followed the documentation and cannot get past the restore 
 command requiring a jobid

 The docs state

 " After re-initializing the database, you should be able to run 
 Bareos. If you now try to use the restore command, it will not work 
 because 
 the database will be empty. However, you can manually run a restore job 
 and 
 specify your bootstrap file. You do so by entering the run command in the 
 console and selecting the restore job. If you are using the default Bareos 
 Director configuration, this Job will be named RestoreFiles. Most likely 
 it 
 will prompt you with something such as:"

 So I run in bconsole

 run

 then select 14 for RestoreFiles

 It then requires a jobid to continue, but there are none yet in the 
 database, as I am trying to restore the catalog using a bootstrap file.

 What am I missing?





>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/7d2315e6-6c83-4e62-b3e0-d237aebe9568n%40googlegroups.com.


[bareos-users] Re: How to check what jobs does the volume file have

2024-03-18 Thread Bruno Friedmann (bruno-at-bareos)
Read the fine manual's list of bconsole commands.
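For example (a hedged sketch: the volume and pool names are placeholders; if your bconsole version does not accept volume= on list jobs, the interactive query command offers a "List Jobs stored on a Volume" report instead):

```
*list volumes pool=censored-pool
*list jobs volume=censored-0001
```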


On Friday 15 March 2024 at 15:28:18 UTC+1 Руслан Пилипюк wrote:

> I have the following settings in a pool, and I want to check in which 
> volume my job backups are located. As you can see, I am aiming to have 7 
> backups in one volume, so with 7 backups taken per week I would like them 
> all to be in one volume; however, I want a way to check the content of a 
> volume. Any tips or advice are appreciated
>
> Pool {
> Name = "censored-pool"
> Description = "Pool of censored configured for 1 FULL & 6 INCR backups 
> per week"
> Pool Type = Backup
> Label Format = "censored-"
>
> Recycle = yes 
> AutoPrune = yes
> Recycle Current Volume = yes
> Recycle Oldest Volume = yes
> Volume Retention = 1 month
> Maximum Volume Jobs = 7 
> Maximum Volume Bytes = 100G 
> }
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/7d36c6ec-6658-4c7f-9d2d-f0914b09d04fn%40googlegroups.com.


Re: [bareos-users] Continue failed job

2024-03-21 Thread Bruno Friedmann (bruno-at-bareos)
Wish-list items always tend to lag behind any PR. You may want to 
participate and contribute an enhancement to the documentation :-)

On Thursday 21 March 2024 at 09:28:35 UTC+1 Jascha Schubert wrote:

> Hello Andreas,
> thank you for your answer. 
> I hope that resuming a job will be finished soon. It would be much easier 
> to work with. 
> Regarding the wait time: I found the config setting "Max Wait Time"; the 
> description in the manual suggests that this is exactly the time I need to 
> increase.
> Do I misunderstand the documentation?
>
> Besides, it would be nice if the documentation included the default 
> value and also some hint like "Set to 0 for infinite" (if it is possible to 
> set it to infinite)
>
> Thank You
> Jascha
>
> Andreas Rogge wrote on Wednesday, March 20, 2024 at 09:15:15 UTC+1:
>
> Hi Jascha, 
>
> On 18.03.24 at 20:22, Jascha Schubert wrote: 
> > I have now two questions: 
> > 
> > 1. Can I somehow continue the job, so that I don't have to start over? 
> Currently not. We have added the infrastructure for that in Bareos 23, 
> but are still working on a way to actually resume the job and make sure 
> it produces a meaningful result. 
>
> > 2. Can I set the waiting time to infinite, so that to job does not get 
> > aborted, when I take to long to change the tape? 
> Currently not. The device wait time is hard-coded at a large value. At 
> some point the job has to give up and I also think that the 5 days you 
> described are usually sufficient :) 
>
> Best Regards, 
> Andreas 
>
> -- 
> Andreas Rogge andrea...@bareos.com 
> Bareos GmbH & Co. KG Phone: +49 221-630693-86 
> http://www.bareos.com 
>
> Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646 
> Komplementär: Bareos Verwaltungs-GmbH 
> Geschäftsführer: Stephan Dühr, Jörg Steffens, Philipp Storz 
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/f9e50885-ed9c-4ccd-bfc8-fecb0c22dab3n%40googlegroups.com.


[bareos-users] Re: latest build 23.0.3pre87 from 11. April not working

2024-04-15 Thread Bruno Friedmann (bruno-at-bareos)
Well, I'm not sure what you're trying to report, as the CI testing would 
normally have caught such a failure.

Here, a SUSE 15 just installed with all defaults runs nicely:
```
Connecting to Director localhost:9101
 Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
1000 OK: bareos-dir Version: 23.0.3~pre87.e086dcd6e (11 April 2024)
Bareos community build (UNSUPPORTED).
Get professional support from https://www.bareos.com
You are connected using the default console

Enter a period (.) to cancel a command.
*run job=backup-bareos-fd
Using Catalog "MyCatalog"
Run Backup job
JobName:  backup-bareos-fd
Level:Incremental
Client:   bareos-fd
Format:   Native
FileSet:  SelfTest
Pool: Incremental (From Job IncPool override)
Storage:  File (From Job resource)
When: 2024-04-15 10:06:30
Priority: 10
OK to run? (yes/mod/no): yes
Job queued. JobId=1
You have messages.
*m
15-Apr 10:06 bareos-dir JobId 1: No prior Full backup Job record found.
15-Apr 10:06 bareos-dir JobId 1: No prior or suitable Full backup found in 
catalog. Doing FULL backup.
15-Apr 10:06 bareos-dir JobId 1: Start Backup JobId 1, 
Job=backup-bareos-fd.2024-04-15_10.06.32_03
15-Apr 10:06 bareos-dir JobId 1: Connected Storage daemon at 
e1a8165d7a5a:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
15-Apr 10:06 bareos-dir JobId 1:  Encryption: TLS_CHACHA20_POLY1305_SHA256 
TLSv1.3
15-Apr 10:06 bareos-dir JobId 1: Probing client protocol... (result will be 
saved until config reload)
15-Apr 10:06 bareos-dir JobId 1: Connected Client: bareos-fd at 
localhost:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
15-Apr 10:06 bareos-dir JobId 1:Handshake: Immediate TLS
15-Apr 10:06 bareos-dir JobId 1:  Encryption: TLS_CHACHA20_POLY1305_SHA256 
TLSv1.3
15-Apr 10:06 bareos-dir JobId 1: Created new Volume "Full-0001" in catalog.
15-Apr 10:06 bareos-dir JobId 1: Using Device "FileStorage" to write.
15-Apr 10:06 e1a8165d7a5a-fd JobId 1: Connected Storage daemon at 
e1a8165d7a5a:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
15-Apr 10:06 e1a8165d7a5a-fd JobId 1:  Encryption: 
TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
15-Apr 10:06 e1a8165d7a5a-fd JobId 1: Extended attribute support is enabled
15-Apr 10:06 e1a8165d7a5a-fd JobId 1: ACL support is enabled
15-Apr 10:06 bareos-sd JobId 1: Labeled new Volume "Full-0001" on device 
"FileStorage" (/var/lib/bareos/storage).
15-Apr 10:06 bareos-sd JobId 1: Wrote label to prelabeled Volume 
"Full-0001" on device "FileStorage" (/var/lib/bareos/storage)
*m
15-Apr 10:06 bareos-sd JobId 1: Releasing device "FileStorage" 
(/var/lib/bareos/storage).
15-Apr 10:06 bareos-sd JobId 1: Elapsed time=00:00:01, Transfer rate=42.37 
M Bytes/second
15-Apr 10:06 bareos-dir JobId 1: Insert of attributes batch table with 167 
entries start
15-Apr 10:06 bareos-dir JobId 1: Insert of attributes batch table done
15-Apr 10:06 bareos-dir JobId 1: Bareos bareos-dir 23.0.3~pre87.e086dcd6e 
(11Apr24):
  Build OS:   SUSE Linux Enterprise Server 15 SP4
  JobId:  1
  Job:backup-bareos-fd.2024-04-15_10.06.32_03
  Backup Level:   Full (upgraded from Incremental)
  Client: "bareos-fd" 23.0.3~pre87.e086dcd6e (11Apr24) SUSE 
Linux Enterprise Server 15 SP4,suse
  FileSet:"SelfTest" 2024-04-15 10:06:32
  Pool:   "Full" (From Job FullPool override)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"File" (From Job resource)
  Scheduled time: 15-Apr-2024 10:06:30
  Start time: 15-Apr-2024 10:06:34
  End time:   15-Apr-2024 10:06:35
  Elapsed time:   1 sec
  Priority:   10
  Allow Mixed Priority:   no
  FD Files Written:   167
  SD Files Written:   167
  FD Bytes Written:   42,357,113 (42.35 MB)
  SD Bytes Written:   42,374,109 (42.37 MB)
  Rate:   42357.1 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): Full-0001
  Volume Session Id:  1
  Volume Session Time:1713175582
  Last Volume Bytes:  42,385,029 (42.38 MB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Bareos binary info: Bareos community build (UNSUPPORTED): Get 
professional support from https://www.bareos.com
  Job triggered by:   User
  Termination:Backup OK

*
```

:-) 

On Monday 15 April 2024 at 09:33:55 UTC+2 Markus Dubois wrote:

> ...see topic in question
>
> bareos-storage states "waiting for postgres".
> going back to a build before that version works
>
>


[bareos-users] Re: latest build 23.0.3pre87 from 11. April not working

2024-04-15 Thread Bruno Friedmann (bruno-at-bareos)
I hope you realize that doesn't make much sense, as bareos-sd never 
connects to PostgreSQL :-) 

On Monday 15 April 2024 at 14:22:54 UTC+2 Markus Dubois wrote:

> I'm happy to report more information if you tell me what you expect.
>
> I've just built my docker containers with the same dockerfile as before.
> I just went back to a release before and everything works. The only 
> change between "working" and "not working" is using the source version 
> stated in the topic.
> After building with the sources in question, jobs failed with being 
> unable to establish TLS connections (the TLS certs are fine).
> Deeper troubleshooting then led to problems with the storage part reaching 
> the postgres server (nothing changed on the postgres side either).
> As soon as I made a terminal connection to the storage daemon container, 
> the terminal got filled with constant "waiting for postgres" (cannot reach 
> postgres) messages.
>
> As stated above, the only change was the bareos sources. 
>
> The dockerfile for the storage bareos is as follows:
>
> # Dockerfile Bareos storage daemon
> FROM ubuntu:jammy
>
> ARG BUILD_DATE
> ARG NAME
> ARG VCS_REF
> ARG VERSION
>
> LABEL org.label-schema.schema-version="1.0" \
>   org.label-schema.build-date=$BUILD_DATE \
>   org.label-schema.name=$NAME \
>   org.label-schema.vcs-ref=$VCS_REF \
>   org.label-schema.version=$VERSION
>
> ENV BAREOS_KEY 
> https://download.bareos.org/current/xUbuntu_22.04/Release.key
> ENV BAREOS_REPO https://download.bareos.org/current/xUbuntu_22.04/
>
> ENV DEBIAN_FRONTEND noninteractive
>
> SHELL ["/bin/bash", "-o", "pipefail", "-c"]
>
> RUN apt-get update -qq
>
> RUN apt-get install ca-certificates -y
> RUN update-ca-certificates
> RUN apt-get -qq -y install --no-install-recommends curl tzdata gnupg \
>  && curl -Ls $BAREOS_KEY -o /tmp/bareos.key \
>  && apt-key --keyring /etc/apt/trusted.gpg.d/breos-keyring.gpg \
> add /tmp/bareos.key \
>  && echo "deb $BAREOS_REPO /" > /etc/apt/sources.list.d/bareos.list \
>  && apt-get update -qq \
>  && apt-get install -qq -y --no-install-recommends \
> bareos-storage bareos-tools bareos-storage-tape mtx scsitools \
> sg3-utils mt-st bareos-storage-droplet \
>  && apt-get clean \
>  && rm -rf /var/lib/apt/lists/*
>
> COPY docker-entrypoint.sh /docker-entrypoint.sh
> RUN chmod u+x /docker-entrypoint.sh
>
> RUN tar czf /bareos-sd.tgz /etc/bareos/bareos-sd.d
>
> EXPOSE 9103
>
> VOLUME /etc/bareos
> VOLUME /var/lib/bareos/storage
>
> ENTRYPOINT ["/docker-entrypoint.sh"]
> CMD ["/usr/sbin/bareos-sd", "-u", "bareos", "-f"]
>
> Bruno Friedmann (bruno-at-bareos) schrieb am Montag, 15. April 2024 um 
> 12:08:19 UTC+2:
>
>> Well, I'm not sure what you're trying to report as of course the CI 
>> testing would normally had caught such failure.
>>
>> Here, a SUSE 15 just installed with all defaults runs nicely.
>> [...]

[bareos-users] Re: Bareos 21.0.0 - missing "delete" command in console

2024-05-14 Thread Bruno Friedmann (bruno-at-bareos)
You are in batch mode, so you need to replace the interactive confirmation 
with an explicit yes:
delete jobid=999 yes
should work.

By the way, you're using a really outdated version (21) compared to the 
current 23.


On Tuesday 14 May 2024 at 06:50:00 UTC+2 Paul Chen wrote:

> [image: Capture2.JPG]
> [image: Capture.JPG]
>
>
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/afb9e1f5-10d1-48f3-b451-0f7e197bd4a4n%40googlegroups.com.


[bareos-users] Re: Client Run Before Job with different scripts for different backup levels

2024-05-14 Thread Bruno Friedmann (bruno-at-bareos)
Maybe you can use %l in a wrapper script which then decides the action 
corresponding to the level?
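A minimal sketch of such a wrapper (hedged: the script paths are the ones from the question, while the dispatch function name and the Full/Differential/Incremental level strings passed via %l are assumptions to verify against your Bareos version):

```shell
# level_dispatch: choose the pre-job script from the backup level handed
# over by Bareos' %l character substitution, e.g.
#   Client Run Before Job = "/usr/local/bin/level_dispatch.sh %l"
level_dispatch() {
  case "$1" in
    Full)                     echo "/home/script1.sh -n" ;;
    Differential|Incremental) echo "/home/script2.sh -n -m" ;;
    *) echo "unknown level: $1" >&2; return 1 ;;
  esac
}
# echo keeps the sketch testable; a real wrapper would exec the result
level_dispatch Full
```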


On Tuesday 14 May 2024 at 11:25:01 UTC+2 Leon Bartle wrote:

> Hi there!
>
> Is there a way to run different commands with the "Run Script" Command, 
> depending on the Backup level?
>
> For example on a full backup, it should run "/home/script1.sh -n", but on 
> diff and incr backups, it should run "/home/script2.sh -n -m".
>
> I only found the character substitution "%l", but it does not change the 
> actual command being run, so it would require some kind of extra 
> switching script to be installed on every client, which would make 
> automation really hard. 
>
> Have a wonderful day!
> Leon
>
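For reference, the %l substitution discussed in this thread would be wired into the Job resource roughly like this (a sketch; the script path is hypothetical):

```
Job {
  Name = "backup-with-dispatch"
  # ... usual Job directives ...
  RunScript {
    RunsWhen = Before
    RunsOnClient = Yes
    Fail Job On Error = Yes
    Command = "/home/level-dispatch.sh %l"   # %l expands to the job level
  }
}
```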



[bareos-users] Re: Help understand how use pools

2024-05-27 Thread Bruno Friedmann (bruno-at-bareos)
Please go back to the documentation and see the warning about Media Type and 
SD uniqueness: each Storage daemon needs its own unique Media Type.
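A sketch of what that warning means in the Director configuration (names, addresses, and passwords here are examples, not taken from the poster's setup): if two Storage daemons share the same Media Type, the Director may pick a volume that physically lives on the other SD, which matches the error below.

```
# Director configuration (sketch; one unique Media Type per Storage daemon)
Storage {
  Name = TimeRiver
  Address = timeriver.example.com
  Password = "secret"
  Device = FileStorage-TimeRiver
  Media Type = File-TimeRiver    # unique to this SD
}
Storage {
  Name = Chronos2
  Address = chronos2.example.com
  Password = "secret"
  Device = FileStorage-Chronos2
  Media Type = File-Chronos2     # unique to this SD
}
```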

On Monday 27 May 2024 at 09:01:37 UTC+2 Gankov Andrey wrote:

> I read the documentation and didn't find a clear answer to this question:
> how does Bareos select the next volume for writing?
>
> Why does it try to get a volume from another storage? 
> For example, I had this error on Storage "TimeRiver":
> TimeRiver-sd JobId 30042: Warning: stored/mount.cc:246 Open device 
> "FileStorage-TimeRiver" (/var/lib/bareos/storage) Volume "Vol-7days-4491" 
> failed: ERR=stored/dev.cc:597 Could not open: 
> /var/lib/bareos/storage/Vol-7days-4491
>
> But volume "Vol-7days-4491" is on another storage:
>
> *list volume=Vol-7days-4491
> Using Catalog "MyCatalog"
>
> +---------+----------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------------------+----------+
> | mediaid | volumename     | volstatus | enabled | volbytes      | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten         | storage  |
> +---------+----------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------------------+----------+
> |   4,491 | Vol-7days-4491 | Append    |       1 | 7,753,444,472 |        1 |      604,800 |       1 |    0 |         0 | File      | 2024-05-27 00:34:32 | Chronos2 |
> +---------+----------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------------------+----------+
> On Tuesday, May 7, 2024 at 12:47:22 UTC+3, Андрей Ганьков wrote: 
>
>> I have a few different storage machines. Do I need to create separate 
>> pools for each storage?
>> Or can I use one pool for jobs from different storages?
>>
>> I tried using one pool for different storages and got this error:
>> Open device "FileStorage" (/var/lib/bareos/storage) Volume 
>> "Vol-7days-4298" failed: ERR=stored/dev.cc:604 Could not open.
>>
>> Perhaps my configuration is incorrect?
>
>
