Great, but I won't ^^. It could be cool, but it involves too many manipulations; too dangerous. If I could do something with bconsole, like updating a Path from the console, it would be a great function: if I restore an old backup and don't remember that the directory had already been moved, I could keep the prefix and restore to D: instead of G:. As it is, I can't do a full restore with replace and relocate. If the devs see this, I think it would be a useful function to add to bconsole, or as a tool in dbcheck or a similar utility.
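Under the hood, I imagine such a tool would do something like the following (only a sketch, assuming a MySQL/MariaDB catalog and the standard Path table; the D:/ and G:/ prefixes are from my case, and I have not run this against a real catalog):

```sql
-- Sketch only: move the drive prefix of every Path record from D:/ to G:/.
-- Assumes a MySQL/MariaDB Bacula catalog; back it up and stop all jobs first.
START TRANSACTION;

-- Preview which rows would be touched before changing anything.
SELECT PathId, Path FROM Path WHERE Path LIKE 'D:/%' LIMIT 20;

-- Rewrite the prefix, keeping the rest of the path intact
-- (SUBSTRING(Path, 4) drops the three characters 'D:/').
UPDATE Path
SET Path = CONCAT('G:/', SUBSTRING(Path, 4))
WHERE Path LIKE 'D:/%';

-- Check the result, then COMMIT, or ROLLBACK if anything looks wrong.
SELECT PathId, Path FROM Path WHERE Path LIKE 'G:/%' LIMIT 20;
COMMIT;
```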
Thanks for the time you spent answering.

-----Original Message-----
From: Arno Lehmann via Bacula-users <bacula-users@lists.sourceforge.net>
Sent: Tuesday, 5 November 2024 12:45
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] fileset and moving source backup directory

Hi Lionel,

On 05.11.2024 at 11:31, Lionel PLASSE wrote:
> Hello,
>
> I recently moved a full directory from D: to G:
>
> I changed my FileSet accordingly, with the IgnoreFilesetChanges option.
>
> But the next Incremental backup will fully back up the directory, as
> it is a new path.

That's expected behaviour -- after setting Ignore FileSet Changes, the next backup using the fileset should be upgraded, and *from then on* you're on your own.

If it turns out this is not helpful, as seems to be the case, you can tweak the catalog database to make the last (sequence of) relevant backups appear as if they had already used the new fileset. This carries a certain risk, and I suggest you carefully verify the process and the results.

In principle, Bacula keeps track of changed filesets by storing a hash of their relevant contents:

Enter SQL query: select * from fileset where fileset='TypicalServer';
+-----------+---------------+------------------------+---------------------+---------+
| filesetid | fileset       | md5                    | createtime          | content |
+-----------+---------------+------------------------+---------------------+---------+
|         4 | TypicalServer | s9/JT6+6m3lbw95ub9+S4B | 2019-08-21 12:40:32 |         |
|         5 | TypicalServer | t4/jQ9UUIG+KW++Af+NinC | 2019-08-21 12:48:31 |         |
|         6 | TypicalServer | Zm/Nd/lpVk+ee++ddQ/H2B | 2019-08-21 12:50:45 | files   |
+-----------+---------------+------------------------+---------------------+---------+

The dates are interesting, as they allow you to actually see what's going on. The md5 column is the hash.
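To see which of those fileset rows your jobs actually used, a join along these lines should work (untested sketch; 'YourJobName' is a placeholder, and the table and column names follow the standard Bacula catalog schema):

```sql
-- Sketch: recent jobs for one job name, with the fileset row each one used.
SELECT j.jobid, j.job, j.level, j.realendtime,
       j.filesetid, f.createtime AS fileset_created
FROM job j
JOIN fileset f ON f.filesetid = j.filesetid
WHERE j.name = 'YourJobName'        -- placeholder job name
ORDER BY j.realendtime DESC
LIMIT 10;
```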
The filesetid is what is referenced by a job:

select jobid,job,realendtime,jobfiles,filesetid from job where filesetid=6 and jobfiles>0 limit 1;
+--------+-----------------------------------+---------------------+----------+-----------+
| jobid  | job                               | realendtime         | jobfiles | filesetid |
+--------+-----------------------------------+---------------------+----------+-----------+
| 47,868 | radius-all.2024-10-03_23.05.04_13 | 2024-10-03 23:07:31 |       80 |         6 |
+--------+-----------------------------------+---------------------+----------+-----------+

What you can do is prepare the list of jobs to tweak (the latest full and all subsequent ones for the relevant job, to avoid misunderstandings with reference jobs), check that they all use the expected fileset, that is, the one whose filesetid references your old fileset, and then update them all to use the new filesetid. You should document that. An SQL command such as

update job set filesetid=9876543, comment='Updated file set from old 2021-09-02 to new 2024-11-05 one manually, blame me if anything goes wrong!' where jobid in (1,2,3,4,whatever-the-list-is);

would work. Create a catalog backup first, and make sure no jobs run at the time of your modification.

Test this very carefully in a test instance, and with a non-important test job in your production system. Then double-check all identifiers before actually doing the change. Did I mention you should very carefully test each step?

> Is there a way to avoid this and to "tell" the director that it is the
> same directory?

An alternative that might be interesting could be path changes in the catalog, or a very particular setup of base jobs and strip prefixes, but I wouldn't want to go in that direction... they would be more trouble than a new full backup, even if the available bandwidth-to-size ratio is very bad :-)

Good luck, and don't forget careful testing!
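One more safeguard: if your catalog runs on PostgreSQL (or another engine with transactional updates), you can wrap the change in a transaction and only commit after verifying the affected rows. Again just a sketch; all ids below are placeholders:

```sql
BEGIN;

-- Confirm the candidate jobs still reference the old fileset.
SELECT jobid, job, filesetid FROM job WHERE jobid IN (1, 2, 3, 4);

-- Switch them to the new fileset; the filesetid condition ensures we
-- only touch rows that really carry the old id.
UPDATE job
SET filesetid = 9876543,
    comment   = 'FileSet id updated manually after directory move, blame me!'
WHERE jobid IN (1, 2, 3, 4)
  AND filesetid = 6;               -- old filesetid, placeholder

-- If the reported row count matches expectations, make it permanent;
-- otherwise ROLLBACK and investigate.
COMMIT;
```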
Cheers,

Arno

> Because the SD is outside the LAN and has very poor bandwidth, the
> backup rate is really slow. I don't want to save the full 800 GB for
> a few different bytes.
>
> I thought of renaming all the corresponding path records in the
> database, but that's quite dangerous.
>
> Has any possibility with bconsole been provided?
>
> Thanks
>
> _______________________________________________
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users

--
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück