Hello,

This is an interesting problem, but it is outside the design of Bacula, because Bacula assumes that you can always make a Full backup; thereafter, you can do things like Incremental Forever and Progressive Virtual Full backups.

One thing you might try is the "stop" command. I have never used it in this manner, but it might work for you. Basically, you would start a Full backup, then once it has run long enough, issue a bconsole "stop" command. If all works well, Bacula will record everything it has backed up so far, and at some later time you can restart the job until it finishes. If this works but the subsequent Incrementals also take too long, you can try the same trick. I think you need version 7.4.x to get the stop command, as it is a relatively recent addition.
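In practice, the console session might look something like the sketch below. This is untested in this scenario; the JobId (38) and job name (NAS02-Job) are placeholders taken from your listing below, and bconsole will prompt for anything it needs if the exact syntax differs:

    * run job=NAS02-Job level=Full yes
      (let it run for your one-hour window, then:)
    * stop jobid=38
      (the job is recorded as Incomplete; on a later night:)
    * restart incomplete jobid=38

Until something like this is automated inside Bacula, you could also script the one-hour window from cron. A rough, untested sketch, assuming bconsole is on the PATH with its default configuration and prints its usual "Job queued. JobId=nn" message:

    #!/bin/sh
    # Untested sketch: give Bacula one hour, then stop the job so it is
    # recorded as Incomplete and can be restarted on a later night.
    # No error handling; JOB is a placeholder for your real job name.
    JOB=NAS02-Job

    JOBID=$(echo "run job=$JOB level=Full yes" | bconsole |
            sed -n 's/^Job queued. JobId=\([0-9]*\).*/\1/p')
    sleep 3600
    echo "stop jobid=$JOBID" | bconsole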
If stop works for you, please let me know. I will then think about how we might "automate" this stop feature within Bacula itself.

Best regards,

Kern

On 11/17/2016 02:08 PM, Paul J R wrote:
> On 17/11/16 07:26, Phil Stracchino wrote:
>> On 11/16/16 09:12, Paul J R wrote:
>>> Hi All,
>>>
>>> I have a data set that I'd like to back up that is large and not very
>>> important. Backing it up is a "nice to have", not a must-have. I've
>>> been trying to find a way to back it up to disk that isn't disruptive
>>> to the normal flow of backups, but every time I end up in a place
>>> where Bacula wants to do a full backup of it (which takes too long and
>>> ends up getting cancelled).
>>>
>>> Currently I'm using 7.0.5, and something I noticed in the 7.4 tree is
>>> the ability to resume stopped jobs, but from my brief testing it won't
>>> quite do what I'm after either. Ideally, what I'm trying to achieve is
>>> to give Bacula one hour a night to back up as much as it can and then
>>> stop. Setting a time limit doesn't work, because the backup just gets
>>> cancelled; it forgets everything it has already backed up and tries to
>>> start from scratch again the following night.
>>>
>>> VirtualFull doesn't really do what I'm after either, and I've also
>>> tried populating the database directly in a way that makes Bacula
>>> think it already has a full backup (varying results, and none of them
>>> fantastic). A full backup of the dataset in one hit isn't
>>> realistically achievable.
>>>
>>> Before I give up, though, I'm curious whether anyone has tried
>>> something similar, and what results/ideas they had that might work?
>>
>> This is a pretty difficult problem. To restate it: it sounds like you
>> are trying to create a consistent full backup, in piecewise slices an
>> hour or two at a time, of a large dataset that is changing while
>> you're trying to back it up - but without ever actually performing a
>> full backup. The problem is that you need to be able to keep the state
>> of a stopped backup for arbitrary periods, and at the same time keep
>> track of whether there have been changes to what you have already
>> backed up, and you don't have a full backup to refer back to.
>>
>> The only thing I can think of is this: is the dataset structured such
>> that you could split it [logically] into multiple chunks and back them
>> up as separate individual jobs?
>
> I don't need a consistent full; any backup it manages to do is a plus.
> I've tried splitting it into multiple job sets, but the way the dataset
> changes makes it fairly resistant to that, because the data changes its
> name as it gets older.
>
> Recovering the data when it gets broken isn't difficult (the last time
> was via a Windows machine connected to it with Samba that got
> cryptolocker'ed); it just gets synced back across the internet (which
> is a little painful on the internet link for a couple of days). Any
> backup data I have is simply a plus that means that particular chunk of
> data doesn't have to be re-pulled.
> Unfortunately, unless it completes a backup, it doesn't even record
> what it managed to back up. This backup had run for four hours and
> chewed through quite a decent chunk of data, but it never recorded the
> files it backed up (though this was only a test):
>
> +-------+-----------+---------------------+------+-------+----------+----------+-----------+
> | JobId | Name      | StartTime           | Type | Level | JobFiles | JobBytes | JobStatus |
> +-------+-----------+---------------------+------+-------+----------+----------+-----------+
> |    38 | NAS02-Job | 2016-11-15 17:33:24 | B    | F     |        0 |        0 | A         |
> +-------+-----------+---------------------+------+-------+----------+----------+-----------+
>
> Part of the reason why I was hoping to do it with Bacula is that it
> would keep some of the history that gets lost when the dataset is
> reconstructed from bare metal (which really isn't important either, in
> reality; it's just handy to have).
>
> One thing I did try was moving the data to be backed up out of the way,
> getting Bacula to run (which completed the full backup), then moving
> the data back into place, which meant Bacula did the next one as an
> incremental. But that didn't have any success either, as it didn't
> complete the incremental and didn't record what it backed up.