Hi,
I have to backup a remote site with several TB of data over a slow
connection (varying between 10-15 GB per hour). Under these
circumstances, one of my initial Full jobs gets killed after 6 days:
30-Aug 21:48 gothmog-dir JobId 34535: Fatal error: Network error with FD during Backup:
On 31/08/2011 14:53, Uwe Bolick wrote:
> Hi,
> I have to backup a remote site with several TB of data over a slow
> connection (varying between 10-15 GB per hour). Under these
> circumstances, one of my initial Full jobs gets killed after 6 days:

Bacula has a hardcoded time limit on jobs of 6 days.
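For reference: at 10-15 GB per hour, each TB takes roughly three to four days, so an initial Full of several TB cannot finish inside six days. Depending on the Bacula version, explicitly setting Max Run Time (and/or Max Wait Time, which governed the watchdog in some older releases) in the Job resource is the usual way to raise the per-job limit. A minimal sketch; all resource names and the 14-day value are illustrative, not from this thread:

```conf
# Sketch of a Job resource raising the run-time limit.
# Names (RemoteSiteFull, remote-fd, ...) are hypothetical.
Job {
  Name     = "RemoteSiteFull"
  Type     = Backup
  Level    = Full
  Client   = remote-fd
  FileSet  = "RemoteSiteFiles"
  Storage  = Tape
  Pool     = Default
  Messages = Standard
  # Allow the initial Full to run longer than the 6-day default;
  # on some older versions Max Wait Time is the directive that matters.
  Max Run Time  = 14 days
  Max Wait Time = 14 days
}
```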
On 31/08/2011 15:33, Jeremy Maes wrote:
> On 31/08/2011 14:53, Uwe Bolick wrote:
> > [...]
Hi!
I want to ask whether there is any development underway to support archive
jobs, as suggested in the Bacula Project Design Blog:
http://sourceforge.net/apps/wordpress/bacula/2009/09/26/archive/
Regards
--
Peter Allgeyer Salzburg|Research Forschungsgesellschaft mbH
Dipl.-Inform. Univ.
Thank you for your reply.

On Wed, Aug 31, 2011 at 03:40:58PM +0200, Marcello Romani wrote:
> ...
> Bacula has a hardcoded time limit on jobs of 6 days. Kern called it an
> insanity check, as any job that runs that long isn't all that useful...
> See
Thank you for your answer.

On Wed, Aug 31, 2011 at 04:20:20PM +0200, Andre Lorenz wrote:
> ...
> I solved this problem by splitting up the data that has to be backed
> up. Since the amount of data going to tape per job is smaller, the
> backup runs faster, and restores are much easier ;-)
>
> andre
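Andre's splitting approach might look like this in bacula-dir.conf: one smaller FileSet and Job per top-level directory instead of a single multi-TB job. All names and paths below are made up for illustration:

```conf
# Hypothetical split of one huge backup into two independent jobs.
FileSet {
  Name = "Remote-Home"
  Include {
    Options { signature = MD5 }
    File = /export/home
  }
}
FileSet {
  Name = "Remote-Projects"
  Include {
    Options { signature = MD5 }
    File = /export/projects
  }
}

# Shared settings (Client, Storage, Pool, Schedule) live in one JobDefs.
Job {
  Name    = "RemoteFull-Home"
  JobDefs = "RemoteDefaults"
  FileSet = "Remote-Home"
}
Job {
  Name    = "RemoteFull-Projects"
  JobDefs = "RemoteDefaults"
  FileSet = "Remote-Projects"
}
```

Each job then finishes well inside the time limit, and a restore only has to read the tapes belonging to the affected subset.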
Thank you for your feedback, everybody.
I have changed the config and retried - it failed again.
However, it seemed to go a bit further - it failed at a slightly different
point (this time I believe it changed the tape).
Here is the error:
Konstantin Khomoutov writes:
hymie! hy...@lactose.homelinux.net wrote:
> A volume will not be recycled until the Volume Retention has expired,
> even if all of the backups stored in that volume have expired. If my
> Job Retention is 1 month, and my Volume Retention is 3 months, then my
> volumes
Hey Hymie...
Job Retention only controls pruning old job records from the catalog. What
makes a volume recyclable is the Volume Retention, counted from when the
volume becomes Full or Used. I always set my File and Job Retention longer
than my Volume Retention, because when it expires and the volume turns
Purged, the job
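The relationship described above - File/Job Retention set on the Client resource, Volume Retention on the Pool, with the catalog retention kept longer - might look like this in bacula-dir.conf. All names and values are illustrative, not from the thread:

```conf
Client {
  Name     = example-fd
  Address  = backup-client.example.org
  Password = "changeme"
  Catalog  = MyCatalog
  # Keep catalog records longer than the volumes, as suggested above,
  # so job/file entries still exist when a volume is purged.
  File Retention = 4 months
  Job Retention  = 4 months
}

Pool {
  Name      = FullPool
  Pool Type = Backup
  Recycle   = yes
  AutoPrune = yes
  # Counted from the moment the volume is marked Full or Used;
  # after this the volume becomes Purged and can be recycled.
  Volume Retention = 3 months
}
```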
On Aug 31, 2011, at 12:23 PM, morgan_cox wrote:
> [...]