Hello,
- setup pools as in the example Kern gave in Using Pools to Manage
Volumes, with some changes: volume retention for full is only 2 months.
Another thing I would like is to have unique names for the volumes
created, according to the job name (is this a good idea?)
Why not, but unless
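For reference, a minimal sketch of such a pool (the pool and label names here are my own, not from the original post; whether the job name itself can be embedded in the label depends on the variable-expansion support compiled into your Bacula):

```conf
# Hypothetical Full pool, shortened retention as described above.
Pool {
  Name = Full-Pool
  Pool Type = Backup
  Volume Retention = 2 months   # shortened from the book example
  AutoPrune = yes
  Recycle = yes
  # Auto-labels volumes as Full-0001, Full-0002, ...
  # Embedding the job name would need variable expansion
  # (e.g. something like "${Job}-"), which is build-dependent.
  Label Format = "Full-"
}
```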
On 2 May 2005 at 7:17, Lars Köller wrote:
--
In reply to Dan Langille who wrote:
On 1 May 2005 at 21:52, dave wrote:
I've got a working bacula install running on a FreeBSD 5.3
system with a
mysql database. My problem is on boot bacula starts before mysql,
Dan Langille wrote:
On 2 May 2005 at 7:17, Lars Köller wrote:
--
In reply to Dan Langille who wrote:
On 1 May 2005 at 21:52, dave wrote:
I've got a working bacula install running on a FreeBSD 5.3
system with a
mysql
Hello,
Paulo Victor Fernandes wrote:
Hello bacula-users,
I'm having a bit of trouble understanding the following situation:
if we set 'Maximum Volumes = 7' (the number of days in a week),
then set 'Volume Retention = 10' (for example),
and then set 'Recycle = yes',
wouldn't this become useless since
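To make the interaction concrete, here is an illustrative pool (names are mine, and I am reading the "10" as days). With only 7 volumes written daily, no volume becomes recyclable before its 10-day retention expires, so jobs can end up waiting for appendable media; either the retention or the volume count has to give:

```conf
Pool {
  Name = Weekly-Pool
  Pool Type = Backup
  Maximum Volumes = 7       # one volume per weekday
  Volume Retention = 10 days
  AutoPrune = yes
  Recycle = yes             # only takes effect once retention expires
}
```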
Hi.
Piet le Roux wrote:
A client needs to back up user files from +/- 800 Windows workstations at
least once a week to a Linux server.
We estimated about 5 MB of changed files per workstation per week.
That's a very low estimate. Are you sure about that?
Anyone with experience with a similar
Oops, forgot something...
put the catalog onto another machine dedicated to the database. MySQL
seems to be the best choice at the moment. Depending on how many files
are actually saved, you need a fast database.
Arno
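A sketch of what that could look like in the Director configuration (host name and credentials are placeholders, not anything from the original post):

```conf
# Catalog resource pointing at a dedicated MySQL machine.
Catalog {
  Name = MyCatalog
  dbname = bacula
  DB Address = db.example.com   # the dedicated database host
  user = bacula
  password = "notverysecret"
}
```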
Piet le Roux wrote:
A client needs to back up user files from +/- 800 Windows
Hi.
Ludovic Strappazon wrote:
Hello,
Running Jobs:
JobId Level Name Status
==
7 Increme sunserv-data.2005-05-02_08.09.35 is waiting on max
Storage jobs
6 Increme
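The "waiting on max Storage jobs" status usually means the Storage resource's concurrency limit was reached. A hedged sketch of raising it in the Director configuration (all names, addresses, and passwords below are placeholders):

```conf
Storage {
  Name = File-Storage
  Address = backup.example.com
  SDPort = 9103
  Password = "storage-password"
  Device = FileDevice
  Media Type = File
  Maximum Concurrent Jobs = 4   # default is 1
}
```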
Title: Re: [Bacula-users] Problem with concurrent jobs, pools, volumes ;-)
Hi Arno!
Again, thanks for your help. It seems that I coincidentally did
the right thing today when trying another configuration, and now
I've got the confirmation ;-))
Chris
Hello,
My backup job stops progressing after backing up some GB, and these
messages appear in the logs:
Error: block.c:551 Write error at 0:1860 on device /dev/nst0.
ERR=Input/output error.
Error writing final EOF to tape. This tape may not be readable.
dev.c:1212 ioctl MTWEOF
Any ideas why I would get
this error?
02-May 20:33 kninfratemp-sd: mycastleapp01.2005-05-01_01.05.01 Fatal
error: spool.c:315 Spool block too big. Max 64512 bytes, got 909259313
This error seems to happen when full backups happen.
Thanks in advance.
-Jeff Humes