On Wed, Jul 15, 2009 at 09:50:43PM -0400, John Drescher wrote:
On Wed, Jul 15, 2009 at 8:15 PM, terrycter...@woa.com.au wrote:
Martin Simmons wrote:
The best thing is to find out why it happened, before deciding how to correct it.
So the question still stands of the recommended way
Hi everybody. I have a problem with jobs that stop and wait for an appendable
volume. E.g. 2 jobs fail because of network problems, the other jobs are in the
running state, except one that is waiting for an appendable volume. And all jobs
are waiting. Why? I use autolabelled volumes (files), and most of the time
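A volume stops being appendable once it hits one of its Pool limits, so those are worth checking first. A minimal sketch of an autolabelling file Pool, with hypothetical names and limits:

  Pool {
    Name = FilePool
    Pool Type = Backup
    Label Format = "Vol-"          # autolabel new file volumes, e.g. Vol-0001
    Maximum Volume Bytes = 5G      # volume is marked Full after ~5 GB
    Volume Retention = 30 days
    Recycle = yes
    AutoPrune = yes
  }

The matching Device resource in bacula-sd.conf also needs LabelMedia = yes, otherwise the SD will sit waiting for a hand-labelled volume.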
On Thu, Jul 16, 2009 at 3:39 AM, Lukasz PUZON Brodowski pu...@eska.pl wrote:
Hi everybody. I have a problem with jobs that stop and wait for an appendable
volume. E.g. 2 jobs fail because of network problems, the other jobs are in the
running state, except one that is waiting for an appendable volume. And all
-----Original Message-----
From: John Drescher [mailto:dresche...@gmail.com]
Sent: 16 July 2009 10:39
To: Lukasz PUZON Brodowski
CC: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] waiting for appendable volume - why?
On Thu, Jul 16, 2009 at 3:39 AM, Lukasz PUZON
In your output I am concerned about the following:
16-Jul 04:03 serwer-news-fd JobId 146: Fatal error: Authorization key
rejected by Storage daemon.
Please see http://www.bacula.org/rel-manual/faq.html#AuthorizationErrors for
help.
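That error almost always means the password in the Director's Storage resource does not match the one in the Storage daemon's Director resource. A sketch of the pair that has to agree (names and password are placeholders):

bacula-dir.conf:
  Storage {
    Name = File1
    Address = sd.example.com
    SDPort = 9103
    Password = "same-secret"     # must match the SD side below
    Device = FileStorage
    Media Type = File
  }

bacula-sd.conf:
  Director {
    Name = bacula-dir            # must match the Director's own Name
    Password = "same-secret"
  }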
16-Jul 04:03 serwer-www-fd JobId 145: Fatal error:
Hi,
I've been trying to use bacula for more than a month. I have more than 2000 hosts to
back up. We want to do a backup every day: one Full backup per month and
Incrementals every other day. Now I have 10 bacula-sd and one bacula-dir,
and I have added only 300 hosts to the schedule. I automated adding hosts to
On Thu, Jul 16, 2009 at 5:31 AM, Lukasz PUZON Brodowski pu...@eska.pl wrote:
In your output I am concerned about the following:
16-Jul 04:03 serwer-news-fd JobId 146: Fatal error: Authorization key
rejected by Storage daemon.
Please see http://www.bacula.org/rel-
On Thu, Jul 16, 2009 at 12:09:32PM +0200, Marcin Radczuk wrote:
I've been trying to use bacula for more than a month. I have more than 2000 hosts to
back up. We want to do a backup every day: one Full backup per month and
Incrementals every other day. Now I have 10 bacula-sd and one bacula-dir,
and I have added
On Thu, Jul 16, 2009 at 6:45 AM, Lukasz PUZON Brodowski pu...@eska.pl wrote:
On Thu, Jul 16, 2009 at 5:31 AM, Lukasz PUZON Brodowski pu...@eska.pl wrote:
In your output I am concerned about the following:
16-Jul 04:03 serwer-news-fd JobId 146: Fatal error: Authorization key rejected by
On Thu, Jul 16, 2009 at 5:31 AM, Lukasz PUZON Brodowski pu...@eska.pl wrote:
In your output I am concerned about the following:
16-Jul 04:03 serwer-news-fd JobId 146: Fatal error: Authorization key rejected by Storage daemon.
Please see http://www.bacula.org/rel-
On Thu, Jul 16, 2009 at 6:46 AM, John Drescher dresche...@gmail.com wrote:
On Thu, Jul 16, 2009 at 6:45 AM, Lukasz PUZON Brodowski pu...@eska.pl wrote:
On Thu, Jul 16, 2009 at 5:31 AM, Lukasz PUZON Brodowski pu...@eska.pl wrote:
In your output I am concerned about the following:
16-Jul
On Thu, Jul 16, 2009 at 11:45:53AM +0100, Graham Keeling wrote:
I had similar problems. I had to define 'Maximum Concurrent Jobs' in many
places to get it to work.
Currently, I have it like this:
bacula-dir.conf:
Director { Maximum Concurrent Jobs = 20; }
Storage { Maximum
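In full, the usual set of places spans all three daemons; a sketch with hypothetical values:

bacula-dir.conf:
  Director { Maximum Concurrent Jobs = 20; }
  Storage  { Maximum Concurrent Jobs = 20; }   # in each Storage resource
  Client   { Maximum Concurrent Jobs = 5; }    # per client
  Job      { Maximum Concurrent Jobs = 5; }    # per job, if needed

bacula-sd.conf:
  Storage { Maximum Concurrent Jobs = 20; }

bacula-fd.conf:
  FileDaemon { Maximum Concurrent Jobs = 5; }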
On Thu, Jul 16, 2009 at 6:45 AM, Graham Keeling gra...@equiinet.com wrote:
On Thu, Jul 16, 2009 at 12:09:32PM +0200, Marcin Radczuk wrote:
I've been trying to use bacula for more than a month. I have more than 2000 hosts to
back up. We want to do a backup every day: one Full backup per month and
Incrementals
On Thu, Jul 16, 2009 at 6:53 AM, Uwe Schuerkamp hoo...@nionex.net wrote:
On Thu, Jul 16, 2009 at 11:45:53AM +0100, Graham Keeling wrote:
I had similar problems. I had to define 'Maximum Concurrent Jobs' in many
places to get it to work.
Currently, I have it like this:
bacula-dir.conf:
Hi,
I've been trying to use bacula for more than a month. I have more than 2000 hosts
to back up. We want to do a backup every day: one Full backup per month and
Incrementals every other day. Now I have 10 bacula-sd and one bacula-dir,
and I have added only 300 hosts to the schedule. I automated adding hosts to
Thanks for the fast reply.
I use the Maximum Concurrent Jobs directive in bacula-dir.conf, the storage daemon and the fd.
But the problem is that I have one pool per backed-up host:
Client {
Name = CLIENTNAME-fd
Address = CLIENTNAME.atm
FDPort = 9102
Catalog = MyCatalog
Password = dupa
File Retention =
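A per-client Pool to go with a Client resource like that might look like the following sketch (CLIENTNAME is a placeholder, as above):

  Pool {
    Name = CLIENTNAME-pool
    Pool Type = Backup
    Label Format = "CLIENTNAME-"   # autolabel volumes per client
    Volume Retention = 30 days
    Recycle = yes
    AutoPrune = yes
  }

Note that a storage device mounts only one volume at a time, so per-client pools that share a single device will serialize their jobs no matter how high Maximum Concurrent Jobs is set.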
Hello people!
I have bacula version backup-dir Version: 2.4.4 (28 December 2008)
i486-pc-linux-gnu debian 5.0, installed on an Ubuntu 9.1 server. I have 2 tape
drives attached to this server (a DLT 40/80GB and a Sony SDX470V 40/102GB).
I've run btape on both tapes and everything is fine on the
Eduardo Sieber schrieb:
I have bacula version backup-dir Version: 2.4.4 (28 December 2008)
i486-pc-linux-gnu debian 5.0, installed on an Ubuntu 9.1 server. I have 2 tape
drives attached to this server (a DLT 40/80GB and a Sony SDX470V 40/102GB).
I've run btape on both tapes and everything
2009/7/16 Eduardo Sieber sie...@gmail.com:
Hello people!
I have bacula version backup-dir Version: 2.4.4 (28 December 2008)
i486-pc-linux-gnu debian 5.0, installed on an Ubuntu 9.1 server. I have 2 tape
drives attached to this server (a DLT 40/80GB and a Sony SDX470V 40/102GB).
I've run
Oh, my mistake...
Thank you a lot, ppl!
I'll have to buy a better tape drive :)
On Thu, Jul 16, 2009 at 9:43 AM, Ralf Gross ralf-li...@ralfgross.de wrote:
Eduardo Sieber schrieb:
I have bacula version backup-dir Version: 2.4.4 (28 December 2008)
i486-pc-linux-gnu debian 5.0, installed on
Eduardo Sieber scripsit:
I have 2
tape drives attached to this server (a DLT 40/80GB and a Sony SDX470V
40/102GB).
I've run btape on both tapes and everything is fine in the test.
So, I have a job, and the estimate command for this job says:
000 OK estimate files=94902
2009/7/16 Eduardo Sieber sie...@gmail.com:
Oh, my mistake...
Thank you a lot, ppl!
I'll have to buy a better tape drive :)
Or more tapes. I recommend LTO drives, although they are not cheap.
For sizing, remember the native capacities:
LTO1 - 100GB
LTO2 - 200GB
LTO3 - 400GB
LTO4 - 800GB
You also have to define a different media type for each pool, and thus a
different storage definition pointing to its own sd device (unique at any one
time). If you have followed the default install, I suspect you have only one
media type = File.
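A sketch of two file devices with distinct media types, each usable by its own pool (all names hypothetical):

bacula-sd.conf:
  Device {
    Name = FileStorage1
    Device Type = File
    Media Type = File1
    Archive Device = /backup/dev1
    LabelMedia = yes; Random Access = yes; AutomaticMount = yes;
  }
  Device {
    Name = FileStorage2
    Device Type = File
    Media Type = File2
    Archive Device = /backup/dev2
    LabelMedia = yes; Random Access = yes; AutomaticMount = yes;
  }

bacula-dir.conf:
  Storage { Name = File1; Device = FileStorage1; Media Type = File1; Address = sd.example.com; Password = "..." }
  Storage { Name = File2; Device = FileStorage2; Media Type = File2; Address = sd.example.com; Password = "..." }

Each pool then sticks to volumes of its own media type, so the Director always knows which device a given volume can be mounted on.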
mradczuk wrote:
Thanks for the fast reply.
I use Maximum
Marcin Radczuk wrote:
Hi,
I've been trying to use bacula for more than a month. I have more than 2000 hosts to
back up. We want to do a backup every day: one Full backup per month and
Incrementals every other day. Now I have 10 bacula-sd and one bacula-dir,
and I have added only 300 hosts to the schedule. I
Graham Keeling wrote:
On Thu, Jul 16, 2009 at 12:09:32PM +0200, Marcin Radczuk wrote:
I've been trying to use bacula for more than a month. I have more than 2000 hosts to
back up. We want to do a backup every day: one Full backup per month and
Incrementals every other day. Now I have 10 bacula-sd and one
On Thu, Jul 16, 2009 at 05:18:24AM -0700, Kevin Keane wrote:
Graham Keeling wrote:
On Thu, Jul 16, 2009 at 12:09:32PM +0200, Marcin Radczuk wrote:
I've been trying to use bacula for more than a month. I have more than 2000 hosts to
back up. We want to do a backup every day: one Full backup per month
On Thu, 16 Jul 2009 12:09:32 +0200, Marcin Radczuk said:
Hi,
I've been trying to use bacula for more than a month. I have more than 2000 hosts to
back up. We want to do a backup every day: one Full backup per month and
Incrementals every other day. Now I have 10 bacula-sd and one bacula-dir,
and I have
Kevin Keane-2 wrote:
Marcin Radczuk wrote:
Hi,
I've been trying to use bacula for more than a month. I have more than 2000 hosts to
back up. We want to do a backup every day: one Full backup per month and
Incrementals every other day. Now I have 10 bacula-sd and one bacula-dir,
and I have added only 300
On Thu, Jul 16, 2009 at 08:48:03AM -0700, mradczuk wrote:
I can't use multiple Device definitions because of restore problems.
That is why I will use more bacula-sd processes on one machine. This
gives me more jobs running at the same time. I'm worried about DB
performance now.
How about multiple
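One way to run several storage daemons on one box is to give each instance its own config file and port; a sketch (names, ports and paths hypothetical):

# second instance, started as: bacula-sd -c /etc/bacula/bacula-sd2.conf
bacula-sd2.conf:
  Storage {
    Name = backup2-sd
    SDPort = 9104                       # the first instance keeps 9103
    Working Directory = /var/lib/bacula2
    Pid Directory = /var/run
  }

bacula-dir.conf:
  Storage {
    Name = File2
    Address = sd.example.com
    SDPort = 9104                       # must match the second instance
    Device = FileStorage2
    Media Type = File2
    Password = "..."
  }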
I've just attempted to add a couple of Mac OS X (Tiger - yes, still...)
clients to our bacula system using the current 3.0.1 bacula-fd.
Backups on these clients have a couple of problems. First, there's the
fact that they both choke to death a few GB into the initial Full backup
with a series of
Hello,
We've been intermittently having an issue with backups failing due to the error
"Spool block too big". It has happened exactly 10 times since 4/27/09. It
generally happens during large backups (900GB+).
The most recent error happened after the data had been spooled, and was being
written
On Thu, Jul 16, 2009 at 4:28 PM, teksupptom bacula-fo...@backupcentral.com wrote:
Hello,
We've been intermittently having an issue with backups failing due to the
error "Spool block too big". It has happened exactly 10 times since 4/27/09. It
generally happens during large backups (900GB+).
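For context, the spool limits live in the Device resource of bacula-sd.conf; a sketch of the usual directives, with hypothetical values (diagnosing "Spool block too big" itself likely needs the exact SD version and the full job log):

bacula-sd.conf:
  Device {
    ...
    Spool Directory = /var/spool/bacula
    Maximum Spool Size = 300G           # total spool space this device may use
    Maximum Job Spool Size = 100G       # cap per running job
  }

bacula-dir.conf:
  Job { Spool Data = yes; }             # enable spooling for the job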
Hi.
Help me please. It always does a Full backup when it should do an Incremental
(JobId 9 should be Incremental). The FileSet did not change. The previous job
finished without error.
Thanks for any advice.
Client {
Name = ua22-fd
Address = 91.206.5.63
FDPort = 9102
Catalog = MyCatalog
Password = w #
On Thu, Jul 16, 2009 at 11:41:53PM +0300, Slava Dubrovskiy wrote:
Help me please. It always does a Full backup when it should do an Incremental
(JobId 9 should be Incremental). The FileSet did not change. The previous job
finished without error.
I tried to help on bac...@freenode but wasn't very successful.
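For what it's worth, the Director silently upgrades an Incremental to a Full when it finds no prior Full for the same Job, Client and FileSet in the catalog, or when the FileSet definition has changed. The job report usually says why, with a line like:

  No prior Full backup Job record found.
  No prior or suitable Full backup found in catalog. Doing FULL backup.

The job history can be checked from bconsole:

  *list jobs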
Can someone explain to me how a migration or copy is generally supposed to
work? In my mind, I would like to take a full volume, which has
Full/Differential/Incremental backups in it, and copy or migrate it to
another storage server. I know the volume contains good backups and is
marked as Full.
On Thu, 2009-07-16 at 16:57 -0800, Bob Gamble wrote:
Can someone explain to me how a migration or copy is generally
supposed to work? In my mind, I would like to take a full volume,
which has Full/Differential/Incremental backups in it, and copy or
migrate it to another storage server.
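Roughly: Copy and Migration jobs select Jobs (not raw volumes) from a source pool and write them to the pool named by that pool's Next Pool, whose Storage can point at the other server; Selection Type = Volume with a Selection Pattern is the closest thing to copying a whole volume. A sketch with hypothetical names:

bacula-dir.conf:
  Job {
    Name = copy-to-server2
    Type = Copy
    Pool = Full-Pool                    # source pool
    Selection Type = PoolUncopiedJobs   # copy every job not yet copied
    Client = any-fd                     # required by the parser, not used for selection
    FileSet = "Full Set"                # likewise
    Messages = Standard
  }

  Pool {
    Name = Full-Pool
    Pool Type = Backup
    Storage = File1                     # where the source volumes live
    Next Pool = Offsite-Pool            # destination of the copies
  }

  Pool {
    Name = Offsite-Pool
    Pool Type = Backup
    Storage = File2                     # the other storage server
  }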
Fatal error: Network error with FD during Backup: ERR=Keine Daten verfügbar ("no data available")
Fatal error: No Job status returned from FD.
ERROR in tls.c:83 TLS read/write failure.: ERR=error:1408F119:SSL
routines:SSL3_GET_RECORD:decryption failed or bad record mac
You could try eliminating TLS as part
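To take TLS out of the picture, it can be disabled on both matching resources of the affected connection; a sketch (names hypothetical):

bacula-dir.conf:
  Client {
    Name = problem-fd
    ...
    TLS Enable = no
  }

bacula-fd.conf:
  Director {
    Name = bacula-dir
    ...
    TLS Enable = no
  }

If backups then run cleanly, certificates, clock skew, or mismatched OpenSSL versions on the two ends are the next things to check.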