Hello,
I am running bacula on my site (about 100 servers, with different OSes),
and am quite happy with it.
Recently I changed my filesets to speed up the backup and avoid
compression problems with some files.
What I want is:
- not to compress already compressed files (e.g. jpg...)
- not to save
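A minimal sketch of the first goal, relying on Bacula's first-match rule for Options blocks: files matching the compressed-type patterns hit the first Options block, which sets no compression, while everything else falls through to a gzip-compressed block. The FileSet name and File path are illustrative assumptions, not from the original mail.

```conf
# Sketch only -- Name and File path are assumptions.
FileSet {
  Name = "NoRecompress"
  Include {
    Options {
      # Matched first: already-compressed formats are stored as-is,
      # because this block sets no compression option.
      wildfile = "*.jpg"
      wildfile = "*.gz"
      wildfile = "*.zip"
    }
    Options {
      # Everything else falls through here and gets gzip compression.
      compression = GZIP
    }
    File = /data
  }
}
```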
Hi
When an error occurs during a backup, the tape used gets the
status Error. How can I change the status so I can use that
tape again without needing to delete it and add it again? If I manually set
the status to Append, the job runs but I get an error saying that the
number of files on the tape
Hi,
Juan Asensio Sánchez wrote:
Hi
When an error occurs during a backup, the tape used gets the
status Error. How can I change the status so I can use that
tape again without needing to delete it and add it again? If I manually set
the status to Append, the job runs but I get an error
Hi,
I need a clarification on how a VolumeToCatalog verify job works. Until now I
thought this type of verify would read the attributes from tape (Volume) and
compare them with the attributes in the db (Catalog).
But I see high network traffic between bacula-dir and bacula-fd on the client.
The
Hi!
I tried to get concurrent jobs running, but somehow I couldn't get it going,
although I double-checked every concurrency option available.
I use Bacula 2.0.3 and the director reports the following concurrency settings:
Storage: name=GRAU MaxJobs=5
Director: name=hadrian-dir MaxJobs=5
client:
-- Forwarded message --
From: John Drescher [EMAIL PROTECTED]
Date: Aug 7, 2007 4:46 AM
Subject: Re: [Bacula-users] bacula-fd
To: tanveer haider [EMAIL PROTECTED]
On 8/7/07, tanveer haider [EMAIL PROTECTED] wrote:
I am receiving an error while running make; the file is attached that
Hi,
Ralf Gross wrote:
Hi,
I need a clarification on how a VolumeToCatalog verify job works. Until now I
thought this type of verify would read the attributes from tape (Volume) and
compare them with the attributes in the db (Catalog).
Technically true, but I believe it's the file daemon
Troy Daniels wrote:
I need a clarification on how a VolumeToCatalog verify job works. Until now I
thought this type of verify would read the attributes from tape (Volume) and
compare them with the attributes in the db (Catalog).
Technically true, but I believe it's the file daemon
On 8/7/07, Andreas Kopecki [EMAIL PROTECTED] wrote:
Hi!
I tried to get concurrent jobs running, but somehow I couldn't get it going,
although I double-checked every concurrency option available.
Do you have the Maximum Concurrent Jobs in at least 3 places in the
bacula-dir.conf? It goes
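As a sketch of those places, reusing the resource names reported earlier in the thread (hadrian-dir, GRAU) and an assumed client name, the directive would appear roughly like this:

```conf
# bacula-dir.conf -- sketch; the client name is an assumption
Director {
  Name = hadrian-dir
  Maximum Concurrent Jobs = 5
  # ... other Director settings ...
}
Storage {
  Name = GRAU
  Maximum Concurrent Jobs = 5
  # ...
}
Client {
  Name = some-client-fd
  Maximum Concurrent Jobs = 5
  # ...
}

# bacula-fd.conf on each client
FileDaemon {
  Name = some-client-fd
  Maximum Concurrent Jobs = 5
  # ...
}
```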
Hi there,
For the past day bacula has been unresponsive, e.g. not mounting tapes, not
purging records, etc.
Today I got this mail:
*cacti.poller_output
error: Can't find file: 'poller_output.MYI' (errno: 2)
cacti.poller_time
warning : Table is marked as crashed
error: Found key at
Radek Hladik wrote:
So far I have come up with these ideas:
* Backup the catalog and bootstrap files with the data
* Disable jobs/files/volumes autopruning
* Maybe modify some live CD to contain the current version of bacula, or at
least bscan (I do not know; maybe such a CD already exists)
* Create an SQL query to
Hello,
I am running bacula on my site (about 100 servers, with different OSes),
and am quite happy with it.
Recently I changed my filesets to speed up the backup and avoid
compression problems with some files.
What I want is:
- not to compress already compressed files (e.g. jpg...)
- not to save
Mike Follwerk - T²BF wrote:
Radek Hladik wrote:
So far I have come up with these ideas:
* Backup the catalog and bootstrap files with the data
* Disable jobs/files/volumes autopruning
* Maybe modify some live CD to contain the current version of bacula, or at
least bscan (I do not know; maybe such a
Hi John!
On Tuesday 07 August 2007, John Drescher wrote:
On 8/7/07, Andreas Kopecki [EMAIL PROTECTED] wrote:
I tried to get concurrent jobs running, but somehow I couldn't get it going,
although I double-checked every concurrency option available.
Do you have the Maximum Concurrent Jobs
Hi,
I would change the order because of the first-match rule:
FileSet {
Name = DataToSave
Include {
Options {
exclude = yes
wildfile = *.LDF ## list of
wildfile = *.MDF ## extensions I want to
wildfile = *.ldb ## exclude completely
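For context, a hedged sketch of how such an exclude-by-extension block typically sits inside a complete FileSet (the File path and the second Options block are assumptions, not from the original mail):

```conf
# Sketch; the File path and compression block are assumptions.
FileSet {
  Name = DataToSave
  Include {
    Options {
      exclude = yes
      wildfile = "*.LDF"   # extensions to exclude completely
      wildfile = "*.MDF"
      wildfile = "*.ldb"
    }
    Options {
      compression = GZIP   # applies to everything not excluded above
    }
    File = /data
  }
}
```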
Hi,
07.08.2007 04:02, Mark Nienberg wrote:
Mark Nienberg wrote:
If not, is it possible to simulate it with an option something like this:
Include {
Options {
Exclude = yes
}
File = \|/program.to.run.on.client
}
where the program.to.run.on.client would search
Hi,
07.08.2007 10:09, Troy Daniels wrote:
Hi,
Juan Asensio Sánchez wrote:
Hi
When an error occurs during a backup, the tape used gets the
status Error. How can I change the status so I can use that
tape again without needing to delete it and add it again? If I manually set
the
On Mon, 6 Aug 2007, Alan Brown wrote:
Actually, it's now in both 7 and fc6 extras too! W00t!
There's an RFE (feature request) filed with Red Hat for inclusion of Bacula in
RHEL4 and 5. I filed that in January.
Oops, July 2006, last updated in January.
Hi,
07.08.2007 11:48, Alexandre Chapellon wrote:
Hello,
I am running bacula on my site (about 100 servers, with different OSes),
and am quite happy with it.
Recently I changed my filesets to speed up the backup and avoid
compression problems with some files.
What I want is:
- not to
Hi,
07.08.2007 11:54, Andreas Kopecki wrote:
Hi John!
On Tuesday 07 August 2007, John Drescher wrote:
On 8/7/07, Andreas Kopecki [EMAIL PROTECTED] wrote:
I tried to get concurrent jobs running, but somehow I couldn't get it going,
although I double-checked every concurrency option
Hi,
07.08.2007 11:43, Monstad wrote:
Hi there,
For the past day bacula has been unresponsive, e.g. not mounting tapes, not
purging records, etc.
Today I got this mail:
*cacti.poller_output
error: Can't find file: 'poller_output.MYI' (errno: 2)
cacti.poller_time
warning
Hi,
07.08.2007 11:39, Mike Follwerk - T²BF wrote:
Radek Hladik wrote:
So far I have come up with these ideas:
* Backup the catalog and bootstrap files with the data
* Disable jobs/files/volumes autopruning
* Maybe modify some live CD to contain the current version of bacula, or at
least bscan (do not
Monstad wrote:
Hi there,
For the past day bacula has been unresponsive, e.g. not mounting tapes, not
purging records, etc.
Today I got this mail:
*cacti.poller_output
error: Can't find file: 'poller_output.MYI' (errno: 2)
cacti.poller_time
warning : Table is marked as
Hi Arno!
On Tuesday 07 August 2007, Arno Lehmann wrote:
07.08.2007 11:54, Andreas Kopecki wrote:
On Tuesday 07 August 2007, John Drescher wrote:
On 8/7/07, Andreas Kopecki [EMAIL PROTECTED] wrote:
I tried to get concurrent jobs running, but somehow I couldn't get it going,
although I
Hi,
Arno Lehmann wrote:
Hi,
07.08.2007 11:39, Mike Follwerk - T²BF wrote:
Radek Hladik wrote:
So far I have come up with these ideas:
* Backup the catalog and bootstrap files with the data
* Disable jobs/files/volumes autopruning
* Maybe modify some live CD to contain the current version of
On Tue, 07 Aug 2007, Benjamin E. Zeller might have said:
Hi,
this is rather urgent, can't figure it out via the docs.
How can I schedule a full backup that runs every 4 weeks on Saturday?
Schedule {
Name = red
Run = Level=Incremental mon-sat at 22:00
Run = Level=Full sun at
Benjamin E. Zeller wrote:
this is rather urgent, can't figure it out via the docs.
Sure you could. ;-)
How can I schedule a full backup that runs every 4 weeks on Saturday?
Schedule {
Name = red
Run = Level=Incremental mon-sat at 22:00
Run = Level=Full sun at 22:00
}
The 2nd
Salute,
thought this might be interesting to you, Carsten, and maybe others who
use vchanger from the Bacula Removable Disk Howto.
The issue described in my previous mails has been resolved. Granted, I
am an idiot.
The Howto states the following device section in the storage daemon
example
Your disk space suggestion was right on the money - thanks!
Turns out the bacula directory on this server is symlinked to another on a
partition that was full (I'm not 100% familiar with this server, having
recently inherited it).
The old .db file is working as normal after clearing some
Hi,
this is rather urgent, can't figure it out via the docs.
How can I schedule a full backup that runs every 4 weeks on Saturday?
Schedule {
Name = red
Run = Level=Incremental mon-sat at 22:00
Run = Level=Full sun at 22:00
}
The 2nd Run= should be the schedule mentioned above.
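Bacula's schedule date-spec has no direct "every 4 weeks" step, so a common approximation (a sketch, assuming a monthly full on the first Saturday is acceptable) pins the full to the first Saturday and keeps the remaining Saturdays incremental:

```conf
# Sketch: approximates "every 4 weeks" with "first Saturday of the month".
Schedule {
  Name = red
  Run = Level=Full 1st sat at 22:00
  Run = Level=Incremental 2nd-5th sat at 22:00
  Run = Level=Incremental mon-fri at 22:00
}
```

If an exact 4-week cycle is required, Bacula's week-of-year specs (w00-w53) can list the desired weeks explicitly, at the cost of enumerating them.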
Hi Benjamin,
On Tuesday 07 August 2007, Benjamin E. Zeller wrote:
this is rather urgent, can't figure it out via the docs.
urgent is a really bad word on any mailing list ...
How can I schedule a full backup that runs every 4 weeks on Saturday?
Schedule {
Name = red
Run =
Dear All,
I was able to build the rescue CD and burn it.
When booting from this CD everything works fine and I am able to bring the
network up, but when using the partition script I get errors from fdisk
and am not able to partition my hard disk.
The error I am getting:
Hi,
I want to use bacula to back up data spread all over the internet...
This works great when I can manage both sides of the jobs (client and
server). But I encounter problems involving firewalls with some clients
(because it's normally the director's job to contact the fd, which is
On 7 Aug 2007 at 14:08, Alexandre Chapellon wrote:
Hi,
I want to use bacula to back up data spread all over the internet...
This works great when I can manage both sides of the jobs (client and
server). But I encounter problems involving firewalls with some clients
(because it's
On 7 Aug 2007 at 14:22, Alexandre Chapellon wrote:
2007/8/7, Dan Langille [EMAIL PROTECTED]:
On 7 Aug 2007 at 14:08, Alexandre Chapellon wrote:
Hi,
I want to use bacula to back up data spread all over the
internet...
This works great when I can manage both sides of
Hi,
Before submitting a bug, I want to discuss it here on the list, to be sure
that this is a bug and not a misunderstanding of how bacula works.
I have bacula (2.0.1) configured to do concurrent jobs, and it works
very well. The only thing that isn't perfect is the restore job. I have
a default
Good morning,
I'm new to the Bacula list and I have a server with FreeBSD+Bacula; it works
very well.
My question is:
My server doesn't do simultaneous backups on the same disk. For example:
the Bacula SD has jobs for 2 servers (server1 and server2) at the same
hour. In this case my SD
The Bacula SD has jobs for 2 servers (server1 and server2) at
the same hour. In this case my SD waits for server1 to finish
before starting server2.
How can I make them run at the same time?
Thanks a lot.
Marcos, are you using the same device for these jobs? If you create a
Hi John / Tanveer:
You need to add /usr/ccs/bin to your PATH. Even if you have gcc
installed elsewhere, it is best to use Sun's native linking tools.
--PLB
At 10:47 7.8.2007, John Drescher wrote:
-- Forwarded message --
From: John Drescher [EMAIL PROTECTED]
Andreas Kopecki wrote:
Hi Arno!
On Tuesday 07 August 2007, Arno Lehmann wrote:
07.08.2007 11:54, Andreas Kopecki wrote:
On Tuesday 07 August 2007, John Drescher wrote:
On 8/7/07, Andreas Kopecki [EMAIL PROTECTED] wrote:
I tried to get concurrent jobs running, but somehow I couldn't get
Hi,
07.08.2007 13:27, Falk Sauer wrote:
Hi Benjamin,
On Tuesday 07 August 2007, Benjamin E. Zeller wrote:
this is rather urgent, can't figure it out via the docs.
urgent is a really bad word on any mailing list ...
Indeed :-)
But anyway...
How can I schedule a full backup, that
Hi,
07.08.2007 13:17, Andreas Kopecki wrote:
...
Do you have the Maximum Concurrent Jobs in at least 3 places in the
bacula-dir.conf? It goes in the main Director settings as well as in the
Storage and each Client resource in that bacula-dir.conf file, and also in
the bacula-fd.conf on the clients
Hi,
07.08.2007 13:27, Radek Hladik wrote:
Hi,
Arno Lehmann wrote:
Hi,
07.08.2007 11:39, Mike Follwerk - T²BF wrote:
Radek Hladik wrote:
So far I have come up with these ideas:
* Backup the catalog and bootstrap files with the data
* Disable jobs/files/volumes autopruning
* Maybe
Thanks for the answer.
My FD, SD and DIR have Maximum Concurrent Jobs = 20. Do I have to put it in
the server1 FD and server2 FD?
My backups on disk are all in /backup for all clients. Do you think I have
to use /backup/server1 and /backup/server2?
Thanks
-Original Message-
From: Junior
Hi Arno!
On Tuesday 07 August 2007, Arno Lehmann wrote:
07.08.2007 13:17, Andreas Kopecki wrote:
...
Do you have the Maximum Concurrent Jobs in at least 3 places in the
bacula-dir.conf? It goes in the main bacula-dir.conf settings as well
as in the Storage and each Client resource in that
Marcos,
Thanks for the answer.
My FD, SD and DIR have Maximum Concurrent Jobs = 20. Do I have to put it in
the server1 FD and server2 FD?
You must set this directive in the FDs that will have
simultaneous connections. The default value is 2
connections.
Hi,
07.08.2007 16:11, Andreas Kopecki wrote:
...
Perhaps you have different priorities.
Anyway, we will need more detailed information for further help...
either a complete job output from 'show job=...' for two jobs that
should run in parallel, but don't, or the complete job setup from
On Thu, 2007-08-02 at 20:31 -0400, John Drescher wrote:
Below I've included some output from tapeinfo, as the SCSI tape driver is
the most likely source of the slowdown. I've tried 'mt -f /dev/nst0
stsetoptions buffer-writes async-writes' with no effect.
I'd really appreciate some ideas
On 7 Aug 2007 at 11:57, Tod Hagan wrote:
On Thu, 2007-08-02 at 20:31 -0400, John Drescher wrote:
Below I've included some output from tapeinfo, as the SCSI tape driver is
the most likely source of the slowdown. I've tried 'mt -f /dev/nst0
stsetoptions buffer-writes async-writes' with no
On Tuesday 07 August 2007 04:08, Alexandre Chapellon wrote:
Hi,
I want to use bacula to back up data spread all over the internet...
This works great when I can manage both sides of the jobs (client and
server). But I encounter problems involving firewalls with some clients
(because
Hi all,
I'm having some trouble figuring out how to catch up when someone has
forgotten to put a tape in, or if I manually schedule a job that requires a
different pool than the tape that is loaded.
I think a real-world example is in order. My fulls are on the first
weekend of the month, diffs
On 8/7/07 5:32 PM, Charles Sprickman [EMAIL PROTECTED] wrote:
Hi all,
I'm having some trouble figuring out how to catch up when someone has
forgotten to put a tape in, or if I manually schedule a job that requires a
different pool than the tape that is loaded.
I think a real-world example
Hi,
I run my VolumeToCatalog verify jobs with my backup server specified as
the client, to minimise network impact for this reason.
So the Client option doesn't have to be the same as the client name
that was used for the backup? I've never thought about this.
Neither had I at first