Jason Dixon [EMAIL PROTECTED] writes:
That was just an overview. Each Job is tied to a single client. I
haven't been able to get this working properly yet; the lower
priority jobs always multiplex (to use a NetBackup term)
concurrently and force the higher priority job to wait.
My patch
Alan Brown [EMAIL PROTECTED] writes:
On Mon, 6 Oct 2008, Kjetil Torgrim Homme wrote:
This directive is only implemented in version 2.5 and later. When
set to {\bf yes} (default {\bf no}), this job may run even if lower
priority jobs are already running. This means a high priority
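For reference, a minimal sketch of how that directive is used (all names hypothetical, not from this thread):

Job {
  Name = "urgent-backup"           # hypothetical
  Type = Backup
  Client = server1-fd
  FileSet = "Full Set"
  Storage = Tape
  Pool = Default
  Messages = Standard
  Priority = 5                     # lower number = higher priority
  Allow Mixed Priority = yes       # may start even while priority-10 jobs run
}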
gvm999 [EMAIL PROTECTED] writes:
I found out I accidentally deleted volumes with a full backup. Or
the files are not in the database anymore.
the difference is significant, but it sounds like the latter is your
problem.
So now I wonder, how can I make bacula restore all the files that
have
T. Horsnell [EMAIL PROTECTED] writes:
I can recover the catalog.sql file with bextract, but do I then have
to start using mysql commands to convert this ascii file into a
mysql database, or are there bacula commands to do it? The catalog
maintenance section doesn't tell me
you need to create
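The usual sequence is roughly this (a sketch, assuming a MySQL catalog and
that catalog.sql came out of make_catalog_backup; the helper script ships
in the Bacula source tree):

mysql -u root -e 'CREATE DATABASE bacula'
./grant_mysql_privileges             # from src/cats in the Bacula source
mysql -u root bacula < catalog.sql   # the dump recreates tables and rows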
Guy Matz [EMAIL PROTECTED] writes:
nope, not that either. are you guys just taking stabs in the dark!? ;-)
that's a bit rich coming from someone who obviously hasn't bothered
to read the manual himself. you can only use this kind of syntax
inside file-lists. the @filename syntax works
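To illustrate the distinction (a sketch, file names hypothetical): inside a
FileSet's file-list, a leading < pulls paths from an external file, while a
line beginning with @ anywhere in a conf file simply inlines another conf file:

FileSet {
  Name = "FromList"
  Include {
    Options { Signature = MD5 }
    File = "</etc/bacula/filelist.txt"  # one path per line, read at job start
  }
}
# whereas, at the top level of any conf file:
# @/etc/bacula/extra-jobs.conf          # textually includes that file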
John Drescher [EMAIL PROTECTED] writes:
Thomas Arseneault [EMAIL PROTECTED] wrote:
The storage daemon died on my storage box, causing the dump to
fatally error out. I fixed the daemon problem, but now I have 4
tapes that I would like to reuse. They are far from their
retention time, so how do
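The usual way out is to purge the volumes by hand from bconsole; purging
ignores retention, so be sure nothing on these tapes is still needed
(volume names hypothetical):

echo "purge volume=Tape0001" | bconsole
echo "purge volume=Tape0002" | bconsole
# with Recycle = yes in the pool, purged volumes go back into rotation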
Marc Richter [EMAIL PROTECTED] writes:
Further, I want to know why bacula recycles Volumes (which destroys
data) instead of creating new ones, and whether this is a bug which
should be reported.
I believe this is intended behaviour, and it's the behaviour I want.
after all, when I tell Bacula I
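For the opposite policy, a Pool can be told to grow instead of recycle; a
sketch, not from the original thread (for disk volumes the SD Device also
needs LabelMedia = yes):

Pool {
  Name = File
  Pool Type = Backup
  Label Format = "Vol-"     # autolabel, so new volumes are created on demand
  Maximum Volumes = 0       # no cap, so recycling is never forced
  Recycle = no              # never overwrite a written volume
}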
Henry Jensen [EMAIL PROTECTED] writes:
because of bug http://bugs.bacula.org/view.php?id=1147 I built the
fd from latest svn without replacing the director (version 2.2.5) on
my backup server.
Then I got the following error:
bserver-dir JobId 6856: Fatal error: File daemon at
James Harper [EMAIL PROTECTED] writes:
Also, if your link is in any way unreliable (e.g. some chance of
dropping for a second or two whilst the backup is running) then the
entire running job (or jobs, if you are running concurrently) could
be failed.
this sounds strange. TCP will retransmit
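One related mitigation worth naming: NAT or firewall state timeouts can kill
an idle control channel even though TCP itself would recover, and the
Heartbeat Interval directive exists for that (a sketch; name and paths
hypothetical):

FileDaemon {
  Name = client1-fd
  Working Directory = /var/bacula
  Pid Directory = /var/run
  Heartbeat Interval = 60   # keepalives on otherwise idle connections
}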
Dan Langille [EMAIL PROTECTED] writes:
I know of another project that does something similar: Nagios. Nagios
has a sister project, Lilac (previously called Fruity), a web-based
interface for maintaining your Nagios configuration files. It is
quite good. You export your
Brian A. Seklecki [EMAIL PROTECTED] writes:
B-W feature question:
Could we write a query that examines overall pool volume write
capacity?
no, since the configuration files need to be available.
It would be nice to track historical bytes written by a select set
of jobs (normally started
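As long as the Job records have not been pruned, the historical part is a
plain catalog query (a sketch against the standard schema; job names
hypothetical):

mysql bacula <<'EOF'
SELECT Name, COUNT(*) AS runs, SUM(JobBytes) AS bytes
  FROM Job
 WHERE Name IN ('server1-backup', 'server2-backup')
 GROUP BY Name;
EOF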
Dan Langille [EMAIL PROTECTED] writes:
On Nov 5, 2008, at 12:41 PM, Kjetil Torgrim Homme wrote:
another approach is to write an Augeas lens for the Bacula
configuration. it may be necessary/useful to restrict the Bacula
syntax somewhat, e.g., require users to remove the optional spaces
Erik Logtenberg [EMAIL PROTECTED] writes:
I would very much like to see the possibility to make only
incremental backups, no more full backups required. [consolidated
backups]
I have a wish which sounds similar, but is actually very different. I
want to avoid having to schedule Full backups.
Jari Fredriksson [EMAIL PROTECTED] writes:
Got an error: the whole machine hung when the backup job was well over 2GB.
Changed to Maximum Part Size 800M, which is apparently there for a reason.
Next error. Job finished successfully [...]
BUT. As it writes, it took one volume. Job wrote
I'm considering turning on Rerun Failed Levels, but the manual says:
[...] the Ignore FileSet Changes directive is not considered when
checking for failed levels, which means that any FileSet change
will trigger a rerun.
I don't understand what that means...
let's say that the
Dan Langille [EMAIL PROTECTED] writes:
Kjetil Torgrim Homme wrote:
I'm considering turning on Rerun Failed Levels, but the manual says:
[...] the Ignore FileSet Changes directive is not considered when
checking for failed levels, which means that any FileSet change
will trigger
Tilman Schmidt [EMAIL PROTECTED] writes:
My suggestions for improvement (and no, I don't understand the Bacula
source code nearly well enough to try and implement them myself) would be:
1. Extend the help command to accept a command name argument and
list the keywords accepted by that
Arno Lehmann [EMAIL PROTECTED] writes:
13.11.2008 12:05, Ronald Buder wrote:
Due to a server failure the NFS shares are not available anymore. I
would like to see some sort of timeout, at least if that is at all
possible.
That's not possible inside Bacula - the FD simply can't terminate
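A workaround from outside Bacula (an assumption, not from the thread): probe
the mount in a ClientRunBeforeJob script so the job fails fast instead of
hanging the FD; needs GNU coreutils timeout, and the mount point is
hypothetical:

#!/bin/sh
# nfs-check.sh, referenced as ClientRunBeforeJob; non-zero exit aborts the job
timeout 10 stat /mnt/nfsdata >/dev/null 2>&1 || {
    echo "NFS mount unresponsive, aborting backup" >&2
    exit 1
}
exit 0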
Alan Brown [EMAIL PROTECTED] writes:
The subject says it all really.
I really don't want to convert all my match lines to
.*/[T|t][E|e][M|m][P|p]/, etc etc
ignore case = yes, perhaps? :-)
http://www.bacula.org/en/rel-manual/Configuring_Director.html#SECTION00147
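The directive lives in the FileSet Options block; a sketch (names
hypothetical):

FileSet {
  Name = "NoTempDirs"
  Include {
    Options {
      Ignore Case = yes        # makes Wild/Regex matching case-insensitive
      RegexDir = ".*/temp$"    # now also matches TEMP, Temp, ...
      Exclude = yes
    }
    Options { Signature = MD5 }
    File = /
  }
}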
ASMR (Anders Sønderberg Mortensen) [EMAIL PROTECTED] writes:
I'm trying to compile the bacula file daemon version 2.4.3 for Irix
(IRIX64 bisse 6.5 07202013 IP35). Here is what I do:
setenv CC cc
./configure --enable-client-only
Which looks pretty happy apart from this
==Entering
Jason Dixon [EMAIL PROTECTED] writes:
On Wed, Nov 26, 2008 at 11:59:41AM +0100, Kshatriya wrote:
On Mon, 24 Nov 2008, Jason Dixon wrote:
We moved our Bacula Director off Linux to Solaris (not my choice)
recently. Since then, we've encountered frequent failures of the
catalog backup job
I thought I'd get back to the original question :-)
David Jurke [EMAIL PROTECTED] writes:
The problem I have is with our large (expected to grow to several
terabytes) [Oracle] server. I’m told by the DBAs that the size and
amount of activity on this database is such that putting the whole
David Jurke [EMAIL PROTECTED] writes:
I think it'd need some tweaking to delay putting each tablespace
into backup mode until Bacula is ready to back it up - one of the
problems I have is that we can't put all the tablespaces into backup
mode at the same time because of the volume of logs
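That tweak roughly amounts to per-tablespace pre/post scripts instead of one
global BEGIN BACKUP; a sketch (the tablespace name is passed in, and a
matching END BACKUP script would run afterwards):

#!/bin/sh
# begin-backup.sh TABLESPACE: put a single tablespace into hot backup mode
# just before Bacula reads its datafiles
sqlplus -s "/ as sysdba" <<EOF
ALTER TABLESPACE $1 BEGIN BACKUP;
EXIT
EOF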
David Jurke [EMAIL PROTECTED] writes:
Basically, as I understand it, your first group of comments is
about not backing up empty space, as per your example where there is
only 10GB of data in a 100GB data file. However, our database is
growing rapidly, and our DBAs tend to allocate smaller
Jesper Krogh [EMAIL PROTECTED] writes:
Can you give us the time for doing a tar to /dev/null of the fileset:
time tar cf /dev/null /path/to/maildir
Then we get a feeling for the actual read time of the files off the
filesystem.
if you're using GNU tar, it will *not* read the files if you
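GNU tar special-cases an archive written literally to /dev/null and skips
reading file contents, so a pipe is needed to force the reads:

time tar cf - /path/to/maildir | cat > /dev/null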
Alan Brown [EMAIL PROTECTED] writes:
Ext3 will perform a lot better if you use tune2fs and enable the
following features:
dir_index: Use hashed b-trees to speed up lookups in large directories.
this may be good for Maildir, but with Cyrus IMAPD, which uses
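For reference, enabling it after the fact looks like this (device name
hypothetical; run the fsck pass with the filesystem unmounted):

tune2fs -O dir_index /dev/sdb1   # turn on the feature flag
e2fsck -fD /dev/sdb1             # -D rebuilds and optimizes existing directories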
John Lockard jlock...@umich.edu writes:
But priority also postpones any jobs of higher priority.
If a job of priority 20 is currently running and you start
several other jobs with priorities of 10, 20 and 30, then
the only jobs which will run concurrently will be the jobs
of priority
[Kern Sibbald]:
[Chandranshu]:
[I] modified the code in src/cats/mysql.c to print the error
message returned by mysql_error(). Then, I compiled and ran the
code to see the most dubious error in my DBA career:
Error 2002 (HY000): Can't connect to local MySQL server through
socket
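Error 2002 means the client library found no socket at the path it expects;
some quick checks (paths are common defaults, so assumptions):

ps aux | grep mysqld                      # is the server running at all?
grep socket /etc/my.cnf                   # where is it configured to listen?
ls -l /tmp/mysql.sock /var/lib/mysql/mysql.sock 2>/dev/null
# if client and server disagree, set socket under [client] in my.cnf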
Craig Ringer cr...@postnewspapers.com.au writes:
[A] AFAIK all it _needs_ for restore is:
- File name
- [st_size] File size in bytes (for restore verification)
- md5sum
neither md5sum nor filename are
Kelly, Brian brian.ke...@uwsp.edu writes:
I've seen lots of posts regarding the decoding of Lstat data. I'm
currently using a few functions to decode the lstat data and make
it human readable. These functions have been tested and are known
to work with mysql 5.0.70
hi, I spent quite a bit
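For the curious, the LStat column is a list of space-separated fields, each
an integer written in base 64 with the standard alphabet and no padding. A
decoding sketch in Perl; the field order is an assumption taken from
Bacula's encode_stat() and should be verified against the source for your
version:

#!/usr/bin/perl -w
use strict;

my @alpha = ('A' .. 'Z', 'a' .. 'z', '0' .. '9', '+', '/');
my %val;
@val{@alpha} = 0 .. 63;

# assumed field order, per encode_stat() of that era
my @names = qw(st_dev st_ino st_mode st_nlink st_uid st_gid st_rdev
               st_size st_blksize st_blocks st_atime st_mtime st_ctime
               LinkFI st_flags data_stream);

my $lstat = shift or die "usage: $0 'LSTAT-STRING'\n";
my @fields = split ' ', $lstat;
for my $i (0 .. $#fields) {
    my $neg = $fields[$i] =~ s/^-//;   # negative numbers carry a sign char
    my $n = 0;
    $n = $n * 64 + $val{$_} for split //, $fields[$i];
    $n = -$n if $neg;
    printf "%-11s %s\n", $names[$i] || "field$i", $n;
}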
John Drescher dresche...@gmail.com writes:
(I'm dreading the switch to 3.0... I seriously doubt I'll be able
to do it in 24 hours, and I don't like the thought of a day without
backups. but we'll see.)
BTW, you do not have to upgrade clients, so only the director and SDs
need to be
(sorry about the topic drift, Craig -- this will be the last message
from me in this sub-thread :-)
James Harper james.har...@bendigoit.com.au writes:
oh, sure -- I'm only worried about the database schema update. I
currently have 350M rows in File (90 days retention -- I'd like to
increase
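The slow part of the 2.x to 3.0 upgrade is the catalog schema conversion
over a File table that size; rehearsing on a copy gives a real number first
(a sketch; update_mysql_tables ships with Bacula but must be pointed at the
copy, e.g. by editing its database name):

mysql -e 'CREATE DATABASE bacula_test'
mysqldump bacula | mysql bacula_test
time ./update_mysql_tables        # against bacula_test, then read off the time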
Chandranshu . chandran...@gmail.com writes:
30,865 files inserted into the tree and marked for extraction.
The number of files displayed in the table is correct at
37,840. What is not clear to me is why it is saying that only
30,865 files are inserted into the tree and marked for
Daniel Bareiro daniel-lis...@gmx.net writes:
Kjetil Torgrim Homme wrote:
Where the client can have a /space/log and/or /space/backup and
the symlink can be in some subdirectory of them. With this
declaration of File, how could I force backup of symlinks?
if you have GNU find, try
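The suggestion presumably continues along these lines: have find list the
symlinks explicitly, then feed that list to the FileSet (a sketch):

find /space/log /space/backup -type l > /etc/bacula/symlinks.txt
# then in the FileSet:  File = "</etc/bacula/symlinks.txt"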
James Chamberlain jam...@exa.com writes:
On Tue, 14 Apr 2009, Martin Simmons wrote:
I'm not sure if it will work, but you could try using symbolic
links from one file system to the volumes on the other file
system(s)?
Thanks for the suggestion, and I considered it, but I don't think
it'll
Roy Sigurd Karlsbakk r...@karlsbakk.net writes:
PS: I can't find any bacula logs on the SD (on OpenIndiana). I have
tried to recompile with --with-logdir=/opt/bacula/var/log and
created that directory. Since bacula-sd currently runs as root (I
know, don't say it), the permissions should
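Independent of logdir, the daemons can be told to write a trace file from
bconsole, which is often the quickest way to see what the SD is doing
(storage name hypothetical; the .trace file lands in the SD's working
directory):

echo "setdebug level=100 trace=1 storage=File" | bconsole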
hymie! hy...@lactose.homelinux.net writes:
JobDefs {
  Name = DefaultJob
  Pool = IncPool
  Full Backup Pool = FPool
  Differential Backup Pool = DiffPool
  Incremental Backup Pool = IncPool
}
Job {
  Name = GreatPlains-Backup
  JobDefs = DefaultJob
}
Job {
  Name = BackupCatalog
hymie! hy...@lactose.homelinux.net writes:
Kjetil Torgrim Homme writes:
you specified Recycle, so Bacula will reuse old volumes.
Thank you! I thought Maximum Volume Jobs = 1 would prevent that,
but once the job is purged, there are no jobs in that volume, so
it becomes fair game.
BTW
Laxansh K. Adesara laxa...@anatec.com writes:
After some headaches configuring bacula-dir I have another
problem. When I try to test bacula-dir I get the following error:
Bacula-dir: Fatal error : mysql.c: unable to connect to MySQL server.
Database=bacula User=bacula
MySQL is not the problem
Richard Marnau mar...@catering-kiel.de writes:
I'm stuck with a small but nasty problem. We need to restore a file
whose location is known, but the exact filename is not clear. So I
need to browse old backups, but the File table has been pruned (for an
unknown reason).
okay, you'll have
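The answer presumably continues with bscan, which rebuilds pruned file
records straight from the volumes (volume and device names hypothetical):

bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V Vol0042 /dev/nst0
# -s stores file records in the catalog, -m updates the media record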
Eduardo Júnior ihtrau...@gmail.com writes:
what are the reasons for a new FULL job to be created? I mean, after a
scheduled FULL job, another is started automatically.
I ask this because last night a new FULL job for a client was
started without any change in the FileSet or a manual
Sean Clark smcl...@tamu.edu writes:
In my experience, slow performance like this (i.e. 5MB/s on at least
100Mb ethernet) usually turns out to be the client's fault.
Compression seems to be a very common culprit. Try switching
compression off completely and see how much of a difference that
[...] others can find it useful.
--
Kjetil T. Homme
Redpill Linpro AS - Changing the game
#! /usr/bin/perl -w
# bacula-du 1.0
# Written by Kjetil Torgrim Homme kjetil.ho...@redpill-linpro.com
# Released under GPLv3 or the same terms as Bacula itself
sub usage {
    print <<_END_;
Usage: $0 [OPTIONS] -j
Nils Juergens nils+bac...@muon.de writes:
for me (with LTO-3) increasing Maximum block size has had quite the
impact on performance.
Sadly, with the new setting I had trouble reading my tapes, so I
switched back to the old configuration.
in Linux, you can configure your tape drive to accept
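The operational pieces usually look like this (device name hypothetical):
put the drive in variable-block mode so the SD's block size rules, and
remember tapes must be read with the block size they were written with:

mt -f /dev/nst0 setblk 0                        # variable block mode
dd if=/dev/nst0 of=/dev/null bs=256k count=1    # probe a tape's actual block size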