On Thu, Mar 01, 2012 at 08:42:35AM +0100, Jeremy Maes wrote:
Maybe use different priorities for your weekly full and your copy jobs,
and disallow the running of mixed priority jobs?
I think it could also be possible to define one client having a lower
priority than all of the others and use
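A minimal sketch of that approach, with invented job names (note that Allow Mixed Priority defaults to no, so jobs of different priorities already won't run together unless you enable it):

```
# Sketch only: names, pools and schedules are placeholders.
Job {
  Name = "WeeklyFull"
  Type = Backup
  Level = Full
  Priority = 10              # regular backups run first
}
Job {
  Name = "CopyToTape"
  Type = Copy
  Priority = 15              # held until all priority-10 jobs have finished
  # Allow Mixed Priority = no is the default
}
```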
On Wed, Feb 29, 2012 at 03:46:29PM -0700, Mahmudul Hasan wrote:
Hello Everyone,
I want to run a shell command before the backup job starts on the filer
(client) node, and one after the backup job is finished. Both of these
commands(may be a shell script) will run
on the client node.
The
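Bacula has directives for exactly this; both run on the client (FD) side. A minimal sketch, with hypothetical script paths:

```
Job {
  Name = "FilerBackup"                                  # placeholder
  ClientRunBeforeJob = "/usr/local/bin/pre_backup.sh"   # runs on the client before the backup
  ClientRunAfterJob  = "/usr/local/bin/post_backup.sh"  # runs on the client after the job ends
}
```

By default, if the before-script exits non-zero the job is not run.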
On Thu, Mar 01, 2012 at 01:28:00AM +0100, Tilman Schmidt wrote:
CentOS 6 based backup server with Bacula 5.0.0-9.el6 from the CentOS 6
base repository and an HP 8-slot LTO-3 autochanger, no barcode reader.
This evening's backup jobs are hanging,
On Thu, Mar 01, 2012 at 10:32:18AM +0100, Tilman Schmidt wrote:
Nothing since the last reboot except the normal Block limits:
[ts@backup ~]$ dmesg | tail
ADDRCONF(NETDEV_UP): eth0: link is not ready
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC:
Hi folks,
we're going to change our backup strategy from one size fits all to
a more contract proof approach. We have about 110 clients with a
total of 6TB of data per week (full backup once per week).
The policy should look something like this:
- weekly full backup to online storage (RAID6)
On Tue, Mar 13, 2012 at 01:15:36PM -0400, John Drescher wrote:
Not sure I follow, I'm on CentOS but isn't there only one Linux client?
I can't find any decent install procedures that don't come with other
software on top of bacula.
You will have to ask Centos how they packaged bacula.
Hi folks,
I'm running bacula 5.2.6 compiled from source. I'm wondering if adding
the line
compression = GZIP1
is considered a fileset change by bacula and would result in a full
backup of the client when an incremental should be scheduled. It'd be
great if somebody could shed some light on
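For context, the directive in question lives in the FileSet's Options block; a minimal sketch with invented names and paths:

```
FileSet {
  Name = "DefaultSet"          # placeholder
  Include {
    Options {
      signature = MD5
      compression = GZIP1      # the newly added line
    }
    File = /home
  }
}
```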
On Wed, Mar 14, 2012 at 03:38:29PM +0100, Uwe Schuerkamp wrote:
Hi folks,
I'm running bacula 5.2.6 compiled from source. I'm wondering if adding
the line
compression = GZIP1
Sorry to follow up on my own posting, just wanted to report that
introducing compression apparently doesn't
On Fri, Mar 09, 2012 at 03:49:40PM +0100, Holikar, Sachin (ext) wrote:
Thanks Uwe for the reply.
I request some explanation here.
You suggested :
1.Create a separate offsite pool and then after use, remove them as
necessary from the autochanger and set their volume status to
Hi folks,
I've been playing around with the new lzo compression feature on
centos / redhat 5 and I'm more or less blown away by its performance /
compression ratio tradeoff. I was wondering if an lzo-enabled windows
client is also in the works, and if that's the case, if there's
something I could
Hi folks,
last night a rather large Backup (about 5TB or so) crashed with the
following error message:
17-Mär 23:43 localhost-sd JobId 2246: Recycled volume Offline41 on device
LTO4 (/dev/nst0), all previous data lost.
17-Mär 23:43 localhost-sd JobId 2246: New volume Offline41 mounted on
Hi folks,
we recently changed our backup policy from tape- to disk-based, so I
created two pools based on separate filestorage devices (two separate
dirs) to hold the full and incremental backup volumes.
Today I tried a restore job using both incr. and full volumes, and
when bacula (5.2.6 on
On Thu, Mar 29, 2012 at 08:59:05AM -0400, Craig Van Tassle wrote:
I had this issue in the past as well. The solution was to change what
type of media. Here is an example out of my SD configuration file.
Device = Differential
Media Type = File-Diff
Hi Craig,
thanks much for your
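A fuller sketch of such a device, assuming a file-based setup with invented paths (each pool then gets the matching Media Type so Bacula selects the right device):

```
# bacula-sd.conf sketch; names and paths are examples only.
Device {
  Name = Differential
  Media Type = File-Diff
  Device Type = File
  Archive Device = /backup/diff
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```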
On Thu, Mar 29, 2012 at 09:41:40AM -0400, Phil Stracchino wrote:
I confess I don't see why you'd need to do that. But then, I keep all
my disk-based volumes in one place to start with, and have simply never
had a problem.
Hi Phil,
the reasons are mostly historical: I started doing
On Thu, Mar 29, 2012 at 10:40:07AM -0400, Phil Stracchino wrote:
I don't think you should need to do either of those. I have a total of
five pools - Full-Disk, Diff-Disk, Incr-Disk, and Full-Tape, plus a
scratch pool - but all of my disk volumes are in the same /spool/bacula
directory
Hi folks,
after the recent changes in my pool setup outlined on this list, we
now seem to have a problem with Bacula 5.2.6 failing to select the
next volume after the max use duration (1 day in our case) has been
reached. Backup jobs are scheduled correctly, but they seem to be
stuck waiting for
On Mon, Apr 02, 2012 at 08:20:17PM +0200, Dennis Hoppe wrote:
Hello,
is it possible to use chained copy jobs? For example I would like to
copy my full backups from local disk to usb disk and after that to an
nas storage.
Hi Dennis,
I see your definition below is lacking the SQLQuery for
On Thu, Apr 05, 2012 at 12:30:43PM +0200, Dennis Hoppe wrote:
Hello Uwe,
the Selection Type is defined in the following JobDefs. I read
somewhere that I have to use a Selection Type instead of
PoolUncopiedJobs, because it does not set a priorjobid.
Hi Dennis,
sorry I must have overlooked
On Thu, Apr 05, 2012 at 12:47:02PM +0200, Dennis Hoppe wrote:
maybe you could send me your config files? Which distribution and
version are you using?
Hello Dennis,
I'm running Bacula 5.2.6 compiled from source on CentOS 6.2 (64bit)
with a MySQL backend.
Here's the relevant config for
On Mon, Apr 09, 2012 at 10:19:02PM +0300, Panagiotis Christias wrote:
On Wed, Mar 21, 2012 at 11:19 AM, Uwe Schuerkamp
uwe.schuerk...@nionex.net wrote:
Hi folks,
I've been playing around with the new lzo compression feature on
centos / redhat 5 and I'm more or less blown away by its
On Fri, May 04, 2012 at 09:51:51AM -0300, Luis H. Forchesatto wrote:
Hi
I'm receiving the following message when trying to run a job:
JobId Level Name Status
==
2498 Full
Hello Luis,
looks like bacula has a permission problem and cannot read or write to
the new usb device, you might want to check permission settings on the
directory /media/usb-backup/bacula/ when the drive is mounted:
On Fri, May 04, 2012 at 10:36:34AM -0300, Luis H. Forchesatto wrote:
04-May
Hi folks,
I've been running copy jobs for the last couple of months to copy disk
backups to tape with good results. However running only one copy job
at the same time results in long copy queues that run into the start
of the evening incrementals, so I was wondering how to increase the
number of
Hi folks,
recently we've been seeing more and more problems with bacula-fd
messages in dmesg about a page allocation failure.
Platform is centos 6.2 64 bit, Version 5.2.6 compiled from Source
using the stock distro gcc.
We're using MariaDB 5.x as the db backend, here are some stats about
the
Hi folks,
this is my 2nd attempt to configure bacula 5.2.6 to run more than one
copy job at a time, and for the life of me I cannot find the error in
my config. Parallel backup jobs (both full and incr.) work like a
charm, but for some reason bacula refuses to run more than one copy
job from disk
Hi folks,
while we're at it, I was wondering what's happening when bacula copies
an on-disk (software-compressed) job to tape? Will it decompress the
data or will it simply transfer the copy to tape as-is?
All the best,
Uwe
On Mon, Jun 04, 2012 at 09:59:53AM -0400, John Drescher wrote:
Don't disk volumes have just about the same restrictions as a tape
volume? Meaning you can not load more than 1 disk volume into the same
device at a time. You also can not read or write to different parts of
the same volume at
On Mon, Jun 04, 2012 at 05:32:16PM -0400, Phil Stracchino wrote:
to keep the LTO4 streaming almost continuously anyway. What I do notice
is that Bacula sits for some time - many minutes - after each job ends
doing nothing but batch-writing attributes into the DB.
That's the reason I
On Tue, Jun 05, 2012 at 09:21:26AM -0400, Clark, Patricia A. wrote:
Yes, all of the drives are defined. I'm not sure what you mean by set
your pools to use the drives appropriately. I'm using an autochanger and
not placing any limits on the drives or the changer. I posted the pool
On Tue, Jun 05, 2012 at 02:37:56PM -0700, Steve O'Brien wrote:
I had a fully working 5.0.1 installation with my Quantum Scalar i40 library
that has 2 drives, after upgrading to 5.2.3 my backups started failing, I
then upgraded to 5.2.6 hoping that something had been fixed no such luck.
On Fri, Jun 08, 2012 at 10:28:03AM +0200, Julien Cochennec wrote:
Hi,
New to this list and bacula newbie, here almost everything works great,
backup around 50 clients, except one thing.
I followed this example
On Wed, Jun 06, 2012 at 09:50:49PM -0400, Fred Parks wrote:
I don't see anything on quotas being turned on in /etc/fstab and the free
inodes on the file systems is way high.
Have you tried the update command from within bconsole to create /
update the pool and volume definitions? What's the
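In bconsole that would be the update command, e.g. (menu entries vary slightly by version):

```
*update
Update choice:
     1: Volume parameters
     2: Pool from resource
     3: Slots from autochanger
Choose catalog item to update (1-3): 2
```

"Pool from resource" re-reads the Pool resource from bacula-dir.conf into the catalog.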
On Fri, Jun 08, 2012 at 01:43:52PM +0200, Julien Cochennec wrote:
Hello Uwe,
I don't have this element anywhere; where should I put it?
My director has only include links, see below.
I saw many posts about this parameter but it appears sometimes in
device, sometimes in storage, sometimes in
Hi folks,
I have a very slow / long running backup job that starts up on 17:00
Friday and usually takes about 30h to complete. Later on that same
Friday, some more full backups are scheduled around 21:00 which take
off fine at first so at 21:00, I have six jobs running in parallel
onto the same
On Wed, Jun 13, 2012 at 12:13:19PM +0200, M. Müller wrote:
Hi,
sometimes the restore seems to work and ends with Status=OK, but it
always dies. If Status is OK, then files are restored, and hopefully
all are restored.
The message in the log file is always: bacula-fd: Bacula interrupted
On Wed, Jun 13, 2012 at 03:34:16AM -0700, sbooth wrote:
I did the volume cleanup and had just around the number of volumes needed (or
I thought). Then it started to work the way I expected in that it recycled
volumes that jobs had expired (7 days). But then one day someone put a 32Gb
of
On Tue, Jun 12, 2012 at 03:31:12PM -0400, Clark, Patricia A. wrote:
I have several large file systems (1TB) where I want to break them up to get
smaller backup streams in parallel to increase the throughput to tape. My
fileset directive is below. I want everything in /home, but divided by
On Wed, Jun 13, 2012 at 10:37:45PM +0200, Jean-François Leroux wrote:
wait_timeout=691200
interactive_timeout=691200
To my.cnf.
I also added HearbeatInterval = 1 min
to bacula-sd.conf and bacula-fd.conf (on the vserver that had problems).
But still no go. Backup runs for about 15
Hi folks,
I was wondering how bacula decides whether a job has already been
copied to the next pool when using the PoolUncopiedJobs selection
method. Can someone shed any light on this?
All the best,
Uwe
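For reference, the PoolUncopiedJobs selection boils down to a catalog query along these lines (simplified sketch; the actual query in Bacula's source also filters on job bytes and orders by start time). A job counts as "copied" once another job of type B or C references it via PriorJobId:

```
SELECT DISTINCT Job.JobId
  FROM Job, Pool
 WHERE Pool.Name = 'SourcePool'          -- placeholder pool name
   AND Pool.PoolId = Job.PoolId
   AND Job.Type = 'B'                    -- backup jobs only
   AND Job.JobStatus IN ('T','W')        -- terminated OK / with warnings
   AND Job.JobId NOT IN
       (SELECT PriorJobId FROM Job
         WHERE Type IN ('B','C') AND PriorJobId != 0);
```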
On Thu, Jun 14, 2012 at 09:41:21PM +0200, Georges wrote:
On 14/06/2012 13:00, Uwe Schuerkamp wrote:
Hi folks,
I was wondering how bacula decides whether a job has already been
copied to the next pool when using the PoolUncopiedJobs selection
method. Can someone shed any light
On Fri, Jun 15, 2012 at 10:39:45AM +0200, Tilman Schmidt wrote:
On 13.06.2012 12:13, M. Müller wrote:
sometimes the restore seems to work and ends with Status=OK, but it
always dies. If Status is OK, then files are restored, and hopefully
all are restored.
The messages in the log
On Fri, Jun 15, 2012 at 11:11:26AM +0200, Gandalf Corvotempesta wrote:
On Fri, 15/06/2012 at 04:57 -0400, John Drescher wrote:
bextract or bscan entire database + normal restore
So, to restore from a tape i'll have to rescan the whole library
and then restore it as usual?
On Thu, Jun 14, 2012 at 11:25:03PM +0200, Yougo wrote:
Is it possible to check the free space ratio on the storage daemon
and fail a job if required, instead of also failing the volumes and
waiting for a mount… and thus avoid a lot of successive errors that
could cause confusion for the
Hi folks,
in our current bacula setup (5.2.6 on CentOS 6), we run a load of full
backups starting on Friday evening which all go to the same online
disk file. The disk volume is currently 2.9T in size, so as a
consequence pruning / recycling the volume takes about six hours which
is a bit long
On Fri, Jun 15, 2012 at 10:07:58AM -0400, Marty Frasier wrote:
I have a query in my query list to show uncopied jobs by pool name in
bconsole. Here it is:
:List Uncopied Jobs for Pool:
*Enter Source Pool Name :
SELECT DISTINCT
Hi,
I've noted the link to Current files on bacula.org still points to
the 5.2.6 tree. Is this intentional or simply an oversight?
Cheers, Uwe
On Thu, Jun 21, 2012 at 07:54:01PM -0400, Mike Hobbs wrote:
Hello, I was wondering if anyone has written a bacula query that you can
run to find out what clients haven't been backed up within a certain
time period? If so could you share? I'm not an SQL guy which is why I
am asking.
Hi folks,
I've been running copy jobs for a couple of months now and I'm
wondering why they vary so much in speed. Sometimes they zoom along
just fine around 60MB / sec, at other times transfer rates never go
beyond 5MB / sec. Only one copy job is running at a time.
We're copying from a disk
On Fri, Jun 22, 2012 at 11:22:32AM -0400, Phil Stracchino wrote:
That. Way too many people just run with the default config file that
the package installed. Or they go to the example configs and pick out
the large server configuration example and use that.
The problem is, that large
On Tue, Jul 03, 2012 at 03:06:48PM +0200, Gandalf Corvotempesta wrote:
I'm still having issue with a copy job.
I'm trying to copy a job using this selection query:
Hello Gandalf,
can you post your copy job definition? Have you checked if a full
*client* backup is started after that message
On Tue, Jul 03, 2012 at 09:08:02PM +0100, Keith wrote:
Hi,
Our Bacula director keeps locking up and so far I've been unable to
identify exactly why it's happening. Basically everything seems fine
until we run a scheduled Full backup then after a minute or so BAT
bconsole stop working.
On Thu, Jul 05, 2012 at 11:44:06AM +0200, Gandalf Corvotempesta wrote:
So, should I set schedule time for this job one minute less than
others ?
This could help, or simply define a completely separate infrastructure
for this job (Device, Storage and Pool(s)).
Cheers, Uwe
On Thu, Jun 28, 2012 at 04:12:17PM +0200, Holikar, Sachin (ext) wrote:
Thanks, but the bextract command halts:
host:/opt/bacula/etc # bextract -V AAD371L3 /dev/nst0 /tmp/restore_bacula/
bextract: butil.c:287 Using device: /dev/nst0 for reading.
28-Jun 15:40 bextract JobId 0: No slot defined
On Tue, Jul 03, 2012 at 07:14:36PM +0200, Gandalf Corvotempesta wrote:
The full client backup is not run, as far as I can see.
A full backup will take many many hours and actually the copy jobs
ends in a couple of minutes.
If the copy job finishes successfully, I'd simply ignore the starting
On Thu, Jul 26, 2012 at 08:09:38PM -0600, NetCetera Lists wrote:
I am working on setting up a Copy job to a tape autoloader - and am
having an issue with the Next Pool directive requirement.
I am picking the jobs to copy - last Full for a number of existing
clients - using an SQL Query.
On Thu, Jul 19, 2012 at 08:35:42AM -0400, Clark, Patricia A. wrote:
I know that the quantity of files and the retention time are the big
factors in the size of the catalog database in bacula. What would
be a good calculation to use to ensure a healthy amount of space for
the database? I
On Thu, Aug 02, 2012 at 10:32:04AM -0400, John Drescher wrote:
The manual does describe all of the steps necessary although maybe not
in one single place.
At minimum you need to set Max Concurrent Jobs for the director,
storage and possibly client resource of bacula-dir.conf. Also
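A minimal sketch of the resources involved (names and values are examples, not recommendations):

```
# bacula-dir.conf sketch
Director {
  Name = backup-dir                  # placeholder
  Maximum Concurrent Jobs = 20
}
Storage {
  Name = File1                       # placeholder
  Maximum Concurrent Jobs = 10
}
Client {
  Name = client1-fd                  # placeholder
  Maximum Concurrent Jobs = 4
}
```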
On Thu, Aug 02, 2012 at 11:26:44AM -0400, John Drescher wrote:
I agree it should be pretty simple. I was asking before I had to spend
30 minutes to 1 hour of work to figure it all out since I have never
written a nagios plugin yet.
Just use the check_log script as a template and check the
On Tue, Aug 07, 2012 at 02:26:51PM +0800, d tbsky wrote:
2012/8/5 Phil Stracchino ala...@metrocast.net
On 08/05/12 05:57, d tbsky wrote:
First of all note that 5.2.10 is a considerably more recent version than
2.0.0, and does not therefore fall under until 2.0.0. :) This should
answer
On Mon, Aug 06, 2012 at 10:09:31AM -0500, Rao, Uthra R. (GSFC-672.0)[ADNET
SYSTEMS INC] wrote:
I am currently on bacula 5.2.6 on a RHEL6 machine. When I had compiled
bacula 5.2.6 sometime back I had used the following switches but when I go
to bconsole and type commands and later if I want
On Tue, Aug 07, 2012 at 08:15:45AM -0500, Rao, Uthra R. (GSFC-672.0)[ADNET
SYSTEMS INC] wrote:
Uwe,
Thank you for your email.
1) On my system I checked for readline-devel:
# rpm -qa | grep readline
readline-6.0-3.el6.x86_64
compat-readline5-5.2-17.1.el6.x86_64
So, should I install
On Tue, Aug 07, 2012 at 12:49:42PM -0500, Rao, Uthra R. (GSFC-672.0)[ADNET
SYSTEMS INC] wrote:
Uwe,
I installed #yum install readline-devel and then ran ./configure as follows:
Hello,
if you installed using
#yum install readline-devel
literally all you did was paste a comment into
On Wed, Aug 08, 2012 at 08:38:05AM -0500, Rao, Uthra R. (GSFC-672.0)[ADNET
SYSTEMS INC] wrote:
Uwe,
I installed yum install readline-devel
Then did:
# rpm -q readline-devel
readline-devel-6.0-4.el6.x86_64
Then ran ./configure without these two switches --disable-conio
On Thu, Aug 09, 2012 at 09:35:57PM +, bacula-...@dflc.ch wrote:
Dear all,
I'm proud to announce you that Bacula-Web 5.2.10 is available for download
The full release note is available in the download section
http://bacula-web.org/download.html
Best regards
Davide
Hey Davide,
On Thu, Aug 09, 2012 at 04:58:32PM +0100, Gary Stainburn wrote:
Hi folks.
Google has buried me once again as this seems to be a recurring question.
What is the best method of starting a restore without an interactive
interface. Basically, I want to be able to start a restore of the last
Hi folks,
for some time now, I've been getting random apparently spurious
backup errors like the one below from several servers:
24-Aug 22:55 bacula-fd: clientXXX.2012-08-24_20.44.30_09 Fatal error:
Authorization key rejected by Storage daemon.
However the jobs seem to be running fine,
On Mon, Aug 27, 2012 at 12:29:31PM +0200, lst_ho...@kwsoft.de wrote:
Are you using tcp-wrappers?
http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg17096.html
No, we're not using tcp-wrappers.
All the best, Uwe
On Thu, Sep 06, 2012 at 06:10:16AM +0200, ganiuszka wrote:
It works for me. Did you try using a semicolon character to separate
the elementary commands?
Example:
ClientRunBeforeJob = /bin/bash -c 'echo aaa > /tmp/foo1.out; echo bbb
> /tmp/foo2.out'
Regards.
gani
For improved
Hi folks,
I updated one of our bacula servers to 5.2.11 today (CentOS 6.x,
compiled from source), but sadly the director crashes after a couple
of copy jobs which were due this morning. Any idea how to go about
debugging the issue?
The server has a dir-bactrace file, but it appears to be empty,
What kind of files are you backing up? I've seen speeds as low as 0.5
MB / sec on a more or less idle host which has a gazillion of small
image files on it. In fact I've needed to split up the backup into
several sub-jobs to prevent it from running more than 48h or so ;-)
Also, I've seen jobs
On Fri, Oct 19, 2012 at 06:34:12PM +0800, krishna pawankar wrote:
Hello There,
I have one query: can I run a script without making it a backup job?
I don't want to run any backup job. What I want is just to run a script at a
scheduled time in Bacula. I am not able to achieve it since I
On Fri, Oct 19, 2012 at 08:47:23PM +0800, krishna pawankar wrote:
Yes, but I need to run that script through bacula...
--- On Fri, 19/10/12, Uwe Schuerkamp uwe.schuerk...@nionex.net wrote:
You can also prepare a file with bacula console commands and pipe it
to bconsole from within
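That could look something like this (job name and paths are placeholders):

```shell
# Write the console commands to a file; a cron job or wrapper can then
# pipe the file into bconsole at the scheduled time.
cat > /tmp/bacula-cmds.txt <<'EOF'
run job=run-my-script level=Full yes
quit
EOF
# From cron you would then run something like:
#   bconsole -c /etc/bacula/bconsole.conf < /tmp/bacula-cmds.txt
cat /tmp/bacula-cmds.txt
```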
Hi folks,
sometimes we have a weird problem with bacula 5.2.12 (compiled from
source on CentOS 6) getting stuck trying to recycle an on-disk
volume.
The clients where this problem crops up have their own storages,
devices, pools and volumes (disk-based). We see a lot of messages in
the logs
On Thu, Oct 25, 2012 at 08:55:54AM -0700, warley wrote:
Hi guys!
Sometimes the mtx gives some errors, like this one:
root@server1:~# mtx -f /dev/sg2 status
Storage Changer /dev/sg2:1 Drives, 8 Slots ( 0 Import/Export )
Data Transfer Element 0:Empty
Storage Element 1:Full
Hi folks,
I'm having a problem understanding job priorities. We don't allow
mixed jobs to run concurrently, so based on that I've set up my
catalog backup at the lowest priority (25) while regular backups run
at 10, some at prio 15 to assure the catalog backup won't start before
all regular
On Tue, Nov 06, 2012 at 04:15:18PM +0100, lst_ho...@kwsoft.de wrote:
from my knowledge the jobs are created but put on hold as long as
other jobs with different priority are running. So it looks like the
catalog connections are established as part of the job creation. That
said you
On Thu, Nov 08, 2012 at 12:53:38PM +0100, Adrian Reyer wrote:
On Wed, Nov 07, 2012 at 12:53:43PM +0100, Uwe Schuerkamp wrote:
to reasonable levels (dumping the db through a gzip --fast would take
five hours or so while an rsync takes only roughly 30 minutes,
including lzop compression
Hi folks,
after a few rather uneventful weeks, all of a sudden some copy jobs
(no pattern discernible) have started to fail with messages like
these:
30-Nov 09:55 deniol186-dir JobId 51675: Copying using JobId=51200
Job=deniol2147.2012-11-27_22.05.57_09
30-Nov 09:55 deniol186-dir JobId 51675:
On Fri, Nov 30, 2012 at 11:56:23AM +0100, Uwe Schuerkamp wrote:
Hi folks,
after a few rather uneventful weeks, all of a sudden some copy jobs
(no pattern discernible) have started to fail with messages like
these:
30-Nov 09:55 deniol186-dir JobId 51675: Copying using JobId=51200
Job
Hi folks,
does a reload issued to bconsole also reload included config files?
Apparently in our setup (5.2.9 or .12), this does not seem to be the
case. Is there a signal I can send to the director process to make it
rescan an included config dir? We're using the statement
@|sh -c 'for f in
On Tue, Dec 11, 2012 at 10:53:23AM +, deepak@wipro.com wrote:
Hi guys,
I am a new user of bacula. It was working fine previously but suddenly it
stopped working with tape drives. I am not sure why it happens it may be due
to some internal network changes in organization.
Now...
I think your best bet would be to try the update volume command and
assign the correct slot for the volume that's being requested by
bacula.
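Something along these lines in bconsole (volume name and slot number are placeholders; exact argument syntax varies by version, and the interactive update menu works too):

```
*update volume=VOL001 slot=5
*mount storage=Autochanger
```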
email snipped for pw protection ;-)
--
NIONEX --- Ein Unternehmen der Bertelsmann SE Co. KGaA
On Tue, Dec 11, 2012 at 06:59:44AM -0800, ccspro wrote:
Yes reload also scans through the included config files from bacula-dir.conf
I use them heavily in my environment.
What a reload does not do is load any modifications of bacula-sd.conf or
bacula-fd.conf; you need to restart those
On Thu, Dec 13, 2012 at 11:43:41AM -0500, Boris Epstein wrote:
Hello all,
When I compile Bacula using
./configure --with-postgresql=dir
what's the directory I use? I have PostgreSQL 9 running on an OpenIndiana
machine and no setting I choose seems to work.
Any advice much
On Mon, Dec 17, 2012 at 07:28:38AM +, bacula-...@dflc.ch wrote:
Dear all,
I'm really happy to announce that a new version of Bacula-Web (5.2.11) is
available.
As this version contains major bug fixes and some new features too, I'd
suggest everybody upgrade to this version.
All
On Fri, Dec 14, 2012 at 12:55:56PM -0500, Boris Epstein wrote:
Uwe,
Thanks for replying. I am a bit new to OpenIndiana and the terminology may
have been a little different from the Linux one - but yes, I believe I
have. I've got a bunch of includes - that I know for sure.
Boris.
Hi
On Thu, Dec 06, 2012 at 03:47:17PM +0100, Uwe Schuerkamp wrote:
On Fri, Nov 30, 2012 at 11:56:23AM +0100, Uwe Schuerkamp wrote:
Hi folks,
after a few rather uneventful weeks, all of a sudden some copy jobs
(no pattern discernible) have started to fail with messages like
these:
30
On Tue, Dec 18, 2012 at 01:50:37PM +0200, Nasos Nikologiannis wrote:
I am planning an enterprise-level network backup solution with the
following requirements/restrictions:
-Local and remote servers with heterogenous operating systems
(Linux,Windows)
-Backup policy that dictates backup data
On Tue, Dec 18, 2012 at 02:47:48PM +0200, Nasos Nikologiannis wrote:
Yes, I forgot to mention that I am already using separate media types for
full, inc and diff pools.
As of data that I am expecting to backup, the size is approximately 80GB
including database dumps and a fileserver (approx.
Hi folks,
what's the black magic involved in getting bacula to enable readline
support when compiling from source on CentOS6? I have all the relevant
dev libs install it seems, still bacula fails to pick up readline
during the configure run.
I've tried --with-readline, --with-readline=/usr,
On Wed, Dec 19, 2012 at 02:51:55PM -0500, Joseph De Nicolo wrote:
and so on, and then when a new week starts it should recycle. Also I'm
backing up 3 different clients and I want them to use their own volumes but
when I make the volumes they all use the same one until it fills up... this
is
On Mon, Dec 24, 2012 at 12:02:21AM +0100, Radosław Korzeniewski wrote:
Hello,
2012/12/19 Uwe Schuerkamp uwe.schuerk...@nionex.net
Hi folks,
what's the black magic involved in getting bacula to enable readline
support when compiling from source on CentOS6? I have all the relevant
Ok, next question: Now that I have readline support enabled, how does
tab completion work in bconsole? Hitting TAB doesn't appear to
complete anything as far as I can tell ;-)
All the best,
Uwe
On Mon, Dec 24, 2012 at 06:45:51PM +0100, Tilman Schmidt wrote:
On 24.12.2012 10:51, Uwe Schuerkamp wrote:
As for my previous question on how to run a job using bconsole,
passing pool, storage and other params it works like this:
(echo run catalog=MyCatalog job=copy-single
storage
On Mon, Dec 24, 2012 at 06:52:52PM +0100, Tilman Schmidt wrote:
On 24.12.2012 10:53, Uwe Schuerkamp wrote:
Now that I have readline support enabled, how does
tab completion work in bconsole?
It doesn't. AFAIK support for it hasn't been implemented.
Ah ok, it may be available
On Mon, Dec 24, 2012 at 05:48:35PM -0500, K. M. Peterson wrote:
Hi Mark,
I have two jobs that I run periodically.
query_errvols.sql generates a list of volumes that were written by failed
jobs. If a job failed, or was cancelled, it gets returned by this program.
query_orphan_volumes.sql
On Wed, Jan 02, 2013 at 10:23:52AM +, Gary Stainburn wrote:
Dan,
The main cause of my confusion is that Bacula usually just creates a new
volume file when the current one is full or marked as (unless a volume has
already been recycled). This is why I only get the error elsewhere if
On Thu, Jan 03, 2013 at 03:33:17PM +0100, Radosław Korzeniewski wrote:
Hello,
2013/1/3 Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC] uthra.r@nasa.gov
I would like to know: if a tape in a pool has expired but the jobs on
those tapes are not deleted, that is, if the tape is not
Hi folks,
we're trying to move a client to new, separate mysql catalog. This
has worked mostly fine but when the job finishes, we get the following
error:
JobId 51080: Warning: Error updating job record. sql_update.c:202
Update failed: affected_rows=0 for UPDATE J\
ob SET
On Tue, Jan 08, 2013 at 10:45:33AM +0100, Jean-Louis Dupond wrote:
Hi,
I'm trying to register on http://bugs.bacula.org, everything seems to
work fine, but the system doesn't seem to send an email.
So i'm unable to create a new account to report some bugs/request.
Could some admin