Hi,
I'm getting started with bacula. I have a couple of minor backup jobs running
for a month or two. We have a windows webserver whose existing backups are
a bit rough so I thought I'd test out bacula's windows fd and get a little
further into bacula as I'm at it. This is the first FD which
Hi,
On Tue, 23 Jun 2009, Matthias Reif wrote:
Make sure your director and fd are the same major version, e.g. 3.0.1.
I had the same error when a v3 director tried to talk to a v2 fd.
Yep! Both you and John were spot on. I'm up and running now.
Thanks,
Gavin
Hi,
On Tue, 23 Jun 2009, Arno Lehmann wrote:
and welcome here! I hope you find all the advice you need here - and
I'm sure you'll be able to help others, too!
Thanks. I have so far :-)
Gavin
--
Hi,
in general, we use Ubuntu LTS (hardy heron) for our linux servers. We're
currently running the packaged version of Bacula on Hardy which is pretty
old (2.2.8). Even the latest version of Ubuntu (jaunty) is only running
2.4.4 and I saw Kern's recent email saying 2.4 is going to cease support
Hi,
we have two Ubuntu Hardy Heron (8.04) servers running the packaged versions
of bacula-dir/bacula-sd (the backup server) and bacula-fd (the client).
All versions are 2.2.8-5ubuntu7.2 (out of support, I know).
I set up a reasonably large (190GB) backup job recently. The estimation
says:
Hi James,
On Wed, 08 Jul 2009, James Harper wrote:
7200 + 9 * 75 = 7875 seconds = 2 hours, 11 minutes and 15 seconds. I
don't think that's a coincidence.
I'm inclined to agree. Thanks :-)
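That back-of-envelope sum is easy to sanity-check (a sketch; the 7200 s and 75 s figures are taken from James's post above):

```python
# Verify the timing arithmetic: a 7200 s baseline plus 9 gaps of 75 s each.
total_seconds = 7200 + 9 * 75
hours, remainder = divmod(total_seconds, 3600)
minutes, seconds = divmod(remainder, 60)
print(total_seconds, hours, minutes, seconds)  # 7875 2 11 15
```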
There are 3 TCP connections when a backup runs:
DIR-SD
DIR-FD
FD-SD
The FD-SD one moves a lot of
Hi,
On Wed, 08 Jul 2009, Gavin McCullagh wrote:
I've set a heartbeat interval of 60 seconds on the director and am running
the backup again to see what happens.
Actually, this didn't solve it. Despite the heartbeat packets visibly
(in tcpdump) going from director to FD, the connection
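For anyone landing on this thread: the directive in question is Heartbeat Interval, and as the follow-up suggests, setting it only on the director doesn't cover the idle FD-SD connection; the same directive also exists in the FD and SD resources. A minimal sketch (the daemon name is a placeholder):

```
FileDaemon {
  Name = client-fd           # placeholder name
  Heartbeat Interval = 60    # keepalives from the FD side, covering the FD-SD link
}
```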
Hi,
up until now, we've tended to keep backups in a fairly ad hoc manner.
People looking after a particular system have worked out their own way, be
it a proprietary backup tool, or a script of some sort. We've started
setting up bacula and I hope we'll be in a position to backup nearly every
Hi,
thanks for the response.
On Mon, 13 Jul 2009, Jon Schewe wrote:
The first thing I notice is that you're doing incremental backups every
other day. I'd really encourage you to do them every day if you can,
otherwise on that odd day you're going to be really unhappy when a disk
crashes and
Hi,
On Mon, 13 Jul 2009, Gavin McCullagh wrote:
I guess the time to restore is a function of the number of volumes which
must be consulted and the time seeking through each one. A single file
restore should always be a single volume but multiple files (or even all
files) could potentially
On Tue, 14 Jul 2009, Reynier Pérez Mira wrote:
Reynier Pérez Mira wrote:
Hi every:
I have a lot of Pools (one for each Client) and I want to move the
volumes stored in some of those Pools to another Pool for better
organization. For example I want to move all the volumes from
Hi,
On Sat, 18 Jul 2009, Bruno Friedmann wrote:
Some time ago, I've made some tests on a customer site.
They have plenty data that could be compressed ( a 75% ratio )
With GZIP (which equals gzip's default level 6) we lose hours on
compression only to end up at a 78% ratio
Hi,
On Sun, 19 Jul 2009, Bruno Friedmann wrote:
Most of the data are compressible ( exchange server storage, and Navision
Database ) at a 75% rate with gzip2
we get 78% with gzip6, but it easily doubles the time needed to obtain it. So
sometimes it doesn't help to push for heavier compression.
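The level being discussed is chosen per Options block in the FileSet: plain GZIP means level 6, and GZIP1..GZIP9 select lighter or heavier settings. A minimal sketch of the trade-off:

```
  Options {
    signature = MD5
    compression = GZIP1   # fastest level; plain GZIP is equivalent to GZIP6
  }
```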
Hi John
On Wed, 05 Aug 2009, John Drescher wrote:
I'm pretty sure I had a problem doing that. Off the top of my head there's
some complaint about a hello not getting responded to properly. Possibly
the v3 bacula-fd doesn't like talking to a v2 bacula-sd, I'm not sure.
I'd love to be
Hi,
On Wed, 05 Aug 2009, Shawn wrote:
Has anyone looked into compiling a 64-bit v2.4 package?
Would that resolve this? I'm also trying the hotfix mentioned in another
response, will see if that does the trick first.
If these posts are to be believed, it might.
Hi,
I'm not sure if this suggestion is well-known already, implemented already
or maybe just a plain stupid idea that wouldn't work. So, I'm going to
suggest it and don my flame-retardant suit. Feel free to flame or shoot
it down.
Suppose I do monthly full backups with differentials and
Hi,
wanting to keep things simple in my early days with Bacula I decided to use
a sqlite database. A few months later, I'm looking at growing things a
little more and starting to think sqlite might not have been the wisest
choice. The database is now 1.4GB in size and the combined size of the
Hi John,
On Fri, 21 Aug 2009, John Goerzen wrote:
Over at Debian, we received a bug report at
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=542810 regarding the
migration from sqlite2 to sqlite3. We are doing the process implied
by the make_catalog_backup command; namely:
Did you find a
Hi Kern,
many thanks for your reply.
On Thu, 03 Sep 2009, Kern Sibbald wrote:
This sounds to me like it would be resolved by our Base project (though
perhaps not as automatically as you would like). See the projects file for
details. This project is currently being implemented for the
Hi,
On Thu, 03 Sep 2009, Kern Sibbald wrote:
I suspect that all that is already possible with Bacula but may require a bit
of scripting. In short, you need support from a Bacula specialist, and we
don't do that on this list (bacula-devel).
Fair enough. I asked for suggested approaches on
job type that will create a reverse
incremental (or decremental) backup from two existing full backups.
Date: 05 October 2009
Origin: Griffith College Dublin. Some sponsorship available.
Contact: Gavin McCullagh gavin.mccull...@gcd.ie
Status:
What: The ability
Hi,
On Wed, 04 Nov 2009, Dennis Schneck wrote:
I want to back up a Windows 2003 server.
FileSet {
  Name = "Full Set WIN"
  Enable VSS = yes
  Include {
    Options {
      Ignore Case = yes
      signature = MD5
    }
    File = C:\\WINDOWS\\
  }
  Exclude {
    File = c:\\temp
    File
Hi,
On Tue, 10 Nov 2009, Jesper Krogh wrote:
wvoice wrote:
However, I'd like to be able to backup the backup data offsite. Right now,
my storage pool is on a file volume located in /data/backup/. I'm trying to
figure out the best way to do this. My usual mechanism is to use rsync for
my
On Wed, 25 Nov 2009, bac...@bertholino.com.br wrote:
Is the version 3 client functional with Windows 7 Professional?
I have installed a bacula-fd v3.0.3 on Windows 7 Professional and run some
backups against it successfully.
I wouldn't call it rigorous testing, but it seemed to work fine.
Hi,
On Mon, 30 Nov 2009, Paul Binkley wrote:
I hope someone can help. I have been using Bacula for a couple of months now
running 2.4 on the director and all client daemons, and 3.0 for the storage
daemon (my mistake). This has been working fine. The 3.0 file daemon isn't
compatible with the
is (as opposed to the total
amount of uncompressed data).
I realise a progress bar is a bit much to ask, but do people know of
sensible ways to estimate the time a backup has remaining?
Gavin
--
Gavin McCullagh
Senior System Administrator
IT Services
Griffith College Dublin
South Circular Road
Hi,
On Tue, 08 Dec 2009, Niklas Hagman wrote:
Thank you James for that answer. As I expected, two full backups seem to
need to exist in order to have two weeks' worth of restore points.
But what about this:
(F = full, I= incremental. Day 1 to day 13.)
F+I+I+I+I+I+I+I+I+I+I+I+I
Then
Hi,
On Mon, 07 Dec 2009, Alex Chekholko wrote:
On Thu, 3 Dec 2009 13:46:02 +
Gavin McCullagh gavin.mccull...@gcd.ie wrote:
I started a full backup last night of a tired old Windows-based Dell NAS.
It's very slow. The filesystem is full and super-fragmented. I also have
Hi,
On Tue, 08 Dec 2009, Niklas Hagman - FiberDirekt AB wrote:
Hi there Gavin. Thanks for the answer.
It seems like I have misunderstood something here.
You say that it only requires about 1.4GB more disk space to have
F+I+I+I+I+I+I+I+I+I+I+I+I+I+F2+I2+I2+I2+I2+I2+I2+I2+I2+I2+I2+I2+I2+I2
Hi,
On Wed, 09 Dec 2009, Gabriel - IP Guys wrote:
Is it possible to rate limit how fast a backup is sent to the SD? I have
three colocated boxes, and they all send data at about 100Mb/s! We're on
a 20 meg line, so you can imagine what happens to the rest of our
internet abilities when the
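For later readers: Bacula 2.x/3.x had no built-in throttle, and OS-level shaping (tc, trickle) was the usual workaround; newer releases (5.2 onwards, if I recall correctly) grew a per-job bandwidth directive. A sketch, with a hypothetical client name, worth checking against your version's manual:

```
Client {
  Name = colo1-fd                      # hypothetical client name
  Maximum Bandwidth Per Job = 2mb/s    # only in newer Bacula releases
}
```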
On Wed, 09 Dec 2009, Hayden Katzenellenbogen wrote:
I started an 800G restore. Which is half of my 1.7TB backup.
I ran iostat while it was restoring and was getting between 50 and 140
Meg/s. I would say the average is around 85Meg/s.
Is that 85MByte/sec across the network or from local
Hi,
we're doing a restore from bacula to a Windows NAS. We've found that the
fileset has a number of files in it which have trailing spaces, eg a
directory in the tree is called Agent . We also want to drop the drive
letter prefix off and restore to the exact location they came from. Moving
Hi,
we're restoring a large number of files backed up from an old Win2K-based
NAS onto a Windows Server 2008 system. We backup e:\group and e:\home.
We're finding that directories which were not hidden on the Win2K system,
when restored are hidden. This includes the e:\ prefix if we restore
Hi,
when I look at this section on the live site:
http://www.bacula.org/en/rel-manual/Restore_Command.html#SECTION002252000
the list of allowed separators displays fine under ISO-8859-1 but under
UTF-8 (the detected encoding), a strange symbol appears between ! and ;.
Hi,
On Mon, 14 Dec 2009, Kendall Shaw wrote:
I have a 300GB hard drive that I backup to a 600GB hard drive, but the
initial full backup runs out of space (it takes almost twice as much
space to hold the backup?).
Something sounds wrong there. Are you saying that a single volume for a
single
Hi,
On Tue, 15 Dec 2009, Kendall Shaw wrote:
Do you have a recommendation? I'm backing up my home computers. The
point of doing the full backup only once, was to avoid transferring the
same files over and over again every week. Maybe it's not worth avoiding
doing the full backup every week.
Hi,
On Wed, 16 Dec 2009, Arno Lehmann wrote:
It's possible this is a bug that has been fixed recently... I've seen
commit messages talking about improving file attribute handling on
windows. You might give the current development version a try. But
beware: Unless the developers release
Hi,
On Sun, 03 Jan 2010, Timo Neuvonen wrote:
3.0.3a is mentioned here:
http://www.mail-archive.com/bacula-de...@lists.sourceforge.net/msg05607.html
Hmm... it obviously fixes something restore-related.
Indeed. In our case, restored files were being incorrectly given the
hidden
Hi,
On Wed, 06 Jan 2010, Daniel Holtkamp wrote:
My company has requested this feature (copy jobs between multiple SDs)
and is willing to pay half of what Bacula Systems wants for
implementing. They asked for 10k€ and we are willing to pay 5k€.
We're also interested in this feature. If need
Hi,
we're running backups of a few Windows desktops with Bacula. On Windows
Vista and Windows 7, you tend to get a bunch of messages like this:
jm-fd JobId 2526: c:/Users/johnm/Documents/My Videos is a different
filesystem. Will not descend from c:/Users/johnm/ into
Hi,
many thanks for clearing this up for me.
On Wed, 13 Jan 2010, Josh Fisher wrote:
Windows calls these junction points. Windows implements directory
symlinks as junction points, the only difference being that rather than a
completely different filesystem being mounted at the junction
Hi,
On Thu, 14 Jan 2010, Dan Langille wrote:
Would it be possible for Bacula to give a more precise answer? The
message:
x is a different filesystem
Can you think of a better and concise message?
This is an example one I've seen:
fd-name JobId X:
Hi,
is there any facility of profiling a job in bacula. By that I mean, being
able to gather information on the time taken for parts of the backups.
I can see a job is taking a long time (on a senior staff member's laptop)
and I can look at the status and see that it's spending large amounts of
Hi,
this may be a very unreasonable request.
If you run a full backup, then add to the fileset, then try and run an
incremental, I can see why a full backup is triggered.
I was taken by surprise recently when I added an exclude to a fileset and
triggered a full backup. I guess the rule is any
Hi,
On Fri, 15 Jan 2010, Silver Salonen wrote:
I was taken by surprise recently when I added an exclude to a fileset and
triggered a full backup. I guess the rule is any changes to the fileset
will trigger a full backup but is that really necessary on an exclude?
There's an option
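The option being referred to is presumably Ignore FileSet Changes, which stops Bacula from upgrading the next job to a full when the FileSet's hash changes (at the cost of the catalog not tracking the edit). A minimal sketch with placeholder names:

```
FileSet {
  Name = "example-set"            # placeholder name
  Ignore FileSet Changes = yes    # don't force a full on FileSet edits
  Include {
    File = /home                  # placeholder path
  }
}
```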
Hi,
as a relative bacula newbie myself I have a couple of suggestions.
On Tue, 26 Jan 2010, Cyril Lavier wrote:
Now that my exclude rule works perfectly (thank you guys), I just begin
to see a problem.
Backups are made on a LAN 100Mbit.
But the actual speed of bacula's backup is about
On Wed, 27 Jan 2010, Dirk H. Schulz wrote:
Telnetting from external-fd to server-sd using the above mentioned FQDN
and the port of the storage daemon (telnet storage.server.sd 9103)
outputs exactly the same as telnetting internally to that port. Afaik,
that means: bacula-fd on the external
Hi,
I'd just like to run something by you guys to see am I doing it right.
We have a senior staff member who exclusively works on a laptop and moves
around and travels a lot. A full backup takes 5+ hours due mainly to the
relatively slow disk and large amount of data. That's just not
Hi,
On Mon, 08 Feb 2010, Khalid Pasha wrote:
I am new to Bacula, I am planning to install Bacula server on VM machine
and storage ( for keeping backup data ) on other server on which we have
mapped LUNs from EMC SAN, my question is it possible to configure Bacula
in this way, if yes please
Hi,
On Wed, 10 Feb 2010, Ken Barclay wrote:
$ add /public/share/120 SALES DIVISION/*.*
No files marked.
$ add /public/share/120 SALES DIVISION/
No files marked.
The command prompt you get from the bacula console is a little bit
primitive. Off the top of my head, I'd suggest you try
On Wed, 10 Feb 2010, Ken Barclay wrote:
Thanks Gavin, but
$ mark /public/share/120 SALES DIVISION
No files marked.
$ add /public/share/120 SALES DIVISION/
No files marked.
Sorry, I wasn't thinking. I'd suggest you do:
cd public
cd share
mark 120 SALES DIVISION
Hi,
On Wed, 24 Feb 2010, Administrator wrote:
Is there any other solution, e.g. could i close a volume using a
script and then change the disk? Would bacula create silently a new
volume on the new disk if the last used volume was marked full before
the disk is removed?
Could you just use
Hi,
On Fri, 26 Feb 2010, Ralf Gross wrote:
I'm still thinking if it would be possible to use bacula for backing
up xxx TB of data, instead of a more expensive solution with LAN-less
backups and snapshots.
Problem is the time window and bandwidth.
VirtualFull backups may be a partial solution
On Sat, 27 Feb 2010, Gavin McCullagh wrote:
VirtualFull backups may be a partial solution to your problem. We have a
laptop which we get very short backup time windows for -- never enough time
to run a full backup. Instead, we run incrementals (takes about 20% of the
time) and then run
Hi,
On Mar 11, 2010, at 5:12, vishesh bacula-fo...@backupcentral.com
wrote:
I am new to bacula and implemented it on my RHEl server 5.2.
Everything is working great and now i want to use my another linux
system disk partition as backup storage. Should i use NFS or similar
network
Hi,
On Tue, 16 Mar 2010, Bob Cousins wrote:
- Data staged to disk on the way to tape -- allowing backups to spool
faster than tapes and when tape drives are full.
I'm pretty sure this already exists.
http://www.bacula.org/en/dev-manual/Data_Spooling.html
- Explicit capturing of
On Tue, 23 Mar 2010, Bruce McCarthy wrote:
Basically I would like to get the server-side apps (Director Storage
Daemon) running on either Windows or Mac OS X. There are binaries available
for Windows up to 3.0.2 but everything I've read leads me to shy away from
this unsupported
Hi,
On Thu, 01 Apr 2010, XZed wrote:
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/full-backup-built-from-incrementals-104809/
As it remains unanswered, I just wanted confirmation that the Virtual
Backup feature of Bacula is the right one I'm
Hi,
On Fri, 02 Apr 2010, Avarca, Anthony wrote:
I'm using bacula to backup desktop and laptop clients. The desktops work
well with a schedule, but laptops are another story. Does anyone have a
strategy to backup laptops? Is it possible to have the user trigger a
backup?
It's not the
Hi,
On Fri, 02 Apr 2010, Avi Rozen wrote:
Gavin McCullagh wrote:
1. Start bconsole
2. Type 'run' <return>
3. Type 'exit' <return>
4. The messages are emailed to the user so they know when the job is
finished.
Assuming the laptop is running a Debian based Linux distro: can't
On Thu, 06 May 2010, Carlo Filippetto wrote:
Hi all,
I have a problem with VSS, I receive this:
Warning: VSS was not initialized properly. VSS support is
disabled. ERR=Overlapped I/O operation is in progress.
One cause of VSS problems like this is running 32-bit Bacula-fd on a 64-bit
On Thu, 06 May 2010, Steve Polyack wrote:
These are certainly good points. My thought is just that breaking out of
bconsole to perform these tasks can be cumbersome.
Personally, I feel that it's something that I'd use a lot, simply to
prevent me from constantly breaking in and
Hi,
On Tue, 11 May 2010, martinofmoscow wrote:
I kicked off a [400Gb, full] backup at 1am on Saturday and it completed 11
hours later at mid-day.
At the risk of getting the sums wrong and looking silly:
400GB in 11 hours
~ 36GB per hour
~ 600MB per minute
~ 10MB per second
~ 82Mbit/sec
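Those figures can be re-derived in a couple of lines (a sketch; using 1 GB = 1024 MB, which is what makes the result land at ~82 Mbit/s):

```python
# Re-derive the back-of-envelope transfer rates quoted above.
total_mb = 400 * 1024             # 400 GB expressed in MB
elapsed_s = 11 * 3600             # 11 hours in seconds
mb_per_s = total_mb / elapsed_s   # ~10.3 MB/s
mbit_per_s = mb_per_s * 8         # ~82.7 Mbit/s
print(round(mb_per_s, 1), round(mbit_per_s, 1))
```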
on the client but it also reduces the
bandwidth required. Assuming the CPU can keep up, it may well relieve the
network bottleneck.
Gavin McCullagh-2 wrote:
Interestingly, I just noticed that the Bacula dir was able to reel off a
backup of the fd on localhost of about 750Mb in under 10
Hi,
On Wed, 12 May 2010, Kevin Keane wrote:
Because Windows Backup goes down to the sector or block level, it can
back up basically anything that is on your hard disk - Exchange, SQL
Server, virtual machines, registries, active directory, junction points,
case-sensitive files, files with
Hi,
we have a number of servers (about 30) being backed up by Bacula now. This
is working reasonably well. We use disk-based volumes and a scheme based
on the example in the Automated Disk Backup chapter of the manual. For
each backup we create a directory on the filesystem and a storage
Hi,
On Thu, 13 May 2010, mario parreño wrote:
But I would prefer not to mount a share; I would rather access the NAS
directly, because I have many folders for different accounts on the NAS,
and I would otherwise have to mount as many folders in Debian as there are
on the NAS.
Bacula's backups (as far
Hi,
we have a windows server 2003 server here and realised that its disk setup
is in such a bad way that we want to reinstall it. Never having done one,
we thought it would be nice to try a bare metal restore of the machine from
the backups (to spare disks). Both c:\ and d:\ drives are entirely
Hi,
On Sat, 12 Jun 2010, James Harper wrote:
You really need a windows live CD (eg bartpe) or else you won't get all
your NTFS ACL's and other stuff restored properly. Also, certain
versions of mkntfs are broken wrt making a partition bootable.
That's a real shame. Knoppix et al are so much
Hi,
On Mon, 14 Jun 2010, Bob Hetzel wrote:
I've never been able to get the bare-metal restore to work doing a restore
starting from a Live CD. I last tried it over a year ago and people
responded a while later saying they got it to work that way and they would
update a web page with said
Hi,
I wonder is there a simple way to verify that Bacula's MySQL tables are as
expected by Bacula.
I was running Ubuntu Hardy on our main backup server but needed v3
(primarily for VSS). I took a cut of the Debian maintainer's git archive
and created and installed packages for v3.0.2. Now that
Hi,
On Sat, 26 Jun 2010, Graham Sparks wrote:
26-Jun 18:01 bacula-dir JobId 0: Fatal error: Could not open Catalogue
MyCatalog, database bacula.
26-Jun 18:01 bacula-dir JobId 0: Fatal error: mysql.c:194 Unable to connect
to MySQL server.
Database=bacula User=bacula
MySQL connect failed
On Thu, 01 Jul 2010, Derek Harkness wrote:
I've seen a very significant slow in backup speed by enabling gzip
compress, 32MB/s (without gzip) vs 4MB/s (with gzip). The server I'm
backing up has lots of CPU 24x2.6ghz so the compression time shouldn't be
a huge factor. Is this normal for
Hi,
On Thu, 01 Jul 2010, Derek Harkness wrote:
Sorry, I misspoke in the original post. I'm backing up a server which
has 24x2.6ghz cpus and is barely using any of them.
Sorry, on reflection, you were quite clear. I misread :-)
Gavin
Hi,
On Sat, 24 Jul 2010, John Drescher wrote:
On Sat, Jul 24, 2010 at 4:10 PM, Mister IT Guru misteritg...@gmx.com wrote:
What would be a method of restoring a remote Linux-based server to its
last full backup (or even its first!)? Is such a move possible?
I was thinking of a general
Hi,
On Mon, 23 Aug 2010, Radosław Korzeniewski wrote:
2010/8/23 Marcio Merlone marcio.merl...@a1.ind.br
I was wondering... would be nice to have a maildir backup plugin, where
one could backup a maildir and have its contents indexed and restorable
by sender, subject, date, attachments,
On Mon, 18 Oct 2010, Graham Keeling wrote:
On Sat, Oct 16, 2010 at 09:33:13AM +0200, Hugo Letemplier wrote:
Hi, thanks a lot for your answers.
I have retried with a new test scenario; it's clear now that deleting an
incremental is really dangerous.
But I think that a function that enables
On Mon, 08 Nov 2010, Alan Brown wrote:
Mysql works well - if tuned, but tuning is a major undertaking when
things get large/busy and may take several iterations.
Some time back there was an issue with Bacula (v5?) which seemed to come
down to a particular query associated (I think) with
Hi Alan,
On Mon, 08 Nov 2010, Alan Brown wrote:
When we do restores, building the tree takes a considerable time now. I
haven't had a lot of time to look at it, but suspected it might be down to
this issue.
That's a classic symptom of not having the right indexes on the File table.
On Tue, 09 Nov 2010, Alan Brown wrote:
and it still takes 14 minutes to build the tree on one of our bigger clients.
We have 51 million entries in the file table.
Add individual indexes for FileId, JobId and PathId
Postgres will work with the combined index for individual table
didn't spot that, thanks.
This kind of thing is why it makes more sense to switch to postgres
when mysql databases get large.
I see. Well, as long as I'm not missing some simple tweak to make MySQL
run quicker I guess I'll plan to do that.
Gavin
On Mon, 08 Nov 2010, Gavin McCullagh wrote:
We seem to have the correct indexes on the file table. I've run optimize
table
and it still takes 14 minutes to build the tree on one of our bigger clients.
We have 51 million entries in the file table.
I thought I should give some more concrete
Hi,
On Fri, 12 Nov 2010, Mikael Fridh wrote:
On Thu, Nov 11, 2010 at 3:47 PM, Gavin McCullagh gavin.mccull...@gcd.ie
wrote:
# Time: 10 14:24:49
# u...@host: bacula[bacula] @ localhost []
# Query_time: 1139.657646 Lock_time: 0.000471 Rows_sent: 4263403
Rows_examined: 50351037
Hi,
On Fri, 12 Nov 2010, Bob Hetzel wrote:
I'm starting to think the issue might be linked to some kernels or linux
distros. I have two bacula servers here. One system is a year and a half
old (12 GB RAM), with a File table holding approx 40 million File
records. That system has had
Hi,
On Sun, 28 Nov 2010, bopfi68 wrote:
I have backup jobs for a machine that has a lot of JPEG and ZIP files,
and I don't want to compress those.
Is it possible to mark some file types (zip, jpg, ...) as
not-to-be-compressed within the same FileSet?
This prior discussion is pretty relevant...
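The usual trick from that discussion: put two Options blocks in the Include, since the first block whose wild pattern matches a file wins. Files matching the already-compressed extensions hit the first block (no compression); everything else falls through to the second. A sketch with placeholder names and paths:

```
FileSet {
  Name = "mixed-data"        # placeholder name
  Include {
    Options {
      wild = "*.zip"         # already-compressed types: store as-is
      wild = "*.jpg"
      signature = MD5
    }
    Options {
      compression = GZIP     # everything else gets compressed
      signature = MD5
    }
    File = /data             # placeholder path
  }
}
```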
On Sun, 12 Dec 2010, Guillaume Valdenaire wrote:
Here is attached a feature request for that wonderful Bacula.
Thanks in advance
Item 1: Implement functionality that logs which files were
restored during a restore job (especially when using the Bweb
On Mon, 10 Jan 2011, Oliver Hoffmann wrote:
I did some tests with different gzip levels and with no compression at
all. It makes a difference but not as expected. Without compression I
still have a rate of only 11346.1 KB/s. Anything else I should try?
Are you sure the cross-over connection
Hi,
I have a Windows 7 computer running the bacula-fd v5.0.3. The computer has
an IPv6 address (as do the bacula director and storage daemon).
The bacula-fd daemon (according to netstat -na) doesn't appear to be
listening on the IPv6 address. This doesn't kill us in that eventually
things
won't change, the
v4 address will. I can give this laptop a reservation, but we're heading
in the direction of setting up bacula backups for 20-30 laptops, so I'd
prefer not to give them all reservations.
Gavin
Hi,
On Mon, 14 Mar 2011, Josh Fisher wrote:
On 3/12/2011 5:15 AM, Gavin McCullagh wrote:
On Fri, 11 Mar 2011, Joseph L. Casale wrote:
The bacula-fd daemon (according to netstat -na) doesn't appear to be
listening on the IPv6 address.
Force it to listen on whatever address/port you
Hi,
we have bacula in use with about 35 different FDs now. It works
brilliantly for us¹.
Our routine for each server is based on the example automated backup
routine described in the manual. We're not using tapes which some people
may frown on a little, we're using disk-based volumes. The
on my Ubuntu
instance (which makes sense of course). However, if you add the IPv4 in:
FDAddresses = {
ipv4 = { addr = 0.0.0.0; port = 9102; }
ipv6 = { addr = :: }
}
It no longer binds to any IPv6 addresses.
Gavin
Hi,
On Fri, 01 Apr 2011, Gavin McCullagh wrote:
Ah, I just assumed this problem was with Windows only. I never thought to
check a linux host. I can confirm this on Ubuntu with packaged bacula-fd
5.0.2.
It looks like the same problem applies to bacula-sd and bacula-dir.
Gavin
Hi,
On Fri, 01 Apr 2011, Gavin McCullagh wrote:
When I was starting out, I came across a post somewhere on the lists that
said it was a good idea with disk volumes to create a separate storage
device for each client as it would avoid concurrency issues, etc.
I went a little further
Hi,
On Fri, 01 Apr 2011, Gavin McCullagh wrote:
The above config will make bacula-fd listen on IPv6 only on my Ubuntu
instance (which makes sense of course). However, if you add the IPv4 in:
FDAddresses = {
ipv4 = { addr = 0.0.0.0; port = 9102; }
ipv6 = { addr
I've reported a bug based on this conversation, if anyone would like to add
anything to it
http://bugs.bacula.org/view.php?id=1719
Gavin
Hi,
On Mon, 11 Apr 2011, Peter Hoskin wrote:
I'm using bacula to do backups of some remote servers, over the Internet
encapsulated in OpenVPN (just to make sure things are encrypted and kept off
public address space).
The bacula-fd is in Montreal Canada with 100mbit Ethernet. I also have
Hi,
On Mon, 11 Apr 2011, Hugo Letemplier wrote:
I imagine a command like status job jobid=
I presume you've looked at status client=
It does much of what you want (current job duration, data transferred,
rate, num files, current file), but without the predictive information
On Wed, 13 Apr 2011, Pablo Marques wrote:
But I would still have the problem that I need a device tied up backing
up each client. The problem I am facing is that I need to backup lots of
slow clients, and I need to come up with something so I can back them up
all at the _same_ time on one or
config,
which would be much simpler :-)
Gavin