On Tue, 27 Sep 2011, René Moser wrote:
We are currently using a proprietary backup solution, and we are evaluating
Bacula to replace it.
We have some 100 hosts to back up. The current workflow is like:
1. backup server backs up host files during working hours to a disk volume
on backup
On Tue, 27 Sep 2011, René Moser wrote:
On Tue, 2011-09-27 at 16:51 +0100, Gavin McCullagh wrote:
2. It is apparently not possible to back up all 100 hosts straight to
tape, so you write them to disk first. Why is that? It sounds like
you're manually doing Spooling, which Bacula has
Hi,
On Tue, 27 Sep 2011, René Moser wrote:
Okay, it is not _really_ during the day, it is more like backups over night
to disk, and the backup to tape should be finished in the morning. But this
is just a detail. We have not had (yet) really (big) problems with
consistency. But as you say these
Hi,
On Wed, 21 Sep 2011, Dan Langille wrote:
Do other people do this?
I do Copy to tape. Be aware: your tape drive must be on the same SD as
your disk storage. Copy and migrate jobs can involve only one SD. You
cannot copy/migrate from one SD to another.
Gah. I had forgotten that,
Hi,
On Thu, 22 Sep 2011, Ben Walton wrote:
Not sure if this would work for you (especially as it requires some up
front choices) but this is what I'm looking at here. I did some small
tests with it and it seemed to work fine.
Each host has a dedicated storage pool and storage device (I'm
Hi,
we've been happily using Bacula now for a few years, with a couple of big
disk arrays as the storage devices/media. We use something along the lines
of what the manual documents for fully automated disk-based backups. This
has worked well and is really quick and convenient for doing
On Wed, 21 Sep 2011, Erik P. Olsen wrote:
I am running a very smooth bacula 5.0.3 on Fedora 14. Everything seems to be
OK
except restores which are incredibly slow. How can I debug it to see what's
wrong?
Start off by telling us what part of the restore process is slow:
- building the
Hi,
On Wed, 21 Sep 2011, Gavin McCullagh wrote:
On Wed, 21 Sep 2011, Erik P. Olsen wrote:
I am running a very smooth bacula 5.0.3 on Fedora 14. Everything seems to
be OK
except restores which are incredibly slow. How can I debug it to see what's
wrong?
Start off by telling us
Hi,
On Wed, 21 Sep 2011, Marcio Merlone wrote:
Em 21-09-2011 09:29, Gavin McCullagh escreveu:
On Wed, 21 Sep 2011, Erik P. Olsen wrote:
I am running a very smooth bacula 5.0.3 on Fedora 14. Everything seems to
be OK
except restores which are incredibly slow. How can I debug it to see
Hi,
On Wed, 21 Sep 2011, Marcio Merlone wrote:
Em 21-09-2011 10:29, Gavin McCullagh escreveu:
A 150GB database. That's pretty large. How many clients have you?
About a dozen clients - some inactive but still with valid backup -
File Retention = 6 months, Job Retention = 1 year. Most
-mysql-to-postgres-85413/
Gavin
--
Gavin McCullagh
Senior System Administrator
IT Services
Griffith College
South Circular Road
Dublin 8
Ireland
Tel: +353 1 4163365
http://www.gcd.ie
http://www.gcd.ie/brochure.pdf
http://www.gcd.ie/opendays
http://www.gcd.ie/ebrochure
This E-mail is from Griffith
Hi,
On Thu, 01 Sep 2011, Eric Pratt wrote:
The incremental will run, but it shouldn't back anything up if nothing
changed since the last time the job ran. When it runs, it looks to
see if anything changed and if not, exits with OK. There is no
redundancy there. Check the byte count of the
On Tue, 06 Sep 2011, Dan Schaefer wrote:
This certainly will fix my problem. All my backups are in fact
disk-based and I have a separate process backing up to tapes (not
Bacula, atm). I like the idea of the virtual full option, because my
full backups usually take longer than I would like.
Hi,
On Tue, 02 Aug 2011, Annette Jaekel wrote:
That's amazing, because I searched for reasons for the bad performance over
the last few days, both for backup (18 hours for a full of 300 GB with 5
million files)
It seems unlikely that adding an index would noticeably improve backup
performance, as that
Hi,
I've reported before that we've had very slow restores, due to the time taken
to build the file tree for the console -- which takes 20+ minutes.
http://adsm.org/lists/html/Bacula-users/2010-11/msg0.html
I've been looking at this again, particularly at a bug that had some
Hi,
On Mon, 01 Aug 2011, Gavin McCullagh wrote:
It looks to me like I should remove the indexes PathId and FilenameID,
leaving the PRIMARY, JobId and jobid_index. Does that seem correct?
Oooh. I think I'm close to shouting Eureka. Having dropped those indexes
(indices?), the same restore
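For anyone repeating this, the drops might look like the sketch below in MySQL (index names taken from the message above; they vary between Bacula schema versions, so check with SHOW INDEX first):

```sql
-- List the current indexes on the File table before touching anything
SHOW INDEX FROM File;

-- Drop the two single-column indexes named above (names may differ locally)
ALTER TABLE File DROP INDEX PathId, DROP INDEX FilenameId;
```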
Hi,
On Tue, 12 Jul 2011, Gavin McCullagh wrote:
I stopped the file daemon, started it again and its memory usage fell quite
dramatically. Initially it looked like this:
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
12768 root 20 0 66512 1616 632 S 0.0
Hi,
I was looking at the output of htop today and noticed that the bacula-fd
process, although entirely idle, was the highest-memory process on the
server.
The output was:
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
3949 root 20 0 219M 63296 936 S 0.0 1.5
Hi,
On Wed, 29 Jun 2011, Martin Simmons wrote:
These duplicates in the File table are probably generated by the batch insert
code. Since each pair has the same FileIndex, it should be safe to elide
them.
Fair enough, thanks.
so perhaps this is safe enough. Does anyone know how
Hi,
On Tue, 28 Jun 2011, Roy Sigurd Karlsbakk wrote:
We're using Bacula for some backups with three SDs so far, and I wonder
if it's possible somehow to allow for client / laptop backups in a good
manner. As far as I can see, this will need to either be
client-initiated, client saying I'm
Hi,
On Tue, 28 Jun 2011, Gavin McCullagh wrote:
We do this in a somewhat manual way. The client computer has a bconsole
configured. When the laptop owner wants a backup, they start bconsole,
then type run<ret>, yes<ret>, quit<ret>. They then get a
confirmation email when the backup completes
On Sat, 25 Jun 2011, Gavin McCullagh wrote:
It seems you need to drop --quick which is implied in --skip-opt. The
resulting command that I'm working with at the moment is:
mysqldump -t -n -c --compatible=postgresql --skip-quote-names --quick \
--lock-tables --add-drop-table --add
Hi,
I've been experimenting with a migration from MySQL to Postgres.
One problem I've come across is that there are a handful of duplicate files
in the Filename table
mysql> select count(*) as filecount, Filename.Name from Filename GROUP BY
Filename.Name ORDER BY filecount DESC LIMIT 30;
:00/g' \
| sed -e 's/\\0//' bacula-backup.sql`
That being said, this is untested so far -- I haven't actually done the
migration -- but this is the plan thus far :-)
Feedback/corrections welcome...
Gavin
On Fri, 24 Jun 2011, ted wrote:
I have an issue when trying a restore of a large data set consisting of
about 6.5TB and 35,000,000 files: the console takes an extremely long time
to build the directory tree, over an hour. After the tree is built I typed
mark * and this command ran for about 18
Hi,
On Fri, 24 Jun 2011, Craig Van Tassle wrote:
I'm trying to increase the network speed between my FD and my SD.
My Cacti graphs are showing that my network usage averages 60Mb/s and
the connections between all the clients are on 1Gb links. I would think
I would be able to pull more
Hi,
I'm looking to work out a query or script which quotes me the time since
the last live backup (ie I don't want to include virtual full backups) for
each of our configured jobs.
Not all of our backups are scheduled, some are triggered manually, so I
need to produce a list of how long it has
Hi,
On Mon, 20 Jun 2011, Gavin McCullagh wrote:
I'm looking to work out a query or script which quotes me the time since
the last live backup (ie I don't want to include virtual full backups) for
each of our configured jobs.
Not all of our backups are scheduled, some are triggered manually
Hi,
On Mon, 20 Jun 2011, Kevin O'Connor wrote:
Bacula Server (DIR, SD) - Firewall/NAT - Server to be backed up (FD)
The FD is accessible from anywhere, but the DIR/SD is not (NAT/FW).
When I start the backup, the Director connects to the FD without a problem,
but then when the Director
Hi,
On Mon, 20 Jun 2011, Kevin O'Connor wrote:
I understand how it's supposed to work (FD to SD), that's why I'm asking if
there was some cryptic config option or something I was missing to make it
do the reverse. It exists as Active/Passive in FTP, so it's not too crazy
to think something
On Mon, 20 Jun 2011, Gavin McCullagh wrote:
This seems quite close
SELECT Job.Name, MAX(Job.RealEndTime)
FROM Job
WHERE Job.Type='B' AND Job.JobStatus='T'
GROUP BY Job.Name;
but we have clients who manually trigger their own incremental (eg once per
week
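One way to spot those manually-triggered jobs is to break the same query out per level (Job.Level in the standard Bacula catalog schema); a sketch:

```sql
-- Last successful backup per job name and level
SELECT Job.Name, Job.Level, MAX(Job.RealEndTime) AS LastRun
FROM Job
WHERE Job.Type = 'B' AND Job.JobStatus = 'T'
GROUP BY Job.Name, Job.Level;
```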
Hi James,
On Sat, 11 Jun 2011, James Harper wrote:
Not really directly bacula related, but one of the concerns I have with
switching to IPv6 for LAN scale traffic is the performance of the
various offload features in the network adapters. Did you do any
throughput testing?
I haven't yet had
Hi,
On Sat, 11 Jun 2011, Kevin Keane wrote:
The two problems I had:
- The default setting for bacula was wrong. It would only listen on IPv4
but not IPv6 unless you explicitly added a
DIRAddresses/FDAddresses/SDAddresses section to the respective config
files.
This is still true,
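For reference, the kind of explicit section being described, here for the file daemon (a sketch using the default port; note that mixing ipv4 and ipv6 entries hit a binding bug on 5.0.x, reported later in these threads as bug #1719):

```
FDAddresses = {
  ipv6 = { addr = ::; port = 9102; }
}
```

With only the IPv6 wildcard address, a Linux dual-stack socket will usually accept IPv4 connections as well, unless net.ipv6.bindv6only is set.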
Hi,
just a short note to say that I've been testing Bacula's IPv6 support of
late and have generally found it to be good.
We have:
- consoles connecting to the director over IPv6
- director talking to SD and FD over IPv6
- FD talking to SD over IPv6
As you might expect, if you configure
On Fri, 10 Jun 2011, Gavin McCullagh wrote:
just a short note to say that I've been testing Bacula's IPv6 support of
late and have generally found it to be good.
PS: well done to all the developers involved :-)
Gavin
it here:
http://www.adsm.org/lists/html/Bacula-users/2010-10/msg00130.html
and we pin-pointed the exact query which was taking the time:
http://www.adsm.org/lists/html/Bacula-users/2010-11/msg00187.html
Gavin
Hi,
On Wed, 08 Jun 2011, Phil Stracchino wrote:
The very first thing I would do would be upgrade to MySQL 5.5.[current]
(5.5.13, right now) if you're not already using 5.5, making sure it's
properly configured (hint: look at the new configuration directive
innodb_buffer_pool_instances),
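A minimal my.cnf fragment of the sort being suggested (the sizes here are illustrative assumptions; innodb_buffer_pool_size should be tuned to the RAM available on the catalog server):

```
[mysqld]
# Give InnoDB most of the RAM on a dedicated catalog server
innodb_buffer_pool_size      = 4G
# New in MySQL 5.5: split the pool to reduce mutex contention
innodb_buffer_pool_instances = 4
```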
Which Bacula version do you have? Perhaps it's an index issue.
Bacula package for Ubuntu v5.0.1-1ubuntu1
Gavin
to reproduce
this behaviour.
Perhaps a bad query which does not use an index.
Sounds plausible. Does your issue occur with bconsole?
Gavin
On Wed, 01 Jun 2011, Mauro Colorio wrote:
what's wrong?
FileSet {
  Name = my Set
  Include {
    Options {
      Ignore Case = yes;
    }
    File = E:/Shared/
  }
  Include {
    Options {
      Exclude = yes;
      Ignore Case = yes;
    }
    File = E:/Shared/foo
    File
Hi,
On Thu, 05 May 2011, Jeremy Maes wrote:
All you need to do to suppress those messages is add all the
junction points on the given windows system to the exclude list of
your filesystem.
I appreciate that I can do that, but it seems like having to create a mass
of wilddir entries in the
Hi,
like many people I imagine, we get various warnings from the Bacula
daemons, particularly the file daemons. There are some which seem like
it would be nice to simply suppress them and some which are severe and I'd
actually like more attention drawn to them.
To give an example, on a
Hi,
On Thu, 14 Apr 2011, ruslan usifov wrote:
I'm new in bacula world so have a question:
If I do an incremental backup and, for example, a very big file changes by
only a few bytes, what does Bacula do: send the whole file, or only the
changed part of the file?
It backs up the whole file each time a single byte or more
On Wed, 13 Apr 2011, Pablo Marques wrote:
But I would still have the problem that I need a device tied up backing
up each client. The problem I am facing is that I need to backup lots of
slow clients, and I need to come up with something so I can back them up
all at the _same_ time on one or
config,
which would be much simpler :-)
Gavin
Hi,
On Mon, 11 Apr 2011, Peter Hoskin wrote:
I'm using bacula to do backups of some remote servers, over the Internet
encapsulated in OpenVPN (just to make sure things are encrypted and kept off
public address space).
The bacula-fd is in Montreal Canada with 100mbit Ethernet. I also have
Hi,
On Mon, 11 Apr 2011, Hugo Letemplier wrote:
I imagine a command like status job jobid=
I presume you've looked at status client=
It does much of what you want (current job duration, data transferred,
rate, num files, current file), but without the predictive information
Hi,
On Fri, 01 Apr 2011, Gavin McCullagh wrote:
The above config will make bacula-fd listen on IPv6 only on my Ubuntu
instance (which makes sense of course). However, if you add the IPv4 in:
FDAddresses = {
  ipv4 = { addr = 0.0.0.0; port = 9102; }
  ipv6 = { addr
I've reported a bug based on this conversation, if anyone would like to add
anything to it
http://bugs.bacula.org/view.php?id=1719
Gavin
Hi,
On Fri, 01 Apr 2011, Gavin McCullagh wrote:
When I was starting out, I came across a post somewhere on the lists that
said it was a good idea with disk volumes to create a separate storage
device for each client as it would avoid concurrency issues, etc.
I went a little further
Hi,
On Fri, 01 Apr 2011, Gavin McCullagh wrote:
Ah, I just assumed this problem was with Windows only. I never thought to
check a linux host. I can confirm this on Ubuntu with packaged bacula-fd
5.0.2.
It looks like the same problem applies to bacula-sd and bacula-dir.
Gavin
Hi,
we have bacula in use with about 35 different FDs now. It works
brilliantly for us¹.
Our routine for each server is based on the example automated backup
routine described in the manual. We're not using tapes which some people
may frown on a little, we're using disk-based volumes. The
on my Ubuntu
instance (which makes sense of course). However, if you add the IPv4 in:
FDAddresses = {
  ipv4 = { addr = 0.0.0.0; port = 9102; }
  ipv6 = { addr = :: }
}
It no longer binds to any IPv6 addresses.
Gavin
Hi,
On Mon, 14 Mar 2011, Josh Fisher wrote:
On 3/12/2011 5:15 AM, Gavin McCullagh wrote:
On Fri, 11 Mar 2011, Joseph L. Casale wrote:
The bacula-fd daemon (according to netstat -na) doesn't appear to be
listening on the IPv6 address.
Force it to listen on whatever address/port you
won't change, the
v4 address will. I can give this laptop a reservation, but we're heading
in the direction of setting up bacula backups for 20-30 laptops, so I'd
prefer not to give them all reservations.
Gavin
Hi,
I have a Windows 7 computer running bacula-fd v5.0.3. The computer has
an IPv6 address (as do the bacula director and storage daemon).
The bacula-fd daemon (according to netstat -na) doesn't appear to be
listening on the IPv6 address. This doesn't kill us in that eventually
things
On Mon, 10 Jan 2011, Oliver Hoffmann wrote:
I did some tests with different gzip levels and with no compression at
all. It makes a difference but not as expected. Without compression I
still have a rate of only 11346.1 KB/s. Anything else I should try?
Are you sure the cross-over connection
On Sun, 12 Dec 2010, Guillaume Valdenaire wrote:
Here is attached a feature request for that wonderful Bacula.
Thanks in advance
Item 1: Implement functionality that permits logging which files were
restored during a restore job (especially when using the Bweb
Hi,
On Sun, 28 Nov 2010, bopfi68 wrote:
I have backup jobs for a machine which has a lot of jpeg and zip files,
and I don't want to compress them.
So is it possible, in the same fileset, to configure some file types
(zip, jpg, ...) not to be compressed?
This prior discussion is pretty relevant...
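The usual approach is two Options blocks in one Include: Bacula applies the first Options resource whose wildcards match a file, so already-compressed types can fall through to a block without the compression keyword. A sketch (paths and patterns are placeholders):

```
Include {
  # Matched first: compressed formats, backed up without further compression
  Options {
    wildfile = "*.zip"
    wildfile = "*.jpg"
    wildfile = "*.jpeg"
  }
  # Everything else gets gzip
  Options {
    compression = GZIP
  }
  File = /home
}
```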
Hi,
On Fri, 12 Nov 2010, Bob Hetzel wrote:
I'm starting to think the issue might be linked to some kernels or linux
distros. I have two bacula servers here. One system is a year and a half
old (12 GB RAM), with a File table having approx 40 million File
records. That system has had
Hi,
On Fri, 12 Nov 2010, Mikael Fridh wrote:
On Thu, Nov 11, 2010 at 3:47 PM, Gavin McCullagh gavin.mccull...@gcd.ie
wrote:
# Time: 10 14:24:49
# u...@host: bacula[bacula] @ localhost []
# Query_time: 1139.657646 Lock_time: 0.000471 Rows_sent: 4263403
Rows_examined: 50351037
didn't spot that, thanks.
This kind of thing is why it makes more sense to switch to postgres
when mysql databases get large.
I see. Well, as long as I'm not missing some simple tweak to make MySQL
run quicker I guess I'll plan to do that.
Gavin
On Mon, 08 Nov 2010, Gavin McCullagh wrote:
We seem to have the correct indexes on the file table. I've run optimize
table
and it still takes 14 minutes to build the tree on one of our bigger clients.
We have 51 million entries in the file table.
I thought I should give some more concrete
On Tue, 09 Nov 2010, Alan Brown wrote:
and it still takes 14 minutes to build the tree on one of our bigger clients.
We have 51 million entries in the file table.
Add individual indexes for Fileid, Jobid and Pathid
Postgres will work with the combined index for individual table
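In MySQL terms that advice comes out roughly as below (the index names here are made up for illustration; note that creating indexes on a 51-million-row File table will itself take a while and lock the table):

```sql
CREATE INDEX file_jobid_idx  ON File (JobId);
CREATE INDEX file_pathid_idx ON File (PathId);
CREATE INDEX file_fnid_idx   ON File (FilenameId);
```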
On Mon, 08 Nov 2010, Alan Brown wrote:
Mysql works well - if tuned, but tuning is a major undertaking when
things get large/busy and may take several iterations.
Some time back there was an issue with Bacula (v5?) which seemed to come
down to a particular query associated (I think) with
Hi Alan,
On Mon, 08 Nov 2010, Alan Brown wrote:
When we do restores, building the tree takes a considerable time now. I
haven't had a lot of time to look at it, but suspected it might be down to
this issue.
That's a classic symptom of not having the right indexes on the File table.
On Mon, 18 Oct 2010, Graham Keeling wrote:
On Sat, Oct 16, 2010 at 09:33:13AM +0200, Hugo Letemplier wrote:
Hi thanks a lot for your answers
I have retried with a new test scenario; it's clear now that deleting an
incremental is really dangerous.
But I think that a function that enables
Hi,
On Mon, 23 Aug 2010, Radosław Korzeniewski wrote:
2010/8/23 Marcio Merlone marcio.merl...@a1.ind.br
I was wondering... would be nice to have a maildir backup plugin, where
one could backup a maildir and have its contents indexed and restorable
by sender, subject, date, attachments,
Hi,
On Sat, 24 Jul 2010, John Drescher wrote:
On Sat, Jul 24, 2010 at 4:10 PM, Mister IT Guru misteritg...@gmx.com wrote:
What would be a method of restoring a remote linux based server to its
last full backup (or even its first!)? Is such a move possible?
I was thinking of a general
On Thu, 01 Jul 2010, Derek Harkness wrote:
I've seen a very significant slowdown in backup speed after enabling gzip
compression: 32MB/s (without gzip) vs 4MB/s (with gzip). The server I'm
backing up has lots of CPU 24x2.6ghz so the compression time shouldn't be
a huge factor. Is this normal for
Hi,
On Thu, 01 Jul 2010, Derek Harkness wrote:
Sorry, I misspoke in the original post. I'm backing up a server which
has 24x2.6GHz CPUs and is barely using any of them.
Sorry, on reflection, you were quite clear. I misread :-)
Gavin
Hi,
On Sat, 26 Jun 2010, Graham Sparks wrote:
26-Jun 18:01 bacula-dir JobId 0: Fatal error: Could not open Catalogue
MyCatalog, database bacula.
26-Jun 18:01 bacula-dir JobId 0: Fatal error: mysql.c:194 Unable to connect
to MySQL server.
Database=bacula User=bacula
MySQL connect failed
Hi,
I wonder is there a simple way to verify that Bacula's MySQL tables are as
expected by Bacula.
I was running Ubuntu Hardy on our main backup server but needed v3
(primarily for VSS). I took a cut of the Debian maintainer's git archive
and created and installed packages for v3.0.2. Now that
Hi,
On Sat, 12 Jun 2010, James Harper wrote:
You really need a windows live CD (eg bartpe) or else you won't get all
your NTFS ACL's and other stuff restored properly. Also, certain
versions of mkntfs are broken wrt making a partition bootable.
That's a real shame. Knoppix et al are so much
Hi,
On Mon, 14 Jun 2010, Bob Hetzel wrote:
I've never been able to get the bare-metal restore to work doing a restore
starting from a Live CD. I last tried it over a year ago and people
responded a while later saying they got it to work that way and they would
update a web page with said
Hi,
we have a windows server 2003 server here and realised that its disk setup
is in such a bad way that we want to reinstall it. Never having done one,
we thought it would be nice to try a bare metal restore of the machine from
the backups (to spare disks). Both c:\ and d:\ drives are entirely
Hi,
On Thu, 13 May 2010, mario parreño wrote:
But I prefer not mounting a unit; I prefer accessing the NAS directly,
because I have many folders for different accounts on the NAS,
and then I would have to mount as many folders in Debian as the NAS
has.
Bacula's backups (as far
Hi,
we have a number of servers (about 30) being backed up by Bacula now. This
is working reasonably well. We use disk-based volumes and a scheme based
on the example in the Automated Disk Backup chapter of the manual. For
each backup we create a directory on the filesystem and a storage
Hi,
On Wed, 12 May 2010, Kevin Keane wrote:
Because Windows Backup goes down to the sector or block level, it can
back up basically anything that is on your hard disk - Exchange, SQL
Server, virtual machines, registries, active directory, junction points,
case-sensitive files, files with
Hi,
On Tue, 11 May 2010, martinofmoscow wrote:
I kicked off a [400GB, full] backup at 1am on Saturday and it completed 11
hours later at mid-day.
At the risk of getting the sums wrong and looking silly:
400GB in 11 hours
~ 36GB per hour
~ 600MB per minute
~ 10MB per second
~ 82Mbit/sec
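The sums above can be re-derived in one go with awk (decimal units assumed, i.e. 400 GB = 4e11 bytes):

```shell
# Re-derive the throughput figures above (decimal units assumed)
awk 'BEGIN {
  bytes = 400e9; secs = 11 * 3600
  printf "%.0f MB/s\n",   bytes / secs / 1e6        # -> 10 MB/s
  printf "%.0f Mbit/s\n", bytes / secs / 1e6 * 8    # -> 81 Mbit/s
}'
```

which lands within rounding error of the ~82Mbit/sec figure above.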
on the client but it also reduces the
bandwidth required. Assuming the CPU can keep up, it may well relieve the
network bottleneck.
Gavin McCullagh-2 wrote:
Interestingly, I just noticed that the Bacula dir was able to reel off a
backup of the fd on localhost of about 750Mb in under 10
On Thu, 06 May 2010, Steve Polyack wrote:
These are certainly good points. My thought is just that breaking out
of bconsole to perform these tasks can be cumbersome.
Personally, I feel that it's something that I'd use a lot, simply to
prevent me from constantly breaking in and
On Thu, 06 May 2010, Carlo Filippetto wrote:
Hi all,
I have a problem with VSS, I receive this:
Warning: VSS was not initialized properly. VSS support is
disabled. ERR=Overlapped I/O operation is in progress.
One cause of VSS problems like this is running 32-bit Bacula-fd on a 64-bit
Hi,
On Fri, 02 Apr 2010, Avarca, Anthony wrote:
I'm using bacula to backup desktop and laptop clients. The desktops work
well with a schedule, but laptops are another story. Does anyone have a
strategy to backup laptops? Is it possible to have the user trigger a
backup?
It's not the
Hi,
On Fri, 02 Apr 2010, Avi Rozen wrote:
Gavin McCullagh wrote:
1. Start bconsole
2. Type run<return>
3. Type exit<return>
4. The messages are emailed to the user so they know when the job is
finished.
Assuming the laptop is running a Debian based Linux distro: can't
Hi,
On Thu, 01 Apr 2010, XZed wrote:
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/full-backup-built-from-incrementals-104809/
As it remains unanswered, I just wanted to have confirmation that the
Virtual Backup feature of Bacula is the right one I'm
On Tue, 23 Mar 2010, Bruce McCarthy wrote:
Basically I would like to get the server-side apps (Director Storage
Daemon) running on either Windows or Mac OS X. There are binaries available
for Windows up to 3.0.2 but everything I've read leads me to shy away from
this unsupported
Hi,
On Tue, 16 Mar 2010, Bob Cousins wrote:
- Data staged to disk on the way to tape -- allowing backups to spool
faster than tapes and when tape drives are full.
I'm pretty sure this already exists.
http://www.bacula.org/en/dev-manual/Data_Spooling.html
- Explicit capturing of
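Enabling the spooling mentioned above is a one-line job directive plus a spool area on the SD; a sketch (names, directory and size are placeholder values):

```
# bacula-dir.conf: stage this job's data on the SD's spool disk before tape
Job {
  Name = "host1-to-tape"
  ...
  Spool Data = yes
}

# bacula-sd.conf, in the tape Device resource:
#   Spool Directory = /var/spool/bacula
#   Maximum Spool Size = 200G
```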
Hi,
On Mar 11, 2010, at 5:12, vishesh bacula-fo...@backupcentral.com
wrote:
I am new to bacula and implemented it on my RHEL 5.2 server.
Everything is working great and now I want to use another Linux
system's disk partition as backup storage. Should I use NFS or similar
network
Hi,
On Fri, 26 Feb 2010, Ralf Gross wrote:
I'm still thinking if it would be possible to use bacula for backing
up xxx TB of data, instead of a more expensive solution with LAN-less
backups and snapshots.
Problem is the time window and bandwidth.
VirtualFull backups may be a partial solution
On Sat, 27 Feb 2010, Gavin McCullagh wrote:
VirtualFull backups may be a partial solution to your problem. We have a
laptop which we get very short backup time windows for -- never enough time
to run a full backup. Instead, we run incrementals (takes about 20% of the
time) and then run
Hi,
On Wed, 24 Feb 2010, Administrator wrote:
Is there any other solution, e.g. could I close a volume using a
script and then change the disk? Would bacula silently create a new
volume on the new disk if the last used volume was marked full before
the disk is removed?
Could you just use
Hi,
On Wed, 10 Feb 2010, Ken Barclay wrote:
$ add /public/share/120 SALES DIVISION/*.*
No files marked.
$ add /public/share/120 SALES DIVISION/
No files marked.
The command prompt you get from the bacula console is a little bit
primitive. Off the top of my head, I'd suggest you try
On Wed, 10 Feb 2010, Ken Barclay wrote:
Thanks Gavin, but
$ mark /public/share/120 SALES DIVISION
No files marked.
$ add /public/share/120 SALES DIVISION/
No files marked.
Sorry, I wasn't thinking. I'd suggest you do:
cd public
cd share
mark 120 SALES DIVISION
Hi,
On Mon, 08 Feb 2010, Khalid Pasha wrote:
I am new to Bacula. I am planning to install the Bacula server on a VM
and the storage (for keeping backup data) on another server on which we have
mapped LUNs from an EMC SAN. My question: is it possible to configure Bacula
in this way? If yes, please
Hi,
I'd just like to run something by you guys to see am I doing it right.
We have a senior staff member who exclusively works on a laptop and moves
around and travels a lot. A full backup takes 5+ hours due mainly to the
relatively slow disk and large amount of data. That's just not
On Wed, 27 Jan 2010, Dirk H. Schulz wrote:
Telnetting from external-fd to server-sd using the above mentionened FQDN
and the port of the storage daemon (telnet storage.server.sd 9103)
outputs exactly the same as telnetting internally to that port. Afaik,
that means: bacula-fd on the external
Hi,
as a relative bacula newbie myself I have a couple of suggestions.
On Tue, 26 Jan 2010, Cyril Lavier wrote:
Now that my exclude rule works perfectly (thank you guys), I'm just
beginning to see a problem.
Backups are made on a 100Mbit LAN.
But the actual speed of bacula's backup is about
Hi,
is there any facility of profiling a job in bacula. By that I mean, being
able to gather information on the time taken for parts of the backups.
I can see a job is taking a long time (on a senior staff member's laptop)
and I can look at the status and see that it's spending large amounts of