Re: [Bacula-users] Client / laptop backups

2011-06-28 Thread Sean Clark
On 06/28/2011 02:24 PM, Roy Sigurd Karlsbakk wrote:
 Hi all

 We're using Bacula for some backups with three SDs so far, and I wonder if 
 it's possible somehow to allow for client / laptop backups in a good manner. 
 As far as I can see, this will need to either be client-initiated, client 
 saying "I'm alive!" or something, or having a polling process running to 
 check if the client's online for a given period of time.

 Is something like this possible or in the works, or is Bacula intended only 
 for server backups?
The simplest way to handle this that I've found is to set up space for
the laptops to rsync to, and then run bacula's scheduled backups against
that (as a bonus, you then also have an immediately-readable copy of the
laptop's files if you need to suddenly recover some accidentally-deleted
file without needing to initiate a bacula restore).  We've got laptop
users here that we CAN'T seem to run full bacula backups on because they
never stay plugged in long enough to finish, so rsync is the only way we
can get full backups.

You could, alternatively, set up a way for clients to ssh into the
director with an account permitted to send a "run" command (with their
job name) to bconsole to initiate a backup manually.
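
To make that concrete, here's a rough sketch of what both approaches
could look like (hostnames, paths, and the job name below are made up):

# On the laptop, whenever it happens to be online: push a copy into the
# staging area that bacula later backs up on its normal schedule
rsync -a --delete /home/user/ backupserver:/srv/laptop-staging/user/

# On the director, for the client-triggered variant: a restricted
# account whose only purpose is to queue that laptop's job via bconsole
echo "run job=laptop-user-backup yes" | bconsole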



Re: [Bacula-users] Director Control Protocol

2011-06-08 Thread Sean Clark
On 06/07/2011 05:52 PM, Tim Gustafson wrote:
 Can't you spawn bconsole from the web application? That is what most
 of the other web apps do.
 [...]
 I'm hoping that there might be a protocol that cuts out all the screen 
 parsing and instead lets me just do something like:

 1. Connect to the bacula-dir daemon
 2. Authenticate
 3. Send a command like "show status client blah.foo.bar-fd"

 and have that return machine-parse-able status information, rather than 
 human-readable information.
You can skip the "parse the list of clients" step by just sending
"status client=blah.foo.bar-fd" directly to bconsole, but I personally
would also love to see more easily machine-parseable output - just
adding an "out=xml" or "out=json" or similar modifier to bconsole's
commands, returning the information in an easier-to-parse form instead
of the default human-readable form, would be extremely useful to me.

(I wrote a web-interface script that talks to bconsole.  Getting the
response isn't really that hard, but I do end up going through a series
of regexes to pick out the individual bits of information that I want
from the output, which is obviously kind of a pain.)
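
(For what it's worth, the non-interactive form I'm talking about is just
piping the command into bconsole - something like this, where the client
name and config path are only examples:

echo "status client=blah.foo.bar-fd" | bconsole -c /etc/bacula/bconsole.conf

and then running grep/regexes over whatever comes back.)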



[Bacula-users] Again trying to compile bacula-sd for an embedded platform...

2011-06-03 Thread Sean Clark
bsock.c hates me.

The configure script says it sees fcntl.h, and /usr/include/fcntl.h
declares posix_fadvise, but no matter what I've tried so far, I can't
get bsock.c to actually accept it:

Compiling bsock.c
bsock.c: In member function `bool BSOCK::despool(void (*)(ssize_t),
ssize_t)':
bsock.c:592: error: `posix_fadvise' undeclared (first use this function)
bsock.c:592: error: (Each undeclared identifier is reported only once
for each function it appears in.)


Any hints on how I can get past this and get it to compile?  I would
really like to get bacula-sd running natively on this thing much as I
have done (successfully, and pretty easily) on LaCie 2Big Network NAS
devices before.  Otherwise, I end up having to mount a share as CIFS on
some other box and have twice-as-horrible throughput on this person's
backups...
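
In case it helps anyone reproduce this, the standalone check I've been
poking at it with looks roughly like the following (just a guess at the
relevant factor: on glibc, posix_fadvise is only declared if
_XOPEN_SOURCE >= 600 or _POSIX_C_SOURCE >= 200112L, and an embedded libc
may not provide it at all):

cat > fadvise-test.c <<'EOF'
/* does this toolchain declare and link posix_fadvise at all? */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
int main(void)
{
    return posix_fadvise(0, 0, 0, POSIX_FADV_DONTNEED);
}
EOF
gcc -Wall fadvise-test.c -o fadvise-test && echo "posix_fadvise looks usable"

If that little test fails too, the problem is probably the platform's
libc rather than anything in bacula's sources.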



[Bacula-users] Bacula slow transfer / Compression latency / feature request

2011-05-31 Thread Sean Clark
On 05/30/2011 02:11 PM, reiserfs wrote:
 Hello, im new with Bacula scene, i have used the HP Dataprotector with a HP 
 Library fiber channel backup.

 With the Dataprotector i got 1gbps interfaces and switch and all jobs get 
 done very fast, with transfer like 50-80MB/s.

 Now im using Bacula with a DELL TL2000 iSCSI, and with my first experience i 
 got only 6MB/s transfer with 1gbps interfaces and switch.

 So what im missing

 Used to test:
 Bacula Director running on Slackware64 13.1
 Bacula Client: Windows 2003 Server
Turning on software gzip compression on the client is definitely a major
performance killer, unfortunately, so that would be my first guess as
well.  This looks like a good place to mention some testing I've done.

I've been doing some testing lately due to also being somewhat
aggravated at the apparently slow transfer rates I get during Bacula
backups, but it's starting to look like it's not really Bacula's fault
most of the time.  Usually the problem is just how fast the client can
read files off of the disk and send them.  The network (at least on Gb)
is not usually the problem, nor even database activity on the director
(attribute spooling will help if you DO have any problems with that).

Encryption and gzip compression by the client introduce major latency
that unavoidably slows down the transfer, and this isn't specifically a
bacula client issue.  Other things I have seen that cause major
slowdowns are antivirus software on Windows (particularly on-access
scanning) and active use of the computer while the backup is running.

Regarding compression, specifically, though - testing on my laptop here,
I tested just reading files from /usr and /home with tar, piping them
through pv to get the transfer rate (and then dumping them directly to
/dev/null).  I then repeated the tests with some different compression
schemes inserted, for example:

tar -cf - /usr | pv -b -r -a > /dev/null             (No Compression)
tar -cf - /usr | gzip -c | pv -b -r -a > /dev/null    (GZIP)
tar -cf - /usr | gzip -1 -c | pv -b -r -a > /dev/null (GZIP1)
tar -cf - /usr | lzop -c | pv -b -r -a > /dev/null    (LZO)

(and repeated for /home)

Here are my results:

/usr
No Compression: 5.58GB total data, Avg 13.1MB/s (436s to finish)
GZIP: 2.11GB Total data, Avg 2.97MB/s (727s to finish)
GZIP1: 2.36GB Total data, Avg 4.13MB/s (585s to finish)
LZO: 2.82GB Total Data, Avg 6.48MB/s (445s to finish)

/home (includes a lot of e.g. media files that are not very compressible)
No Compression: 91.56GB Total Data, 34.5MB/s Avg, (~2700s to finish)
GZIP: 77.1GB Total Data, 9.78MB/s Avg, (8072s to finish)
GZIP1: 77.6GB Total Data, 11.7MB/s Avg, (~6790s to finish)
LZO: 80.6GB Total Data, 28.3MB/s Avg, (~2900s to finish)

So, yes, if you have gzip compression turned on, you'll almost certainly
see a huge increase in speed if you turn it off (I believe most tape
drives can or will do compression in hardware, so you don't need to
pre-compress at the client). 

If you are backing up to disk as I am (or for some reason aren't doing
hardware compression on the tape drive), you can also get a small speed
increase by dropping the gzip compression down to the minimum
(compression=GZIP1 in the FileSet), which seems to compress almost as
well overall but induces less latency.
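
In FileSet terms that's just a one-word change to the Options block -
roughly like this (the fileset name and path are only examples):

FileSet {
  Name = "example-fullset"
  Include {
    Options {
      signature = MD5
      compression = GZIP1   # minimum gzip level; drop this line for no compression
    }
    File = /home
  }
}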

FEATURE REQUEST:
However, assuming my tests so far are representative, it looks like LZO
compression can get backup jobs transferred in almost the same amount of
time as no compression at all, while still substantially reducing the
amount of data transferred and stored (not as much as GZIP does, but
still a noteworthy amount).  Is it possible we could get an LZO option
(e.g. compression=LZO in the FileSet) added to bacula-fd?

tl;dr: Turn off compression until and unless an LZO compression option
is implemented, unless you are desperate for space on your backup media,
in which case you'll just have to cope with the slow backups.



Re: [Bacula-users] Bacula 5.0.3 Macintosh file daemon?

2011-05-27 Thread Sean Clark
On 05/27/2011 08:50 AM, Graham Keeling wrote:
 Hello,
 Does anybody know where I might be able to find a bacula-5.0.3 Mac file 
 daemon?
 Or an installer?
 Thanks.
I recommend MacPorts [1] for that.

port install bacula +client_only
port load bacula (as I recall)

[1] http://www.macports.org/install.php



Re: [Bacula-users] Speed of backups

2011-04-28 Thread Sean Clark
On 04/28/2011 02:06 PM, Jason Voorhees wrote:
 I tried to copy a 10 GB file between both servers (Bacula and
 Fileserver) with scp and I got a 48 MB/s speed transfer. Is this why
 my backups are always near to that speed?
Try it with "scp -c arcfour" - like compression, encryption introduces
enough latency to slow things down even on a fast system.  (The arcfour
encryption algorithm seems to be the fastest, lowest-latency one
available in openssh.)
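
For example, to compare the two on the same 10 GB test file (host and
path are just placeholders):

time scp /tmp/10gb-testfile user@fileserver:/tmp/             # default cipher
time scp -c arcfour /tmp/10gb-testfile user@fileserver:/tmp/  # arcfour

The difference between the two runs is usually obvious on a gigabit link.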



Re: [Bacula-users] Should I use data spooling when writing to nfs mounted storage?

2011-03-03 Thread Sean Clark
(Dangit, that last reply was supposed to go to the list, not just the
sender of the email...let me try this again)

On 03/03/2011 12:40 PM, Fabio Napoleoni - ZENIT wrote:
 Thank you for your analysis, after that I think that the problem is not the 
 nfs overhead, because the despooling phase (over nfs filsystem) has a 
 transfer rate of 7.3 MBps, so it's fine. Instead the first phase (bacula-fd 
 - bacula-sd) happens at 1.2 MBps which is very poor value I think.

 So the bottleneck should be the client configuration or something similar. 
 What I should check to improve performances?

 This is the fileset used for that backup

 FileSet {
   Name = "FileSystem Full"
   Include {
     Options {
       compression=GZIP # compress backup
[...]

In my experience, slow performance like this (i.e. <5MB/s on at least
100Mb ethernet) usually turns out to be the client's fault.  Compression
seems to be a very common culprit.  Try switching compression off
completely and see how much of a difference that makes.  The latency
introduced by waiting for compression - even pretty fast compression -
seems to substantially choke throughput down.
If bacula ends up with additional compression options besides gzip at
some point, this might help (LZO compression doesn't compress as well as
gzip, but seems to have a lot less overhead, for example), but in the
meantime, unless you're backing up over a very slow link or are
desperate for space on your backup media, you are better off without
the compression.

The other thing (which I'm guessing doesn't apply here) is antivirus
software that does on-access scanning, on Windows systems and some Macs.
I've seen that bog down backups as well.



Re: [Bacula-users] Should I use data spooling when writing to nfs mounted storage?

2011-03-03 Thread Sean Clark
On 03/03/2011 03:21 PM, John Drescher wrote:
 On Thu, Mar 3, 2011 at 4:11 PM, Fabio Napoleoni - ZENIT fa...@zenit.org 
 wrote:
[...]
 So the poor throughput is given by software compression. I don't know what 
 to choose in the speed vs space tradeoff. Ideally the best option could be 
 compression on the storage director (it has more powerful hardware than this 
 client) but I think that bacula currently doesn't support this feature.

 You could do that with filesystem compression. There are a few options
 for that. Although I admit none of them are in the mainline kernel.

 John
Well, depending on what you consider mainline.  I'm currently
experimenting with btrfs with filesystem compression.  As of 2.6.38,
this is available as both gzip/zlib and LZO.  I am, daringly enough,
experimenting with the 2.6.38 release candidates and LZO compression for
a storage daemon on USB external disk volumes.  It seems to work so far,
but I've only been using it for a week or so and haven't had a chance to
do any extensive checks.
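
For reference, the setup I'm experimenting with is nothing fancier than
this (device and mount point are examples; compress=lzo needs 2.6.38 or
later):

mkfs.btrfs /dev/sdb1
mount -o compress=lzo /dev/sdb1 /srv/bacula-volumes
# or "-o compress=zlib" for the gzip-style variant

with the storage daemon's Archive Device pointed at that mount.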

The one problem I see with filesystem-supplied compression is that
trying not to run out of disk space for backup volumes becomes more
difficult, since it's hard to predict exactly how much real space the
compressed data will take up (whereas if bacula is doing the
compression, you can just define e.g. eight 20GB volumes and stick them
on a 160GB drive).  This may or may not turn out to be a 'real' problem.

It does at least take the compression burden off of the client computer,
which removes that bottleneck.



Re: [Bacula-users] Should I use data spooling when writing to nfs mounted storage?

2011-03-03 Thread Sean Clark
On 03/03/2011 05:12 PM, John Drescher wrote:
 That was one of the choices. btrfs is still considered highly
 experimental at this point.

 John
Well, that's definitely true, though so far it's been more reliable for
me than one might expect.  (I did run into what appeared to be an odd
bug in it just yesterday, though, so it's not QUITE ready for everybody
to use yet - just eager nuts like myself.)

If I can find an available gigabit port to plug the experimental storage
daemon into I can do some real speed testing on the filesystem with bacula.



Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Sean Clark
On 01/20/2011 10:01 AM, John Drescher wrote:
 I've been tempted to experiment with BTRFS using LZO or standard zlib
 compression for storing the volumes and see how the performance compares
 to having bacula-fd do the compression before sending - I have a
 suspicion the former might be better..

 Doing the compression at the filesystem level is an idea I have wanted
 to try for several years. Hopefully one of the filesystems that
 support this becomes stable soon.

 John
(Oops, thanks - my reply was SUPPOSED to go to the list, not just to you
personally...)

To follow up, I think I WILL try out BTRFS with compression (with
client-side compression switched off) for some experimental backups and
see how it does.  Due to the way our backup system is set up
(continuously growing, with volumes stored on external drives supplied
by the offices who want to be on the backup system), the speed at which
backups can be done is becoming an issue, but we don't have enough space
to shut off compression and still have backups go back far enough.

I have been (bravely|foolhardily) using BTRFS as my primary filesystem
on my netbook, and on several of my other personal drives, with no
problems so far, and I'm confident it's at least stable enough to do
serious experimentation with.  Once I've got it running I'll report back
on how it works.



Re: [Bacula-users] Virtual Backups - Do we actually need full backups anymore?

2011-01-06 Thread Sean Clark
On 01/06/2011 11:24 AM, Mister IT Guru wrote:
 On 06/01/2011 17:16, Graham Keeling wrote:
 On Thu, Jan 06, 2011 at 05:02:47PM +, Mister IT Guru wrote:
 I've been trying to get my head around virtual full backups.

 [...]
 So, I would be very pleased if a VirtualFull also grabbed new files from the
 client.
 Thank you for pointing this out! So it doesn't grab new files from the 
 client first? Well, that's not the smartest! Hmm, I wonder - How would 
 you get a job to run after another job, rather than have bacula 
 decide via priorities?
To be fair - if it's grabbing actual files directly from the client,
it's no longer a virtual backup.  I got the impression
that the point was to generate a full backup without having to talk to
the client at all.

I think if you give the virtual full a lower priority than the
incremental (i.e. a higher Priority number, since lower numbers run
first), you can schedule both for the same day and have it always do the
incremental and then the virtual full in the correct order (I haven't
actually TRIED to do this myself, so I'm guessing).
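
Something along these lines is what I have in mind (completely untested
on my part, and the names are made up):

Job {
  Name = "client-incremental"
  Priority = 10                 # lower number, runs first
  # usual Client/FileSet/Storage/Pool/Schedule settings here
}

Job {
  Name = "client-virtualfull"
  Level = VirtualFull
  Priority = 15                 # higher number = lower priority, runs after
  # same Client/FileSet/Pool setup, scheduled for the same day
}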



[Bacula-users] Stupid build question...

2010-12-02 Thread Sean Clark
Is it not possible to build a version of bacula that supports both mysql
and postgresql?

(I enabled both, but got an error while compiling.  Googling turns up a
post from back in August mentioning that the error indicates that the
postgres portion of the build is trying to use the mysql headers, or
something of the sort).

Fedora Core 13, if it matters.



[Bacula-users] stop bitching about it

2010-11-08 Thread Sean Clark
On 11/05/2010 10:57 PM, Dan Langille wrote:
 Just stop bitching about it and do something.  We aren't
 here to listen to trolls.
[...]
This is kind of what I meant with my previous post about the hate level
in the replies here.  It's kind of surprising.

Did I miss some previous post by the guy who started this thread?  Does 
he post this a lot, or is this just a REALLY
touchy topic?

I really didn't get the impression he was trolling.  I'm not personally 
worried about Bacula Systems' business model doing any harm to the free 
edition of Bacula at all at the moment, but I can understand why someone 
might be these days.



Re: [Bacula-users] [Bacula-devel] Bacula Project will Die

2010-11-05 Thread Sean Clark
On 11/05/2010 09:12 AM, Kern Sibbald wrote:
 On Friday 05 November 2010 14:52:32 Kern Sibbald wrote:
 On Friday 05 November 2010 14:08:37 Heitor Medrado de Faria wrote:
 Guys,

 Each new Bacula Enterprise feature, like the: New GUI Configurator,
 makes me feel that Bacula project will die.
 It's very frustrating that a project that become a huge success being a
 free software, is being destroyed like that.
 I acknowledge that Kern and other developers had lots of development
 work on Bacula - and there is not huge contribution. But creating a paid
 fork is not the way of get compensation.
 [...]
 I find it ungrateful of you to complain about a not getting everything you
 want for free -- especially when Bacula Systems has contributed *far* more
 than the community in creating version 5.0.x.  One of the projects I am
 working on for the next community release is restarting failed jobs.  When
 I hear statements and complaints such as yours, it makes me wonder why I
 shouldn't just put that code only into the Enterprise version.
 Just so it is really clear, I had and have no intention of putting Restarting
 Failed Jobs only into the Enterprise version.

 Kern
Now, see, I think that's actually the kind of thing the original poster 
was worried about, not that it's just plain
bad to make money on Legally-Free software or anything.

There's been some discussion of the "open core" business model going
around lately, so it's not an unreasonable concern.  If it WERE the case
that a lot of the really useful new features were going paid-premium
only, I think it WOULD slowly kill off the free community version.  I
suspect that's really what Heitor was trying to express concern about
(and I suspect English is not his primary language, so his post ended up
looking more intense than perhaps he'd intended).

It sounds to me like the actual functionality in the community version
isn't likely to be allowed to stagnate any time soon, so I'm not too
worried about there being some extra bonus features in the paid version
(a GUI for configuration makes things easier to use, but not having it
doesn't prevent us from using the actual bacula functionality in the
community version).  I was just a little surprised that the hate level
in the replies so far seems to be turned up a notch or two higher than
necessary - at least if my interpretation of Heitor's original post is
correct.





Re: [Bacula-users] [Bacula-devel] Bacula Project will Die

2010-11-05 Thread Sean Clark
On 11/05/2010 03:22 PM, Ryan Novosielski wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 11/05/2010 04:09 PM, Sean Clark wrote:
 [...]
 It sounds to me like the actual functionality in the community version
 isn't likely to be allowed to stagnate any time soon, so I'm not too
 worried about there being some extra bonus features in the paid
 version (a GUI for configuration makes things easier to use, but not
 having it doesn't prevent us from using the actual bacula functionality
 in the community version).  I was just a little surprised that the hate
 level in the replies so far seem to be turned up a notch or two higher
 than necessary - at least if my interpretation of Heitor's original post
 is correct.
 You misread that message, I think:
No - I only meant to point out that whether or not features like that
WOULD be in the premium version only was what the original poster was
probably worried about (NOT to suggest that I thought it was actually
happening - hence my comment about not being worried about actual
functionality stagnating in the community version).

I was really commenting more on the fact that the followup message
saying "this new feature isn't really being restricted to the paid
version" was much more likely to address the concern in the original
post than the stream of angry "HOW DARE YOU, YOU INGRATE!" messages that
preceded it, which I think were a bit of an overreaction to a concern
expressed by a (presumably) non-native-English speaker in the first
message.  Indeed, the followup message was kind of necessary, given that
without it, it looked a lot like a threat to do exactly what the
original post was worried about (as though to paradoxically punish the
original poster for suggesting it).

I don't think the clumsily-worded post was really intended to be as 
accusatory as it seemed to a lot of people.  That's all.

I think I'm just suffering from flamewar fatigue this week...



Re: [Bacula-users] bcp, Bacula CoPy

2010-10-18 Thread Sean Clark
  On 10/18/2010 07:26 AM, Geert Stappers wrote:
 On 20101018 at 03:45, Dan Langille wrote:
 On 10/11/2010 4:55 AM, Geert Stappers wrote:
 bcp, Bacula CoPy, copies files from one bacula file daemon to
 another bacula-fd. Reading from the source computer is like making a backup
 and writing to the destination is like doing a restore.

 So the copy is done by a backup and a restore at the same moment
 without a storage-daemon involved.
 Why do you want these?  What's wrong with just using scp?
 `scp` does authentication on a per-user basis. `bcp` will use system-wide
 authentication based on the bacula passwords. So for `scp` one has to exchange
 ssh keys for each user; for `bcp` that is already done.
 My intended use case for `bcp` is synchronising data files for several
 users without the burden of ssh-key management for all those (system) users.

 `bcp` uses the backup ethernet segment by default. `scp` tends to prefer
 the production network.


 Regards
 Geert Stappers
Also, scp doesn't work with the Windows® clients, unless one wants 
to install Cygwin and OpenSSH (or a proprietary SSH implementation) on 
all of them.

Personally, I'd love to have a few optional (when enabled in 
bacula-fd.conf on the client) utility functions like this in bacula-fd.  
A simple, no-authentication-required "yes, I am running correctly" 
response that can be triggered with a plain-text telnet connection, a 
similarly simple, authentication-optional (depending on a setting in 
bacula-fd.conf) telnet-inducible "bacula-fd version and status" 
response, and a simple operating-environment report (free disk 
space, free RAM, CPU usage) from bacula-fd would all be handy for 
troubleshooting and monitoring.  (That first one would be a lot more 
comforting than the current "well, I can connect, and it disconnects me 
when I type something, so maybe it's working" method of troubleshooting.)
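
(The current method being, literally, something like this - with a
made-up hostname, and 9102 being the default bacula-fd port:

telnet some-client.example.com 9102

...and if it accepts the connection and then drops you the moment you
type anything, you shrug and assume the daemon is probably fine.)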



[Bacula-users] Hello counts as a Job to bacula-fd?

2010-09-23 Thread Sean Clark
  I've been setting Maximum Concurrent Jobs to 2 for most file daemons 
around here, since we should never actually be running more than 1 job 
(but we have room to run a second simultaneous job in case I ever think 
of a reason that we need to).

Having mistakenly forgotten to set "Allow Duplicate Jobs = no", I now 
have a user whose laptop is running two simultaneous Full 100GB+ 
backups.  I went to cancel the later-starting of the two and...I'm 
denied.  The director appears to hang for several minutes before finally 
giving me a "Like, Dude, Something Went Wrong!" message.  The error 
message suggests the problem is that I've hit the maximum concurrent 
jobs limit on the FD (of the other two possibilities it offers, I'm 
pretty confident the passwords match, and the two simultaneously-running 
jobs suggest there's nothing currently wrong with the networking...).
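
For the record, the directive I forgot lives in the Job resource on the 
director - something like the following (job name made up) is what I 
should have had in place:

Job {
  Name = "laptop-full-backup"
  Allow Duplicate Jobs = no     # refuse to start a second copy of this job
  # usual Client/FileSet/Storage/Pool/Schedule settings
}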

I find I can't even get client status.  "status client=(hostname)" for 
that system gives me:

23-Sep 12:46 bacula-dir JobId 0: Fatal error: Error sending Hello to 
File daemon at (hostname):9102. ERR=Interrupted system call

Does every connection count as a "job"?  And do I have any recourse 
other than either hoping that one of the jobs will actually finish 
getting through the remaining 70+GB before the end of the day (when the 
user will no doubt unplug his laptop and go home with it, leaving us 
still needing to get a full backup to run again for most of the day 
tomorrow), or halting the bacula director entirely to crash the job, 
which would also probably leave us unable to finish the full backup 
before quittin' time today and therefore still needing to do a full 
backup tomorrow?



Re: [Bacula-users] Hello counts as a Job to bacula-fd?

2010-09-23 Thread Sean Clark
  On 09/23/2010 01:17 PM, Phil Stracchino wrote:
 On 09/23/10 14:00, Sean Clark wrote:
 [...]

 Does every connection count as a job?
 No, but it does count as a *connection*, so if you have concurrency on
 the client set to 1, then any other access while a job is running will
 be denied or time out.  By the sound of it, you need to increase the
 concurrency on the client.
Well, that's what I mean, though - the terminology in the bacula-fd.conf 
file is "Maximum Concurrent Jobs", so I hadn't expected connections that 
aren't running a job to be refused.
 And do I have any recourse [...]
 By the sound of it, you're pretty much between a rock and a hard place
 there.  If the Director can't connect to send a cancel, then you really
 have no mechanism for killing just one of the running jobs
I did try ssh'ing into the client, updating bacula-fd.conf to increase 
the Maximum Concurrent Jobs, and then sending SIGHUP
to bacula-fd in hopes of getting it to accept more connections, but it 
looks like the only option I've got is to kill the jobs by brute
force (killing bacula-fd or bacula-dir) and try again tomorrow...



[Bacula-users] Minimum version of GCC needed to compile Bacula (5.0.3)?

2010-09-06 Thread Sean Clark
  I'm not finding it in the documentation so far...

I'm attempting to build a storage daemon on a Western Digital MyBook 
World Edition(tm) NAS, using the
most recent gcc available from ipkg, which unfortunately is all the way 
back to 3.4.6.  I'm getting some errors
as it tries to compile the bacula libraries, which I assume is why 
subsequent building of file and storage daemons fails.

Is 3.4.6 just too old, am I just doing something wrong, or is this some 
kind of bug?

Excerpt from the compile-time output below:

==Entering directory /shares/internal/temp/bacula-5.0.3/src/lib
make[1]: Entering directory `/shares/internal/temp/bacula-5.0.3/src/lib'
Compiling attr.c
Compiling base64.c
Compiling berrno.c
Compiling bsys.c
Compiling bget_msg.c
Compiling bnet.c
Compiling bnet_server.c
Compiling runscript.c
Compiling bsock.c
bsock.c: In member function `bool BSOCK::despool(void (*)(ssize_t), 
ssize_t)':
bsock.c:591: error: `posix_fadvise' was not declared in this scope
make[1]: *** [bsock.o] Error 1
make[1]: Leaving directory `/shares/internal/temp/bacula-5.0.3/src/lib'


   == Error in /shares/internal/temp/bacula-5.0.3/src/lib ==


==Entering directory /shares/internal/temp/bacula-5.0.3/src/findlib
make[1]: Entering directory `/shares/internal/temp/bacula-5.0.3/src/findlib'
Compiling find.c
Compiling match.c
Compiling find_one.c
Compiling attribs.c
Compiling create_file.c
Compiling bfile.c
bfile.c: In function `int bopen(BFILE*, const char*, int, mode_t)':
bfile.c:939: error: `posix_fadvise' was not declared in this scope
bfile.c: In function `int bclose(BFILE*)':
bfile.c:987: error: `posix_fadvise' was not declared in this scope
make[1]: *** [bfile.o] Error 1
make[1]: Leaving directory `/shares/internal/temp/bacula-5.0.3/src/findlib'


   == Error in /shares/internal/temp/bacula-5.0.3/src/findlib ==


==Entering directory /shares/internal/temp/bacula-5.0.3/src/filed
make[1]: Entering directory `/shares/internal/temp/bacula-5.0.3/src/filed'
Compiling filed.c
Compiling authenticate.c
Compiling acl.c
Compiling backup.c
Compiling estimate.c
Compiling fd_plugins.c
Compiling accurate.c
Compiling filed_conf.c
Compiling heartbeat.c
Compiling job.c
Compiling pythonfd.c
Compiling restore.c
Compiling status.c
Compiling verify.c
Compiling verify_vol.c
Compiling xattr.c
make[1]: *** No rule to make target `../findlib/libbacfind.a', needed by 
`bacula-fd'.  Stop.
make[1]: Leaving directory `/shares/internal/temp/bacula-5.0.3/src/filed'
