Re: [Bacula-users] console messages

2012-10-05 Thread Geert Stappers
Op 20121003 om 15:05 schreef Dan Langille:
 On 2012-09-28 07:00, Gary Stainburn wrote:
 I don't log into bconsole for maybe a week at a time and when I do
  I'm usually messing about with config settings.
 
  I tend to always run 'auto on' as the first command but can sometimes
  have to wait over 5 minutes for a week's worth of log entries to
  scroll up my screen.
 
  Is there a way to limit the entries displayed to say the last 4 hours,
  or the last 1000 lines?
 
 You could truncate the old log.  Look for bacula-dir.conmsg in the
 bacula-dir working directory.
 
 I have no idea of the repercussions of doing this.

Here is a less evil trick; execute it from a shell:

   echo messages | /etc/bacula/scripts/bconsole > /dev/null

Put it in a cronjob that runs at night
to have a (nearly) empty message buffer each morning.
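
For example, a nightly crontab entry could look like this (same bconsole
path as above; output is discarded so cron stays quiet):

   # m h dom mon dow  command -- flush the message buffer at 03:00
   0 3 * * * echo messages | /etc/bacula/scripts/bconsole > /dev/null 2>&1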


Cheers
Geert Stappers
-- 
http://www.vanadcimplicity.com/


Re: [Bacula-users] Simultaneously backup to local HDD and Amazon S3

2012-10-05 Thread Geert Stappers
Op 20121005 om 05:18 schreef Pubudu Perera:
 On Fri, Oct 5, 2012 at 7:05 AM, John Drescher wrote:
  On Thu, Oct 4, 2012 at 8:59 PM, Pubudu Perera wrote:
   Hello everyone,
  
   I'm a newbie to Bacula and want to verify whether my requirement
   can get done using bacula.  I want to know whether it's possible
   to simultaneously make  backups to local HDD and Amazon S3 from
   the same source in Bacula.  Can someone please help me with this?
  
 
  I would backup to the local drive then mirror that with rsync to S3
  via s3fs fuse.
 
 
 So, that means there's no way to do the job simultaneously?
 

How I understood the original question:

  How to make Bacula write a backup
  to 2 different devices at the same time?
  The solution should handle the difference in transfer speed.


I agree with John Drescher:

  * Have an inner smile
  * Confirm seeing the question with a response
  * Provide a suggestion that has the same end result
  * Allow people to read in the discussion order



Things I want to add:

  * The rsync to the second device could be an AfterJobCommand;
    see the sketch below.
  * Repeating in caps:  REPLY BELOW THE TEXT

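A minimal sketch of that AfterJobCommand idea, using a RunScript block in
the Job resource (job name, paths and mount point are made-up examples):

  Job {
    Name = "BackupClient1"
    # ... the usual Client/FileSet/Storage/Pool directives ...
    RunScript {
      RunsWhen = After
      RunsOnClient = No                       # run on the Director host
      FailJobOnError = Yes                    # a failed rsync errors the job
      Command = "rsync -a /backup/ /mnt/s3/"  # second device mounted there
    }
  }

This way the slower device never throttles the backup itself; rsync
copies afterwards at whatever speed that device can take.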

Cheers
Geert Stappers



Re: [Bacula-users] console messages

2012-10-05 Thread Gary Stainburn
On Friday 05 October 2012 07:42:33 Geert Stappers wrote:
 Here is a less evil trick; execute it from a shell:

    echo messages | /etc/bacula/scripts/bconsole > /dev/null

 Put it in a cronjob that runs at night
 to have a (nearly) empty message buffer each morning.


 Cheers
 Geert Stappers

Nice. That's the one I think I'll go for.  I've just run it from the
command line and it took a total of 3 seconds, so I think I can manage
without a cron job and run it manually.

Cheers.

Gary


-- 
Gary Stainburn
Group I.T. Manager
Ringways Garages
http://www.ringways.co.uk 



Re: [Bacula-users] Simultaneously backup to local HDD and Amazon S3

2012-10-05 Thread John Drescher
On Thu, Oct 4, 2012 at 11:18 PM, Pubudu Perera suharsha...@gmail.com wrote:
 Thanks, John!
 So, that means there's no way to do the job simultaneously?

You could run 2 concurrent jobs (one to each storage) but that would
cause 2 times the load on your client.
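
A rough sketch of how that could look in bacula-dir.conf (all resource
names here are invented; Maximum Concurrent Jobs must also allow two
jobs in the Director, Client and Storage resources):

  Job {
    Name = "BackupToDisk"
    Type = Backup
    Client = myclient-fd
    FileSet = "Full Set"
    Storage = LocalDisk        # File device on the local HDD
    Pool = DiskPool
    # Schedule, Messages, etc. as usual
  }

  Job {
    Name = "BackupToS3"
    Type = Backup
    Client = myclient-fd
    FileSet = "Full Set"
    Storage = S3Storage        # device backed by an S3 mount (e.g. s3fs)
    Pool = S3Pool
    # Schedule, Messages, etc. as usual
  }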

John



Re: [Bacula-users] Simultaneously backup to local HDD and Amazon S3

2012-10-05 Thread lst_hoe02

Zitat von John Drescher dresche...@gmail.com:

 On Thu, Oct 4, 2012 at 11:18 PM, Pubudu Perera suharsha...@gmail.com wrote:
 Thanks, John!
 So, that means there's no way to do the job simultaneously?

 You could run 2 concurrent jobs (one to each storage) but that would
 cause 2 times the load on your client.

 John

But not with Windows clients and VSS enabled. With this, only one job
at a time is started, and subsequent jobs have to wait until the first
finishes :-(

Andreas





Re: [Bacula-users] LTO3 tape capacity (variable?)

2012-10-05 Thread Stephen Thompson

Thank you everyone for your help!

Oracle replaced the drive and, while it's not running with as high a
throughput as I would like, it's at least up at the 60MB/s (random data)
that my other drives are at, rather than its previous 30MB/s.

I'm still going to experiment with some of the ideas that were tossed
out and see if I can't get even better throughput for Bacula.

thanks again,
Stephen



On 10/2/12 2:47 AM, Alan Brown wrote:
 On 02/10/12 01:35, Stephen Thompson wrote:


 Correction, the non-problem drive has a higher ECC fast error count,
 but the problem drive has a significantly higher Corrective algorithm
 invocations count.


 What that means is that it rewrote the data, which accounts for the
 lower throughput.

 LTO drives read as they write and if there are errors, they write again.

 If a cleaning tape doesn't work then you need to get the drive looked
 at/replaced under warranty.


 On 10/1/12 5:33 PM, Stephen Thompson wrote:

 On 10/1/12 4:06 PM, Alan Brown wrote:
 On 01/10/12 23:38, Stephen Thompson wrote:
 More importantly, I realized that my testing 6 months ago was not on
 all 4 of my drives, but only 2 of them.  Today, I discovered one of my
 drives (untested in the past) is getting half the throughput for random
 data writes compared to the others!!
 smartctl -a /dev/sg(drive) will tell you a lot

 Put a cleaning tape in it






 Cleaning tape did not improve results.

 I see some errors in the counter log on the problem drive, but I see
 even more errors on another drive which isn't having a throughput
 problem (specifically SL500 Drive 1 is the lower throughput, but C4
 Drive 1 actually has a higher error count).



 SL500 Drive 0 (~60MB/s random data throughput)
 ==============================================
 Error counter log:
            Errors Corrected by           Total   Correction     Gigabytes    Total
                ECC          rereads/    errors   algorithm      processed    uncorrected
            fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
 read:          0        0         0         0          0          0.000           0
 write:         0        0         0         0          0          0.000           0


 SL500 Drive 1 (~30MB/s random data throughput)
 ==============================================
 Error counter log:
            Errors Corrected by           Total   Correction     Gigabytes    Total
                ECC          rereads/    errors   algorithm      processed    uncorrected
            fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
 read:          0        0         0         0          0          0.000           0
 write:     10454        0         0         0     821389          0.000           0


 C4 Drive 0 (~60MB/s random data throughput)
 ===========================================
 Error counter log:
            Errors Corrected by           Total   Correction     Gigabytes    Total
                ECC          rereads/    errors   algorithm      processed    uncorrected
            fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
 read:          2        0         0         0          2          0.000           0
 write:         0        0         0         0          0          0.000           0


 C4 Drive 1 (~60MB/s random data throughput)
 ===========================================
 Error counter log:
            Errors Corrected by           Total   Correction     Gigabytes    Total
                ECC          rereads/    errors   algorithm      processed    uncorrected
            fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
 read:          2        0         0         0          2          0.000           0
 write:     18961        0         0         0      48261          0.000           0




 Stephen




-- 
Stephen Thompson   Berkeley Seismological Laboratory
step...@seismo.berkeley.edu215 McCone Hall # 4760
404.538.7077 (phone)   University of California, Berkeley
510.643.5811 (fax) Berkeley, CA 94720-4760



Re: [Bacula-users] Bacula Status

2012-10-05 Thread lst_hoe02

Zitat von Kern Sibbald k...@sibbald.com:

 Hello,
 ...
 My time:
 Due to my heavy workload in ensuring certain administrative aspects of
 Bacula Systems as well as working on major Bacula Systems programming
 projects, I am attempting to optimize my use of time.  One way I plan to
 reduce my workload is to stop doing the updates necessary to maintain the
 Windows platform as well as the Windows builds.  As a result, there are
 no Windows binaries for Bacula version 5.2.12 -- this isn't a very big
 problem since there were very few changes to the FD, if any, so everyone
 can continue using the 5.2.10 Windows binaries.  However, in the long run
 (9 months to a year), when significant changes are made to the Windows
 code or the libraries that they use, this will become a problem, so it
 would be nice to find an alternative.  There are three alternatives that
 I can see:

 1. You build them yourself, as you do with the Linux binaries (unless
 you use distro binaries, which can be quite old and out of date).

 2. Some community user learns how to build them and makes them available.

 3. Bacula Systems supplies them.

 Comments about the above:

 1. Building your own is not too practical, because you need to be a C++
 programmer and have a number of mingw tools built and loaded.  The
 process is well documented, but not very easy to set up.

 2. Having a C++-knowledgeable community member build them is a bit more
 practical, but it is often hard to find volunteers, and, as is just a
 fact of open source life, the volunteer's life, time, or priorities
 change and they often don't continue long term.

 3. Having Bacula Systems build them would work nicely, since it is a
 long-term solution.  The only consideration is that Bacula Systems will
 want some very nominal financial compensation for doing so.

 You might also want to think about another idea, which is: perhaps
 Bacula Systems would be willing to provide binaries for a number of
 different platforms such as RedHat/CentOS, where Bacula versions tend
 to lag seriously behind the development code.

 I would appreciate your opinions on these, and if you wish to express
 them publicly, please send them to the bacula-devel list (and the
 bacula-users list).  If you wish to express them privately, simply
 address an email just to me.  Please don't hesitate to indicate what
 sort of price you might be willing to pay for one or both of these
 services.

Hello,

sorry for being late on this, but here it goes:

Until now we are only in the test phase, but it is impressive what
features and stability Bacula provides. I agree with you that it is a
problem that many Linux distributions provide age-old versions (Ubuntu
8.04, still in service, comes with Bacula 2.4.2) and it would be nice to
have some reliable build service for binaries. From my point of view
it would be no problem even for small companies to spend a yearly fee
on this. Letting the users do the build themselves will not work because
when you most need the binaries (restore), no one likes having to get a
build environment running first. That said, if there is a need for a
community build service, we might help out with machine power and build
environments.

Regards

Andreas






Re: [Bacula-users] wilddir not working as expected

2012-10-05 Thread Martin Simmons
 On Thu, 04 Oct 2012 21:43:32 +0200, Radim Kolar said:
 
 I have following fileset:
 
 FileSet {
Name = Web Crawler
 
Include {
  Options {
signature = MD5
compression = GZIP9
exclude = yes
wilddir = target
  }
 
  File = /home/crawler
}
Exclude {
  File = /home/crawler/nutch-1.3/runtime/local/segments
  File = /home/crawler/solr/example/solr/nutch/data
  File = /home/crawler/.m2/repository
}
 }
 
 The idea is not to back up directories which have target in the name, like:
 
 ./conf/target
 ./packages/tools/target
 ./packages/info/target
 ./packages/plugins/target
 ./info/target
 ./src/target
 ./src/plugin/index-basic/target
 ./src/plugin/parse-html/target
 ./src/plugin/urlnormalizer-pass/target
 ./src/plugin/parse-js/target
 ./src/plugin/microformats-reltag/target
 ./src/plugin/protocol-ftp/target
 ./src/plugin/urlnormalizer-basic/target
 
 but it didn't work; all target directories are backed up. Can anybody
 spot the mistake?

I think you need wilddir = "*/target", because wilddir matches against the
whole path, not just the final component.
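
In other words, something like this in the Options block (the pattern is
quoted so the * reaches Bacula's matcher intact):

   Options {
     signature = MD5
     compression = GZIP9
     exclude = yes
     wilddir = "*/target"
   }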

__Martin



Re: [Bacula-users] Network error with FD during Backup: ERR=Connection reset by peer

2012-10-05 Thread Josh Fisher

On 10/4/2012 4:05 AM, DAHLBOKUM Markus (FPT INDUSTRIAL) wrote:


Hi Tom,

Thank you for your answer.

 The heartbeats are only set up when a job with a client is initiated.
 So, there should be no activity when no job is running.  When you
 initiate a job with the client, the director sets up a connection with
 the client, telling the client what storage daemon to use.  The client
 then initiates a connection back to that storage daemon.  If you have
 the heartbeat settings in place as you do, then you should see heartbeat
 packets sent from the client back to the director in order to keep that
 connection alive while the data is being sent back to the storage
 daemon.  In addition, you may see heartbeat packets sent from the
 storage daemon to the client.  I'd have to re-look at the code, but I
 believe this is used in the scenario where the storage daemon is waiting
 for a volume to write the data to (i.e. operator intervention).  If the
 heartbeat setting is on, then the storage daemon will send heartbeats
 back to the client in order to keep the connection alive while it waits.

Yesterday I waited for the job to finish the first tape and then wait
for me to insert the next one.

I opened wireshark to see if there is a heartbeat during the wait - and
there was none. During the job the heartbeat was active.

From what you wrote, the heartbeat should be active when waiting for a
tape. Could you try to confirm that (have a look at the code)?

As one side of the backup is a VMware server, I had a closer look at the
configuration of this environment.

As far as I know, Michael's environment (the starter of this thread) also
includes VMware. So this might be interesting for him.

My job cancels exactly 15 min after entering the wait mode for a new
tape. In the VMware settings there is an idle timeout set to 900 sec
(i.e. 15 min).

The timeout doesn't exactly fit that kind of connection, but you
never know.

I disabled this timeout now and restarted my backup. In 7 hours I will
see the result.

But even if this setting caused the trouble, I would have thought the
heartbeat should solve this (idle connection timeout).

Again, it would be good to know whether the heartbeat should be active
while waiting for a tape.
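
The heartbeat settings mentioned above are of this form (a minimal
sketch; the 60-second interval is just an example value):

  # bacula-fd.conf
  FileDaemon {
    Name = myclient-fd
    Heartbeat Interval = 60   # keepalives on otherwise idle connections
  }

  # bacula-sd.conf
  Storage {
    Name = mysd-sd
    Heartbeat Interval = 60
  }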




It could be the client OS timing out too. For example, network activity 
in a Windows daemon does not necessarily keep Windows from going into 
suspend. This is because the myriad daemons checking for updates, etc. 
could potentially keep the machine from ever suspending. So Windows has 
an API function SetThreadExecutionState() that a daemon can use to 
prevent or allow suspend on an as-needed basis. Recent versions of
Windows are more aggressive on power management and default to allowing 
suspend even when there is network activity. I'm not sure what happens 
in VMWare when a client OS suspends a virtual NIC, but my guess is that 
the VMWare timeout might be ignored if the client OS powers off the 
interface on its own.




Re: [Bacula-users] Simultaneously backup to local HDD and Amazon S3

2012-10-05 Thread Pubudu Perera
Oh OK. Thanks a lot for the valuable tip!

On Fri, Oct 5, 2012 at 5:37 PM, John Drescher dresche...@gmail.com wrote:

 On Thu, Oct 4, 2012 at 11:18 PM, Pubudu Perera suharsha...@gmail.com
 wrote:
  Thanks, John!
  So, that means there's no way to do the job simultaneously?
 
 You could run 2 concurrent jobs (one to each storage) but that would
 cause 2 times the load on your client.

 John



Re: [Bacula-users] Simultaneously backup to local HDD and Amazon S3

2012-10-05 Thread John Drescher
On Fri, Oct 5, 2012 at 8:35 AM,  lst_ho...@kwsoft.de wrote:

 Zitat von John Drescher dresche...@gmail.com:

 On Thu, Oct 4, 2012 at 11:18 PM, Pubudu Perera suharsha...@gmail.com wrote:
 Thanks, John!
 So, that means there's no way to do the job simultaneously?

 You could run 2 concurrent jobs (one to each storage) but that would
 cause 2 times the load on your client.

 John

 But not with Windows clients and VSS enabled. With this, only one job
 at a time is started, and subsequent jobs have to wait until the first
 finishes :-(


For that you may (depending on your setup) be able to work around the
bug/issue by dividing your fileset into the part that needs VSS and
all the rest, and then run 4 jobs instead of 2; a sketch follows below.
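
A sketch of such a split (FileSet names and paths are made up; Enable VSS
is set per FileSet):

  FileSet {
    Name = "WinSystemVSS"
    Enable VSS = yes               # open/system files need the snapshot
    Include {
      Options { signature = MD5 }
      File = "C:/"
    }
  }

  FileSet {
    Name = "WinDataNoVSS"
    Enable VSS = no                # plain data, can run concurrently
    Include {
      Options { signature = MD5 }
      File = "D:/data"
    }
  }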

John



Re: [Bacula-users] end of tape error issue and file rearchived issue

2012-10-05 Thread lst_hoe02

Zitat von Durand Toto gnew...@gmail.com:

 Hi all,

 I've been running bacula for more than a month now and it works quite
 smoothly except for two issues:
 1: some files are rearchived whereas I have no reason to believe they
 have changed. Could this be due to the use of SHA1 instead of MD5 ? Does it
 rearchive if the files have only been accessed but not modified ?
 2: I get the following message when I reach the end of a tape
 (consistently) and it's worrying me seriously as regards archive space
 issues; I don't have another copy of the data (for legal reasons I need
 to archive a petabyte of data but I don't have the budget to put that on
 HDDs). The rest seems fine but I am afraid I could lose data that way.

 Error: Re-read last block at EOT failed. ERR=block.c:1029 Read zero
 bytes at 818:0 on device LTO5 (/dev/nst0).

 I'm running debian stable with bacula 5.0 and a DELL TL2000

 Any ideas ?


Hello

check compatibility with the btape utility
(http://www.bacula.org/5.2.x-manuals/en/utility/utility/Volume_Utility_Tools.html#SECTION0029);
LTO-5 should be working fine. You should also consider raising the
blocksize and maximum file size for LTO-5 to get more speed. Be aware
that changing the blocksize prevents Bacula from reading previously
used tapes. A sketch of such a setup follows below.
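
A sketch of such a Device resource in bacula-sd.conf (the values are
examples only; verify any change with btape before relying on it):

  Device {
    Name = LTO5
    Media Type = LTO-5
    Archive Device = /dev/nst0
    Maximum Block Size = 2097152   # 2 MB, up from the ~64 KB default
    Maximum File Size = 20G        # fewer EOF marks written on the tape
  }

The btape check is run against the same config, e.g.:

   btape -c /etc/bacula/bacula-sd.conf /dev/nst0

and then the test command at the btape prompt.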

Regards

Andreas





Re: [Bacula-users] Simultaneously backup to local HDD and Amazon S3

2012-10-05 Thread Edward
On 5 October 2012 02:35, John Drescher dresche...@gmail.com wrote:
 On Thu, Oct 4, 2012 at 8:59 PM, Pubudu Perera suharsha...@gmail.com wrote:
 Hello everyone,

 I'm a newbie to Bacula and want to verify whether my requirement can get
 done using bacula.
 I want to know whether it's possible to simultaneously make  backups to
 local HDD and Amazon S3 from the same source in Bacula.

 Can someone please help me with this?
 Thanks in advance.

 I would backup to the local drive then mirror that with rsync to S3
 via s3fs fuse.

 John

 --
 Don't let slow site performance ruin your business. Deploy New Relic APM
 Deploy New Relic app performance management and know exactly
 what is happening inside your Ruby, Python, PHP, Java, and .NET app
 Try New Relic at no cost today and get our sweet Data Nerd shirt too!
 http://p.sf.net/sfu/newrelic-dev2dev
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users

From my experience, s3cmd is much more reliable than s3fs.
There's a sync command in s3cmd that acts like rsync; it's probably
more efficient with bandwidth as well. You could run s3cmd in a post-job
script and set the job to error if the s3cmd sync command fails. A sketch
follows below.
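
A sketch of that approach (bucket name, paths and script name are made
up; s3cmd must already be configured via s3cmd --configure):

  #!/bin/sh
  # /usr/local/bin/sync-to-s3.sh -- mirror the local backup volumes to S3.
  # Hook it into the job with
  #   RunScript { RunsWhen = After; RunsOnClient = No;
  #               FailJobOnError = Yes;
  #               Command = "/usr/local/bin/sync-to-s3.sh" }
  # so that a non-zero exit from s3cmd marks the Bacula job in error.
  exec s3cmd sync /backup/ s3://my-backup-bucket/bacula/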


Ed



Re: [Bacula-users] Simultaneously backup to local HDD and Amazon S3

2012-10-05 Thread Pubudu Perera
Yes, I have heard that there are issues with s3fs's reliability. I think
I'll need to start looking at s3cmd as well.
Thanks for your valuable time, Ed!

Pubudu.

On Sat, Oct 6, 2012 at 4:24 AM, Edward edjun...@gmail.com wrote:

 On 5 October 2012 02:35, John Drescher dresche...@gmail.com wrote:
  On Thu, Oct 4, 2012 at 8:59 PM, Pubudu Perera suharsha...@gmail.com
 wrote:
  Hello everyone,
 
  I'm a newbie to Bacula and want to verify whether my requirement can get
  done using bacula.
  I want to know whether it's possible to simultaneously make  backups to
  local HDD and Amazon S3 from the same source in Bacula.
 
  Can someone please help me with this?
  Thanks in advance.
 
  I would backup to the local drive then mirror that with rsync to S3
  via s3fs fuse.
 
  John
 
 

 From my experience, s3cmd is much more reliable than s3fs.
 There's a sync command in s3cmd that acts like rsync; it's probably
 more efficient with bandwidth as well. You could run s3cmd in a post-job
 script and set the job to error if the s3cmd sync command fails.


 Ed
