Re: [Bacula-users] Bacula for OSX 10.9

2014-09-07 Thread Paul Mather
On Sep 7, 2014, at 5:42 AM, Kern Sibbald k...@sibbald.com wrote:

 On 09/07/2014 07:33 AM, Eric Dannewitz wrote:
 I'm interested in perhaps deploying this in my K-8 school, but I have not 
 found a good tutorial on how to install it, or whether it even works right on 
 a Mac.
 
 Anyone have some insights on this? My idea would be to back up about 30 Macs 
 to an Ubuntu server.
 
 This would be a good way to set up Bacula.  The Director, SD and catalog
 work well on an Ubuntu server -- I recommend Trusty (14.04).  For the
 Macs, someone has probably made binaries and distributes them on the
 Internet.  Otherwise, if you load all the appropriate build tools on the
 Mac, you can easily build the FD.   Later this year, Bacula Systems will
 provide free binaries for Mac OS X, which should also help.

I've not used Bacula on a Mac, but I do notice that Homebrew 
(http://brew.sh) has a formula for bacula-fd, which could be used to 
install the client.  Right now, it's only for the 5.x version (5.2.13), 
though.
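
If you go that route, installation should be the usual Homebrew one-liner 
(untested by me, so treat it as a sketch):

$ brew install bacula-fd

You would then still need to point the resulting bacula-fd.conf at the 
Director on your Ubuntu server.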

I hope this helps.

Cheers,

Paul.




Re: [Bacula-users] Recommended hardware

2013-05-03 Thread Paul Mather
On May 3, 2013, at 12:03 PM, lst_ho...@kwsoft.de wrote:

 
 Quoting Francisco Garcia Perez fga...@gmail.com:
 
 Hello,
 I have a PowerVault 124T, but I want to buy a new backup system with support
 for my old LTO-2 and LTO-3 tapes, with at least 24 slots, 2 drives, and a
 barcode scanner. What do you recommend?
 
 
 If you really need support for reading your old LTO-2 tapes you are  
 limited to LTO-4, because read compatibility is only maintained two  
 steps down, as far as I know. Instead, you should use at  
 least LTO-5 today and copy over whatever data you still need from the old LTO  
 tapes. As for brands recommended for tape libraries: most of the  
 mid-sized libraries (HP, IBM, Dell) are rebranded BDTs, which work well  
 with Bacula. As for the tape drives, some say that full-height drives are  
 more solid than half-height ones, but I don't have first-hand experience of  
 this.


Regarding the full-height vs. half-height issue, IMHO it pays to read the 
small print.

In my case, we bought a Quantum SuperLoader3 LTO-4HH SAS, 16 Slots/2 Magazine 
2U rack mount unit. I assumed at the time the HH (half-height) only affected 
the physical form factor and nothing else.  Turns out I was wrong.  Puzzled by 
the slower speeds I got with btape (65 to 78 MB/s), I took a hard look at the 
data sheet for the SuperLoader3 family[1] and the explanation leapt out at me.  
The quoted performance differs between the LTO-4HH and LTO-4 drives: 288 
GB/hour vs 432 GB/hour (native speeds).  The former equates to about 82 MB/s 
whereas the latter parallels the LTO-4 spec max native speed of 120 MB/s.

So, there was a definite difference between going half-height and full-height 
there.  Unfortunately, for their LTO-4 SAS offerings, only (slower) half-height 
was available.  Fortunately, for LTO-5 and above, there was no documented speed 
penalty for going half-height vs. full-height, so maybe Quantum have got their 
engineering sorted out re: what was affecting their LTO-4 drives?

In all other respects, though, I have been very happy with the Quantum 
SuperLoader3 LTO-4HH SAS unit.  (I realise this doesn't meet the OP's 
specifications, though.)

Cheers,

Paul.

[1] https://iq.quantum.com/exLink.asp?8357556OK69N63I33059211


Re: [Bacula-users] Backup strategy with Bacula

2013-03-27 Thread Paul Mather
On Mar 26, 2013, at 11:44 PM, Bill Arlofski waa-bac...@revpol.com wrote:

 On 03/26/13 22:33, Wood Peter wrote:
 Hi,
 
 I'm planning to use Bacula to do file level backup of about 15 Linux
 systems. Total backup size is about 2TB.
 
 For Bacula server I'm thinking to buy Dell PE R520 with 24TB internal
 storage (8x 3TB disks) and use virtual tapes.
 
 Hi Peter.   First, you're going to want some RAID level on that server and
 RAID0 is not (IMHO) RAID at all.  :)
 
 At a minimum you're going to want to set up (for example) RAID5 in the server
 which will give you a maximum of 7 x 3 = 21 TB since the equivalent of 1 full
 drive is used for parity, spread across all of the drives in a RAID5 array.
 
 Having said that, RAID5 does not have the best write speeds, but other RAID
 levels that will give more redundancy and better write speeds will use
 significantly more drives and give you less total storage. You may need to
 spend some time to consider your redundancy and read/write throughput
 requirements.
 
 Also, you will generally want to configure at least one drive as a hot spare
 so the RAID controller can automatically and immediately fail over to it in
 the case of a drive failure in the array.
 
 So that takes away at least 3 more TB, so now you're down to 18TB total storage
 with a minimally configured RAID5 array with 1 hot spare.
 
 Just some things to consider. :)


Another thing to consider is that with large capacity drives (3 TB) combined 
into large RAID-5 arrays there is an increased likelihood of a catastrophic 
array failure during an array rebuild due to an initial drive failure.  For 
this reason, RAID-6 would be preferred for such large arrays.
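
To put rough numbers on that (assuming the commonly quoted consumer-drive 
unrecoverable read error rate of 1 in 10^14 bits): rebuilding one failed 3 TB 
drive in an 8 x 3 TB RAID-5 array means reading the surviving 7 x 3 TB = 21 TB, 
which is about 1.7 x 10^14 bits, so on average you would expect more than one 
unrecoverable read error -- and hence a failed rebuild -- somewhere along the 
way.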

Reliability of large RAID arrays is one of the motivations behind raidz3 
(triple-parity redundancy) in ZFS.  See, e.g., 
http://queue.acm.org/detail.cfm?id=1670144 for details.

Cheers,

Paul.




Re: [Bacula-users] Job canceling tip

2013-03-05 Thread Paul Mather
On Mar 5, 2013, at 9:54 AM, Dan Langille d...@langille.org wrote:

 On 2013-03-04 04:42, Konstantin Khomoutov wrote:
 On Mon, 4 Mar 2013 08:45:05 +0100
 Geert Stappers geert.stapp...@vanadgroup.com wrote:
 
 [...]
 Thank you for the tip. I want to share another.
 It is about canceling multiple jobs. Execute from shell
 
   for i in {17..21} ; do echo "cancel yes jobid=404${i}" | bconsole ; done
 
 Five jobs, 40417-40421, will be canceled.
 
 A minor nitpick: the construct
 
 for i in {17..21}; do ...
 
 is a bashism [1], so it won't work in a plain POSIX shell.
 
 A good point! I tried the above on FreeBSD:
 
 $ cat test.sh
 #!/bin/sh
 
 for i in {17..21} ; do echo "cancel yes jobid=404${i}" ; done
 
 
 [dan@bast:~/bin] $ sh ./test.sh
 cancel yes jobid=404{17..21}
 [dan@bast:~/bin] $
 
 
 A portable way to do the same is to use the `seq` program
 
 for i in `seq 17 21`; do ...
 
 or to maintain an explicit counter:
 
 i=17
 while [ $i -le 21 ]; do ...; i=$(($i+1)); done
 
 Then I tried this approach but didn't find seq at all.  I tried sh, 
 csh, and tcsh.


The seq command first appeared in FreeBSD 9, so if you tried it on earlier 
versions that's probably why you didn't find it.

Using seq, you might have to use -f %02g to get two-digit sequences with 
leading zeros (or -f %0Ng to get N-digit sequences with leading zeros).
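
Putting the two together, a zero-pad-safe version of the original loop might 
look like this (a sketch, assuming seq is available and you are cancelling, 
say, jobs 40405 to 40423):

for i in `seq -f %02g 5 23` ; do echo "cancel yes jobid=404${i}" | bconsole ; done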


 But I know about jot.  This does 5 numbers, starting at 17:
 
 $ jot 5 17
 17
 18
 19
 20
 21
 
 Thus, the script becomes:
 
 $ cat test.sh
 #!/bin/sh
 
 for i in `jot 5 17` ; do echo "cancel yes jobid=404${i}" ; done
 
 
 $ sh ./test.sh
 cancel yes jobid=40417
 cancel yes jobid=40418
 cancel yes jobid=40419
 cancel yes jobid=40420
 cancel yes jobid=40421


With jot you can shorten this even further:

jot -w "cancel yes jobid=404%g" 5 17

Again, you might want to zero-pad if you are cancelling, say, jobs 40405 to 
40423:

jot -w "cancel yes jobid=404%02g" 19 5

Or, better yet, just start from the job range beginning itself:

jot -w "cancel yes jobid=%g" 19 40405
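
And, as with the shell loop earlier in the thread, the generated commands can 
be piped straight into bconsole:

jot -w "cancel yes jobid=%g" 19 40405 | bconsole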

Cheers,

Paul.




Re: [Bacula-users] FreeBSD 9 and ZFS with compression - should be fine?

2012-02-10 Thread Paul Mather
On Feb 10, 2012, at 1:53 AM, Silver Salonen wrote:

 On Thu, 9 Feb 2012 14:58:33 -0500, Paul Mather wrote:
 On Feb 9, 2012, at 2:21 PM, Steven Schlansker wrote:
 On the flip side, compression seems to be a very big win.  I'm 
 seeing ratios from 1.7 to 2.5x savings and the CPU usage is claimed to 
 be relatively cheap.
 
 
 That's what I am seeing, too.  On the fileset I tried to dedup, I'm
 currently seeing a compressratio of 1.51x, which I'm happy with for
 that data.  Enabling ZFS compression appears to have negligible
 overheads, so having turned it on is a big win for me.
 
 I'll ask just in case - you don't have Bacula FD's compression enabled 
 for these filesets which give these compression ratios, do you?


No, this is just using ZFS option compression=gzip-9 on the fileset in the 
pool.

Cheers,

Paul.





Re: [Bacula-users] FreeBSD 9 and ZFS with compression - should be fine?

2012-02-09 Thread Paul Mather
On Feb 9, 2012, at 2:21 PM, Steven Schlansker wrote:

 
 On Feb 9, 2012, at 11:05 AM, Mark wrote:
 Steven, out of curiosity, do you see any benefit with dedup (assuming that 
 bacula volumes are the only thing on a given zfs volume).  I did some 
 initial trials and it appeared that bacula savesets don't dedup much, if at 
 all, and some searching around pointed to the bacula volume format writing a 
 unique value (was it jobid?) to every block, so no two blocks are ever the 
 same.  I'd back up hundreds of gigs of data and the dedupratio always 
 remained at 1.00x.
 
 I didn't do any research, but can confirm that it seems to be useless to turn 
 dedup on.  My pool has always been at 1.00x.
 I'm going to turn it off, because from what I hear dedup is pretty expensive 
 to run, especially if you don't actually save anything by it.


Some time ago, I enabled dedup on a fileset with ~8 TB of data (about 4 million 
files) on a FreeBSD 8-STABLE system.  Bad move!  The machine has 16 GB of RAM, 
but enabling dedup utterly killed it.  I discovered, through further research, 
that dedup requires either a lot of RAM or a read-optimised SSD to hold the 
dedup table (DDT).  Small filesets may work fine, but anything else will 
quickly eat up RAM.  Worse still, the DDT is considered ZFS metadata, and so is 
limited to 25% of the ARC, so you need a huge ARC for a large DDT.  I've read 
that a rule of thumb is that for every 1 TB of data you should expect about 
5 GB of DDT, assuming an average block size of 64 KB.  At large sizes, 
therefore, it's not feasible to store the entire DDT in RAM, and so you'd be 
looking at a low-latency L2ARC device instead (e.g., an SSD).
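
For anyone determined to try it anyway, ZFS can estimate the cost up front (a 
sketch, assuming a pool named tank; zdb -S only simulates dedup, it doesn't 
enable anything):

backup# zdb -S tank     # simulated dedup: prints an estimated DDT histogram
backup# zdb -DD tank    # for a pool already deduping: actual DDT statistics

The rule of thumb above is just arithmetic: 1 TB at an average 64 KB block size 
is roughly 16 million blocks, and at roughly 320 bytes of DDT per entry that 
comes to about 5 GB of table.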


 On the flip side, compression seems to be a very big win.  I'm seeing ratios 
 from 1.7 to 2.5x savings and the CPU usage is claimed to be relatively cheap.


That's what I am seeing, too.  On the fileset I tried to dedup, I'm currently 
seeing a compressratio of 1.51x, which I'm happy with for that data.  Enabling 
ZFS compression appears to have negligible overheads, so having turned it on is 
a big win for me.

Cheers,

Paul.





Re: [Bacula-users] Getting back to the basics; Volumes, Pools, Reusability

2011-08-05 Thread Paul Mather
On Aug 4, 2011, at 8:46 PM, Joseph L. Casale wrote:

 Why do we use volumes?  It sounds like a silly question, but it's
 genuine.  Is it so a backup can span several media types?  Tape, file,
 disk, Pandora's box, what?  Why do I care about volumes and how long
 they're retained, how often they're pruned, recycled, or what type
 they are?
 
 You have to track where the data was placed, literally, as tapes are
 finite in size and files shouldn't be infinite. It's a way to manage the
 correlation.  Don't think that files are any different from tapes; they
 have much the same limitations, except for the obvious ones like wear.


Well, there are some very important differences between tape and disk that 
directly influence the implementation of things like volume recycling.  
Specifically, if you want Bacula to be somewhat media agnostic, then you have 
to pander to the more restrictive tape media, which makes some decisions seem 
illogical when applied to disk.  With tape, file marks explicitly delimit files 
on the linear tape; random addressing is difficult, though appending is 
natural.  This explains why volumes are recycled wholly and not partially.  For 
many, this doesn't make sense for the more flexible disk media, which doesn't 
require its blocks to be written contiguously.  But, for an abstracted storage 
model, it makes sense to let the lowest common denominator dictate to more 
flexible media types.

Many Bacula concepts make a lot more sense if you think in terms of managing 
tape.

Cheers,

Paul.





Re: [Bacula-users] Reliable Backups without Tapes?

2011-07-15 Thread Paul Mather
On Jul 14, 2011, at 7:56 PM, Ken Mandelberg wrote:

 Under Legato, the license restrictions artificially kept the file-device 
 small relative to the tape storage. However, these days disks are 
 cheaper than tapes, and, license free, we could afford a lot of disk space.

I know hard drives are cheap these days, but I didn't realise they were now 
cheaper than tape.  LTO-4 media worked out at less than 5 cents per GB 
(uncompressed) the last time I bought it, and I believe LTO-5 is less than 4 
cents per GB (again, uncompressed).

But, don't underestimate some of the perhaps-neglected advantages of tape:

- Easier to offsite for long-term archiving and disaster recovery
- WORM capability for regulatory compliance where needed
- Easier to expand total capacity---just buy more media
- Increasing capacity doesn't continually eat up rack space and drive up power 
consumption

Personally, I like the fact that Bacula supports a mixed disk/tape solution, 
allowing for disk to provide faster near-line access to more recent backups 
(e.g., incrementals) and tape for older material.

Cheers,

Paul.




Re: [Bacula-users] Invalid Tape position - Marking tapes with error

2011-07-07 Thread Paul Mather
On Jul 7, 2011, at 2:13 PM, Martin Simmons wrote:

 On Thu, 7 Jul 2011 10:30:36 -0600, James Woodward said:
 
 Hello,
 
 I haven't seen anything about using nsa devices instead of something like
 an sa device. I did a bit of a search but haven't found anything that really
 explains that part to me.
 
 http://www.bacula.org/5.0.x-manuals/en/main/main/Storage_Daemon_Configuratio.html#SECTION00203
 http://www.bacula.org/5.0.x-manuals/en/problems/problems/Testing_Your_Tape_Drive.html#SECTION00413000
 
 Also, all of the examples use non-rewind devices.
 
 
 When I set these up I did see a recommendation to use the pass-through
 devices, e.g., pass0, pass1, pass2, pass3. I don't remember the specifics, but
 the pass-through devices themselves seemed to be problematic.
 
 Yes, the passthrough devices shouldn't be used for data transfer.  They should
 be used to control an autochanger robot.


Normally, under FreeBSD, you will have a ch device for your autoloader, which 
you should use instead.  E.g.,

tape# camcontrol devlist
<QUANTUM ULTRIUM 4 2210>   at scbus0 target 8 lun 0 (sa0,pass0)
<QUANTUM UHDL 0075>        at scbus0 target 8 lun 1 (ch0,pass1)
[[...]]


The chio command uses /dev/ch0 by default.
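
For completeness, the matching bacula-sd.conf plumbing looks something like 
this (a sketch only: the resource names here are made up, and the paths should 
be checked against your own install):

Autochanger {
  Name = "Autoloader"
  Device = "LTO4-Drive"
  Changer Device = /dev/ch0
  Changer Command = "/usr/local/sbin/mtx-changer %c %o %S %a %d"
}

Device {
  Name = "LTO4-Drive"
  Media Type = LTO-4
  Archive Device = /dev/nsa0
  Autochanger = yes
  RemovableMedia = yes
  AutomaticMount = yes
}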

Cheers,

Paul.




Re: [Bacula-users] Invalid Tape position - Marking tapes with error

2011-07-07 Thread Paul Mather
On Jul 7, 2011, at 12:30 PM, James Woodward wrote:

 Hello,
 
 I haven't seen anything about using nsa devices instead of something like an 
 sa device. I did a bit of a search but haven't found anything that really 
 explains that part to me.


The FreeBSD tape driver uses different device names as shortcuts for the 
various different action-on-close semantics.  The most common are /dev/saN, 
which will rewind the tape when the device is closed, and /dev/nsaN, which does 
not rewind the tape when the device is closed.  (Think "n" for non-rewinding.)  
If you want multiple files on a single tape, you will usually want to use 
/dev/nsaN.

See the sa(4) man page for details.
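
For example, to put two archives on one tape and then read back the second 
(tar here is just a stand-in for whatever writes your data):

tape# tar cvf /dev/nsa0 dir1    # written as file 0; no rewind on close
tape# tar cvf /dev/nsa0 dir2    # appended as file 1
tape# mt -f /dev/nsa0 rewind
tape# mt -f /dev/nsa0 fsf 1     # skip forward past file 0
tape# tar tvf /dev/nsa0         # lists the dir2 archive

Had /dev/sa0 been used instead, the second tar would have overwritten the 
first.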

Cheers,

Paul.




Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Paul Mather
On Jan 20, 2011, at 11:01 AM, John Drescher wrote:

 This is normal. If you want fast compression do not use software
 compression and use a tape drive with HW compression like LTO drives.
 
 John
 Not really an option for file/disk devices though.
 
 I've been tempted to experiment with BTRFS using LZO or standard zlib
 compression for storing the volumes and see how the performance compares
 to having bacula-fd do the compression before sending - I have a
 suspicion the former might be better..
 
 
 Doing the compression at the filesystem level is an idea I have wanted
 to try for several years. Hopefully one of the filesystems that
 support this becomes stable soon.

I've been using ZFS with a compression-enabled fileset for a while now under 
FreeBSD.  It is transparent and reliable.  Looking just now, I'm not getting 
great compression ratios for my backup data: 1.09x.  I am using the 
speed-oriented compression algorithm on this fileset, though, because the 
hardware is relatively puny.  (It is a Bacula test bed.)  Probably I'd get 
better compression if I enabled one of the GZIP levels.

Cheers,

Paul.





Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Paul Mather
On Jan 20, 2011, at 12:44 PM, Dan Langille wrote:

 
 On Thu, January 20, 2011 12:28 pm, Silver Salonen wrote:
 On Thursday 20 January 2011 19:02:33 Paul Mather wrote:
 On Jan 20, 2011, at 11:01 AM, John Drescher wrote:
 
 This is normal. If you want fast compression do not use software
 compression and use a tape drive with HW compression like LTO
 drives.
 
 John
 Not really an option for file/disk devices though.
 
 I've been tempted to experiment with BTRFS using LZO or standard zlib
 compression for storing the volumes and see how the performance
 compares
 to having bacula-fd do the compression before sending - I have a
 suspicion the former might be better..
 
 
 Doing the compression at the filesystem level is an idea I have wanted
 to try for several years. Hopefully one of the filesystems that
 support this becomes stable soon.
 
 I've been using ZFS with a compression-enabled fileset for a while now
 under FreeBSD.  It is transparent and reliable.  Looking just now, I'm
 not getting great compression ratios for my backup data: 1.09x.  I am
 using the speed-oriented compression algorithm on this fileset, though,
 because the hardware is relatively puny.  (It is a Bacula test bed.)
 Probably I'd get better compression if I enabled one of the GZIP levels.
 
 Isn't the low compression ratio because of the Bacula volume format, which
 messes up the data from the filesystem's point of view? The same thing is a
 problem in implementing (or using FS-based) deduplication in Bacula.
 
 I also use ZFS on FreeBSD.  Perhaps the above is a typo.  I get nearly 2.0
 compression ratio.
 
 $ zfs get compressratio
 NAME  PROPERTY   VALUE  SOURCE
 storage   compressratio  1.89x  -
 storage/compressedcompressratio  1.90x  -
 storage/compressed/bacula compressratio  1.90x  -
 storage/compressed/bacula@2010.10.19  compressratio  1.91x  -
 storage/compressed/bacula@2010.10.20  compressratio  1.91x  -
 storage/compressed/bacula@2010.10.20a compressratio  1.91x  -
 storage/compressed/bacula@2010.10.20b compressratio  1.91x  -
 storage/compressed/bacula@pre.pool.merge  compressratio  1.94x  -
 storage/compressed/home   compressratio  1.00x  -
 storage/pgsql compressratio  1.00x  -


Nope, not a typo:

backup# zfs get compressratio
NAME  PROPERTY   VALUE  SOURCE
backups   compressratio  1.07x  -
backups/baculacompressratio  1.09x  -
backups/hosts compressratio  1.46x  -
backups/san   compressratio  1.06x  -
backups/san@filedrop  compressratio  1.06x  -


The backups/bacula fileset is where my Bacula volumes are stored.  As I 
surmised, I get better compression ratios under GZIP-9 compression:

backup# zfs get compression
NAME  PROPERTY VALUE SOURCE
backups   compression  off   default
backups/baculacompression  onlocal
backups/hosts compression  gzip-9local
backups/san   compression  onlocal
backups/san@filedrop  compression  - -

(Setting compression=on equates to lzjb, which is the most lightweight method 
in CPU terms, but not the best in terms of the compression ratio achieved.)

I will probably switch the other filesets to GZIP compression, as ZFS 
performance has improved significantly under RELENG_8...
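
For what it's worth, the switch is one command per fileset, e.g.:

backup# zfs set compression=gzip-9 backups/san

Note that this only affects blocks written after the change; existing data 
keeps whatever compression it was written with.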

Cheers,

Paul.






Re: [Bacula-users] incremental backups too large

2011-01-13 Thread Paul Mather
On Jan 13, 2011, at 3:44 PM, Lawrence Strydom wrote:

 I understand that something is adding data and logically the backup should 
 grow. What I don't understand is why the entire file has to be backed up if 
 only a few bytes of data have changed. It is mainly outlook.pst files and 
 MSSQL database files that cause these large backups. Some of these files are 
 several GB. 


Because Bacula is file-based (as are most other backup systems), and not, say, 
block-based (like, e.g., Norton Ghost), "many messages in a single file" 
mailbox formats like PST and mbox will tend to play havoc with backups, because 
even adding a single message to your mailbox will cause the whole mailbox to be 
backed up (i.e., the new message plus all the old messages in there).  That's 
one reason why Apple changed over to maildir-like message storage in their mail 
client when they introduced their Time Machine backup system---it is much 
friendlier to backups, as a new message only causes the file containing the new 
message to be backed up in a subsequent incremental backup.

You'll only get close to the sort of behaviour you want (i.e., only the changed 
data in the file is backed up) if and when Bacula gains some measure of 
deduplication support.  (Maybe not even then, depending upon how it decides to 
do it.)


 My understanding of an incremental backup is that only changed data is backed 
 up. It seems that at the moment my Bacula is doing differential backups, i.e., 
 backing up the entire file if the timestamp has changed, even though I have 
 configured it for incremental.


If the last modified timestamp changes since the previous incremental backup 
then Bacula will assume the file has changed and include it in the 
incremental---even if the file data has not changed.

You can enable Accurate backups and check other attributes (such as MD5 
checksums of the file) if you want Bacula to take more care in only backing up 
files that have truly changed.  This will slow down the backup speed, though.
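
For reference, that combination is a Job-level switch plus per-FileSet options, 
along these lines (a sketch: the names are made up, and the accurate option 
letters used here mean m=mtime, c=ctime, s=size, 5=MD5):

Job {
  Name = "client-backup"
  Accurate = yes
  # ...the rest of the usual Job directives...
}

FileSet {
  Name = "client-files"
  Include {
    Options {
      signature = MD5
      accurate = mcs5
    }
    File = /home
  }
}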

Cheers,

Paul.




Re: [Bacula-users] Verify differences: SHA1 sum doesn't match but it should

2010-08-30 Thread Paul Mather
On Aug 30, 2010, at 6:41 AM, Henrik Johansen wrote:

 Like most ZFS related stuff it all sounds (and looks) extremely easy but
 in reality it is not quite so simple.

Yes, but does ZFS make things easier or harder?

Silent data corruption won't go away just because your pool is large. :-)

(But, this is all getting a bit off-topic for Bacula-users.)

Cheers,

Paul.




Re: [Bacula-users] Verify differences: SHA1 sum doesn't match but it should

2010-08-28 Thread Paul Mather
On Aug 28, 2010, at 7:12 AM, Steve Costaras wrote:

 Could be due to a transient error (transmission or wild/torn read at time of 
 calculation).   I see this a lot with integrity checking of files here (50TiB 
 of storage).
 
 The only way to get around this now is to make a known-good sha1/md5 hash of 
 the data (2-3 reads of the file, making sure that they all match and that the 
 file is not corrupted), save that as a baseline, and then, when doing 
 reads/compares, if one fails, do another re-read to see if the first read was 
 in error, and compare that with your baseline. This is one reason why I'm 
 switching to the new generation of SAS drives that have ioecc checks on READS, 
 not just writes, to help cut down on some of this.
 
 Corruption does occur as well, and is more probable the higher the 
 capacity of the drive. Ideally you would have a drive that does 
 ioecc on reads, plus use the T10 PI extensions (DIX/DIF) from drive to 
 controller up to your file system layer.  It won't always prevent it by 
 itself, but it would allow a RAID setup to do some self-healing when 
 a drive reports a non-transient error (i.e., a corrupted sector of data).
 
 However, the T10 PI extensions are only on SAS/FC drives (520/528-byte 
 blocks), and so far as I can tell only the new LSI HBAs support a small 
 subset of this (no hardware RAID controllers that I can find), and I have not 
 seen any support up to the OS/filesystem level.  SATA is not included at all, 
 as the T13 group opted not to include it in the spec.

You could also stick with your current hardware and use a file system that 
emphasises end-to-end data integrity, like ZFS.  ZFS checksums at many levels, 
and has a "don't trust the hardware" mentality.  It can detect silent data 
corruption and automatically self-heal where redundancy permits.

ZFS also supports pool scrubbing---akin to the "patrol read" of many RAID 
controllers---for proactive detection of silent data corruption.  With drive 
capacities becoming very large, the probability of an unrecoverable read error 
during a full-array read becomes very high.  This is significant even in 
redundant storage systems, because a drive failure necessitates a lengthy 
rebuild period during which the storage array lacks any redundancy (in the case 
of RAID-5).  It is for this reason that RAID-6 (ZFS raidz2) is becoming de 
rigueur for many-terabyte arrays using large drives, and, specifically, the 
reason ZFS garnered its triple-parity raidz3 pool type (in ZFS pool version 17).
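
A scrub, incidentally, is a single command, and its progress and results show 
up in the pool status (assuming a pool named tank):

backup# zpool scrub tank
backup# zpool status tank   # shows scrub progress and any errors found/repaired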

I believe Btrfs intends to bring many ZFS features to Linux.

Cheers,

Paul.


Re: [Bacula-users] Dell PV-124T with Ultrium TD4, Hardware or Software compression?

2010-08-14 Thread Paul Mather
On Aug 13, 2010, at 2:23 PM, Dietz Pröpper wrote:

 You:
 On Aug 13, 2010, at 4:10 AM, Dietz Pröpper wrote:
 IMHO there are two problems with hardware compression:
 1. Data mix: The compression algorithms tend to work quite well on
 compressible stuff, but can't cope very well with precompressed stuff,
 i.e. encrypted data or media files. On an old DLT drive (but modern
 hardware should perform in a similar fashion), I get around 7MB/s
 with normal data and around 3MB/s with precompressed stuff. The raw
 tape write rate is somewhere around 4MB/s. And even worse - due to
 the fact that the compression blurs precompressed data, it also takes
 noticeably more tape space.
 
 Those problems affect software compression, too.
 
 Hmm, I can't reproduce them with gzip on the data in question.

Maybe, then, your job is not I/O bound?

If the backup is limited by the speed at which you can write to tape, then it 
logically follows that you will get the observed behaviour you mention above.  
More compressible source data will lead to faster backup rates, because those 
data compress to fewer bits that need to be written to tape.  Conversely, if 
the compression algorithm does not guard against growth on already-compressed 
input, backup speeds will fall below the nominal write speed with such data, 
because the source data result in more bits to be written, which takes longer 
(relative to the amount of source data).

If you are not getting something akin to this observed behaviour then your 
backup is not being limited by tape write speed, but by something else such as 
source input speed or compression speed.
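
To put illustrative numbers on it (using the figures quoted above and assuming 
the drive's raw 4MB/s is the bottleneck): data that compresses 2:1 can be 
accepted at about 2 x 4 = 8MB/s measured at the source, in the region of the 
7MB/s you report for normal data; precompressed data that grows by, say, 25% 
on its way to tape drains at only 4/1.25 = 3.2MB/s, in the region of the 3MB/s 
you report.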

Cheers,

Paul.




Re: [Bacula-users] Dell PV-124T with Ultrium TD4, Hardware or Software compression?

2010-08-13 Thread Paul Mather
On Aug 13, 2010, at 4:10 AM, Dietz Pröpper wrote:

 IMHO there are two problems with hardware compression:
 1. Data mix: The compression algorithms tend to work quite well on 
 compressible stuff, but can't cope very well with precompressed stuff, i.e. 
 encrypted data or media files. On an old DLT drive (but modern hardware 
 should perform in a similar fashion), I get around 7MB/s with normal data 
 and around 3MB/s with precompressed stuff. The raw tape write rate is 
 somewhere around 4MB/s. And even worse - due to the fact that the 
 compression blurs precompressed data, it also takes noticeably more tape 
 space.

Those problems affect software compression, too.

LTO takes steps to ameliorate the effects of pre-compressed/high-entropy input 
data by allowing an output block to be flagged as uncompressed.  So, if input 
data would cause a block to grow under compression, the drive can output the 
original input itself, with the block flagged accordingly, meaning only a very 
tiny percentage increase in tape usage for stretches of high-entropy input.  
Software compression also takes steps to limit growth in output due to 
highly-compressed input.

 2. Vendors: I've seen it more than once that tape vendors managed to break 
 their own compression, which means that a replacement tape drive two years 
 younger than its predecessor can no longer read the compressed tape. 
 Compatibility between vendors, the same.
 So, if the compression algorithm is not defined in the tape drive's 
 standard then it's no good idea to even think about using the tape's 
 hardware compression.

I agree with point 2; however, I believe the trend has been to move towards 
using algorithms defined and documented in published standards, for the very 
reasons you state.

Cheers,

Paul.




Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-09 Thread Paul Mather
On Aug 9, 2010, at 2:55 AM, Henry Yen wrote:

 On Fri, Aug 06, 2010 at 10:48:10AM +0200, Christian Gaul wrote:
 Even when catting to /dev/dsp I use /dev/urandom... Blocking on
 /dev/random happens much too quickly... And when do you really need that
 much randomness?
 
 I get about 40 bytes on a small server before blocking.

On Linux, /dev/random will block if there is insufficient entropy in the pool.  
Unlike /dev/random, /dev/urandom will not block on Linux, but will reuse 
entropy in the pool.  Thus, /dev/random produces higher quality, lower quantity 
random data than /dev/urandom.  For the purposes of compressibility tests, the 
pseudorandom data of /dev/urandom is perfectly fine.  The /dev/random device is 
better used, e.g., for generating cryptographic keys.

 
 Reason 1: the example I gave yields a file size for tempchunk of 512MB,
 not 1MB, as given in your counter-example.  I agree that (at least 
 now-a-days)
 catting 1MB chunks into a 6MB chunk is likely (although not assured)
 to lead to greatly reduced size during later compression, but I disagree
 that catting 512MB chunks into a 3GB chunk is likely to be compressible
 by any general-purpose compressor.
 
 Which is what I meant by "way bigger than the library size of the
 algorithm".  Mostly my information was about pitfalls to look out for when
 testing the speed of your equipment; if you went ahead and cat-ted 3000
 x 1MB, I believe the hardware compression would make something highly
 compressed out of it.
 My guess is it would work for most chunks around half as large as the
 buffer size of the drive (totally guessing).
 
 I think that the tape drive manufacturers don't put large buffer/CPU
 capacity in their drives yet.  I finally did a test on an SDLT2 (160GB)
 drive; admittedly, it's fairly old as tape drives go, but tape technology
 appears to be rather a bit slower than disk technology, at least as far
 as raw capacity is concerned.  I created two files from /dev/urandom;
 one was 1GB, the other a mere 10K.  I then created two identically-sized
 big files by concatenating copies of each of these two chunks (4 copies of
 the first and approx. 400k copies of the second).  Writing them to the SDLT2
 drive using a 60k blocksize, with compression on, yielded uncanny results:
 the writable capacity before hitting EOT was within 0.01%, and the elapsed
 time was within 0.02%.

As I posted here recently, even modern LTO tape drives use only a 1 KB 
(1024-byte) history buffer for their sliding-window compression algorithm.  So, 
any repeated random chunk greater than 1 KB in size will be incompressible by 
LTO tape drives.
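
You can demonstrate the window-size effect with a software compressor (a quick 
sketch; gzip's 32 KB window easily spans a 4 KB repeat distance, where a 1 KB 
history cannot):

$ dd if=/dev/urandom of=chunk bs=4k count=1
$ for i in `seq 1 1024`; do cat chunk; done > repeated
$ gzip -c repeated | wc -c    # far smaller than the 4 MB input

An LTO drive fed the same repeated file would store it essentially 
uncompressed, since each repeat lies outside its 1 KB history.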

 I see there's a reason to almost completely ignore the so-called compressed
 capacity claims by tape drive manufacturers...

By definition, random data are not compressible.  It's my understanding that 
the "compressed capacity" of tapes is based explicitly on an expected 2:1 
compression ratio for source data (and this is usually cited somewhere in the 
small print).  That is a reasonable estimate for text.  Other data may compress 
better or worse.  Already-compressed or encrypted data will be incompressible 
to the tape drive.  In other words, "compressed capacity" is heavily dependent 
on your source data.

Cheers,

Paul.


Re: [Bacula-users] Quantum Scalar i500 slow write speed

2010-08-06 Thread Paul Mather
On Aug 6, 2010, at 4:48 AM, Christian Gaul wrote:

 Am 05.08.2010 21:56, schrieb Henry Yen:
 On Thu, Aug 05, 2010 at 17:17:39PM +0200, Christian Gaul wrote:
 
[[...]]
 
 /dev/urandom seems to measure about 3MB/sec or thereabouts, so creating
 a large uncompressible file could be done sort of like:
 
   dd if=/dev/urandom of=tempchunk count=1048576
   cat tempchunk tempchunk tempchunk tempchunk tempchunk tempchunk > bigfile
 
 
 cat-ting random data a couple of times to make one big random file won't
 really work, unless the size of the chunks is way bigger than the
 library size of the compression algorithm.
 
 Reason 1: the example I gave yields a file size for tempchunk of 512MB,
 not 1MB, as given in your counter-example.  I agree that (at least 
 now-a-days)
 catting 1MB chunks into a 6MB chunk is likely (although not assured)
 to lead to greatly reduced size during later compression, but I disagree
 that catting 512MB chunks into a 3GB chunk is likely to be compressible
 by any general-purpose compressor.
 
 
 Which is what I meant by "way bigger than the library size of the
 algorithm".  Mostly my information was about pitfalls to look out for when
 testing the speed of your equipment; if you went ahead and cat-ted 3000
 x 1MB, I believe the hardware compression would make something highly
 compressed out of it.
 My guess is it would work for most chunks around half as large as the
 buffer size of the drive (totally guessing).

The hardware compression standard used by LTO drives specifies a buffer size of 
1K (1024 bytes).  See 
http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-321.pdf, 
Section 7.3, page 4.

A 1 MB chunk is even quite large for many current compression algorithms 
distributed with common operating systems.  The man page for gzip would seem to 
list the buffer size used by that algorithm as 32 KB.  The buffer size used by 
bzip2 can vary between 100,000 and 900,000 bytes (corresponding to compression 
settings -1 to -9).  The recent LZMA .xz format compressor would appear to vary 
maximum memory usage between 6 MB and 800 MB corresponding to the -0 to -9 
compression presets.  (The LZMA memory requirements are adjusted downwards to 
accommodate a percentage of available RAM where necessary.)

Cheers,

Paul.


[Bacula-users] Bacula 5.0.2 FreeBSD port fails to build during upgrade

2010-07-20 Thread Paul Mather
I'm running FreeBSD 8.1-PRERELEASE (RELENG_8).  Recently, the 
sysutils/bacula-{client,server} ports were updated to 5.0.2.  Unfortunately, 
when updating via portmaster, the bacula-client port updated successfully, but 
bacula-server did not.  It fails to build:

[[...]]
Compiling ua_restore.c
Compiling ua_run.c
Compiling ua_select.c
Compiling ua_server.c
Compiling ua_status.c
Compiling ua_tree.c
Compiling ua_update.c
Compiling vbackup.c
Compiling verify.c
Linking bacula-dir ...
/usr/ports/sysutils/bacula-server/work/bacula-5.0.2/libtool --silent --tag=CXX 
--mode=link /usr/bin/c++  -L/usr/local/lib -L../lib -L../cats -L../findlib -o 
bacula-dir dird.o admin.o authenticate.o autoprune.o backup.o bsr.o catreq.o 
dir_plugins.o dird_conf.o expand.o fd_cmds.o getmsg.o inc_conf.o job.o jobq.o 
migrate.o mountreq.o msgchan.o next_vol.o newvol.o pythondir.o recycle.o 
restore.o run_conf.o scheduler.o ua_acl.o ua_cmds.o ua_dotcmds.o ua_query.o 
ua_input.o ua_label.o ua_output.o ua_prune.o ua_purge.o ua_restore.o ua_run.o 
ua_select.o ua_server.o ua_status.o ua_tree.o ua_update.o vbackup.o verify.o  
-lbacfind -lbacsql -lbacpy -lbaccfg -lbac -lm   -L/usr/local/lib -lpq -lcrypt 
-lpthread  -lintl  -lwrap /usr/local/lib/libintl.so /usr/local/lib/libiconv.so 
-Wl,-rpath -Wl,/usr/local/lib -lssl -lcrypto
/usr/local/lib/libbacsql.so: undefined reference to 
`rwl_writelock(s_rwlock_tag*)'
*** Error code 1

Stop in /usr/ports/sysutils/bacula-server/work/bacula-5.0.2/src/dird.


  == Error in /usr/ports/sysutils/bacula-server/work/bacula-5.0.2/src/dird 
==


*** Error code 1

Stop in /usr/ports/sysutils/bacula-server/work/bacula-5.0.2.
*** Error code 1

Stop in /usr/ports/sysutils/bacula-server.
*** Error code 1

Stop in /usr/ports/sysutils/bacula-server.


It looks to me like the linking step above is wrong: it is picking up the old 
version of the library installed in /usr/local/lib by sysutils/bacula-server 
5.0.0_1.  It shouldn't be including "-L/usr/local/lib" in the invocation of 
libtool.

Anyone who builds the port from scratch will not have a problem, but anyone 
updating via portmaster or portupgrade will run into the problems above.

Cheers,

Paul.




Re: [Bacula-users] Bacula 5.0.2 FreeBSD port fails to build during upgrade

2010-07-20 Thread Paul Mather
On Jul 20, 2010, at 3:10 PM, Dan Langille wrote:

 On 7/20/2010 12:20 PM, Paul Mather wrote:
 I'm running FreeBSD 8.1-PRERELEASE (RELENG_8).  Recently, the 
 sysutils/bacula-{client,server} ports were updated to 5.0.2.  Unfortunately, 
 when updating via portmaster, the bacula-client port updated successfully, 
 but bacula-server did not.  It fails to build:
 
 [[...]]
 Compiling ua_restore.c
 Compiling ua_run.c
 Compiling ua_select.c
 Compiling ua_server.c
 Compiling ua_status.c
 Compiling ua_tree.c
 Compiling ua_update.c
 Compiling vbackup.c
 Compiling verify.c
 Linking bacula-dir ...
 /usr/ports/sysutils/bacula-server/work/bacula-5.0.2/libtool --silent 
 --tag=CXX --mode=link /usr/bin/c++  -L/usr/local/lib -L../lib -L../cats 
 -L../findlib -o bacula-dir dird.o admin.o authenticate.o autoprune.o 
 backup.o bsr.o catreq.o dir_plugins.o dird_conf.o expand.o fd_cmds.o 
 getmsg.o inc_conf.o job.o jobq.o migrate.o mountreq.o msgchan.o next_vol.o 
 newvol.o pythondir.o recycle.o restore.o run_conf.o scheduler.o ua_acl.o 
 ua_cmds.o ua_dotcmds.o ua_query.o ua_input.o ua_label.o ua_output.o 
 ua_prune.o ua_purge.o ua_restore.o ua_run.o ua_select.o ua_server.o 
 ua_status.o ua_tree.o ua_update.o vbackup.o verify.o  -lbacfind -lbacsql 
 -lbacpy -lbaccfg -lbac -lm   -L/usr/local/lib -lpq -lcrypt -lpthread  -lintl 
  -lwrap /usr/local/lib/libintl.so /usr/local/lib/libiconv.so -Wl,-rpath 
 -Wl,/usr/local/lib -lssl -lcrypto
 /usr/local/lib/libbacsql.so: undefined reference to 
 `rwl_writelock(s_rwlock_tag*)'
 *** Error code 1
 
 Stop in /usr/ports/sysutils/bacula-server/work/bacula-5.0.2/src/dird.
 
 
   == Error in 
 /usr/ports/sysutils/bacula-server/work/bacula-5.0.2/src/dird ==
 
 
 *** Error code 1
 
 Stop in /usr/ports/sysutils/bacula-server/work/bacula-5.0.2.
 *** Error code 1
 
 Stop in /usr/ports/sysutils/bacula-server.
 *** Error code 1
 
 Stop in /usr/ports/sysutils/bacula-server.
 
 
 It looks to me that the linking step above is wrong: it is picking up the 
 old version of the library installed in /usr/local/lib by 
 sysutils/bacula-server 5.0.0_1.  It shouldn't be including 
 -L/usr/local/lib in the invocation of libtool.
 
 Anyone who builds the port from scratch will not have a problem, but anyone 
 updating via portmaster or portupgrade will run into the problems above.
 
 Agreed.  I heard about this yesterday, but have not had time to fix it.
 
 We're also going to change the port to default to PostgreSQL instead of 
 SQLite.
 
 Sorry you encountered the problem.

No problems, as the workaround was simple and I wanted to give folks a 
heads-up.  (I guess I should have been more explicit, but the workaround is 
simply to pkg_delete the bacula-server port and reinstall it, rather than 
trying to upgrade via portmaster/portupgrade.  Deleting the port won't remove 
any local configuration files, for those who might be worried.)
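
For the record, the workaround amounts to (with the package name taken from 
pkg_info):

backup# pkg_delete bacula-server-5.0.0_1
backup# cd /usr/ports/sysutils/bacula-server && make install clean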

Good to hear that PostgreSQL will become the default back-end database.  Nice 
work!

Cheers,

Paul.





Re: [Bacula-users] Bconsole not properly installed

2010-06-30 Thread Paul Mather
On Jun 30, 2010, at 9:13 AM, Albin Vega wrote:

 Hello
  
 First let me say that I haven't been using FreeBSD and Bacula before, so it's 
 all a bit new to me, and I might make some beginner's mistakes.
  
 I have installed Bacula server 5.0.0.1 on a FreeBSD 8 platform. I have 
 followed the instructions on http://www.freebsddiary.org/bacula.php and have 
 run all the scripts and have the Bacula pids running. But now I have run into 
 some trouble. When I try to start bconsole in the terminal window I get the 
 message "Command not found".

Be careful about following that guide: it is somewhat out of date.  (For 
example, the sysutils/bacula port no longer exists, as I'm sure you discovered.)

 I then open Gnome and run the bconsole command in a terminal window. I get 
 the message "Bconsole not properly installed".
  
 I have located the bconsole file in two places:
  
 1.
 /usr/local/share/bacula/bconsole
  
 This script looks like this:
 bacupserver# more /usr/local/share/bacula/bconsole
 #!/bin/sh
 which dirname > /dev/null
 # does dirname exit?
 if [ $? = 0 ] ; then
   cwd=`dirname $0`
   if [ x$cwd = x. ]; then
  cwd=`pwd`
   fi
   if [ x$cwd = x/usr/local/sbin ] ; then
   echo "bconsole not properly installed."
  exit 1
   fi
 fi
 if [ x/usr/local/sbin = x/usr/local/etc ]; then
    echo "bconsole not properly installed."
exit 1
 fi
 if [ $# = 1 ] ; then
    echo "doing bconsole $1.conf"
/usr/local/sbin/bconsole -c $1.conf
 else
/usr/local/sbin/bconsole -c /usr/local/etc/bconsole.conf
 fi
 Running this script returns the message:
 "bconsole not properly installed."
  
 2. 
 /usr/ports/sysutils/bacula-server/work/bacula-5.0.0/scripts/bconsole
  
 This script looks like this:
 bacupserver# more 
 /usr/ports/sysutils/bacula-server/work/bacula-5.0.0/scripts/bconsole
 #!/bin/sh
 which dirname > /dev/null
 # does dirname exit?
 if [ $? = 0 ] ; then
   cwd=`dirname $0`
   if [ x$cwd = x. ]; then
  cwd=`pwd`
   fi
   if [ x$cwd = x/sbin ] ; then
   echo "bconsole not properly installed."
  exit 1
   fi
 fi
 if [ x/sbin = x/etc/bacula ]; then
    echo "bconsole not properly installed."
exit 1
 fi
 if [ $# = 1 ] ; then
    echo "doing bconsole $1.conf"
/sbin/bconsole -c $1.conf
 else
/sbin/bconsole -c /etc/bacula/bconsole.conf
 fi
  
 Running this script returns the message:
 /usr/ports/sysutils/bacula-server/work/bacula-5.0.0/scripts/bconsole: 
 /sbin/bconsole: not found
  
  
 I then located the bconsole.conf file:
 /usr/ports/sysutils/bacula-server/work/bacula-5.0.0/src/console/bconsole.conf
 Tried to manually move it to /etc/bacula, but there is no /etc/bacula 
 directory... 
  
 I am running out of ideas on what to do here. Anybody have any ideas on what 
 to do? I would be very grateful if someone would point me in the right 
 direction.

Did you install the sysutils/bacula-server port?  (I.e., did you run "make 
install" in the /usr/ports/sysutils/bacula-server directory to build and 
install it?)  If you did, it should have installed bconsole for you under 
/usr/local.  In my case, it shows it to be installed under /usr/local/sbin:

backup# which bconsole
/usr/local/sbin/bconsole
backup# file /usr/local/sbin/bconsole
/usr/local/sbin/bconsole: ELF 32-bit LSB executable, Intel 80386, version 1 
(FreeBSD), dynamically linked (uses shared libs), for FreeBSD 8.0 (800505), not 
stripped

You shouldn't be trying to run the bconsole shell script; run 
/usr/local/sbin/bconsole directly.

Under FreeBSD, ports avoid putting configuration directories in the base 
operating system directories, to avoid polluting the base system.  The ports 
system tries to keep them separate, usually under /usr/local.  So, Bacula 
installed from ports on FreeBSD will not use /etc/bacula, it uses 
/usr/local/etc.  The bconsole installed knows to look in /usr/local/etc for 
configuration files.

Note, when you installed the Bacula server port, there will probably have been 
some messages about adding entries to /etc/rc.conf to ensure the Bacula daemons 
are run at system startup.  Also, there will have been information indicating 
that the configuration files for the daemons will have been installed in 
/usr/local/etc and will need to be edited to suit your local setup.

If you want to know what files were installed by the Bacula server port, you 
can use the following command: "pkg_info -L bacula-server-5.0.0_1".

Finally, Dan Langille is listed as the current maintainer of the 
sysutils/bacula-server port.  I believe Dan is subscribed to this list and 
sometimes posts.

Cheers,

Paul.



Re: [Bacula-users] Problem with connection in Bacula-Bat

2010-06-21 Thread Paul Mather
On Jun 21, 2010, at 6:43 AM, Cato Myhrhagen wrote:

 I have some problems getting Bacula-Bat working. Here is what I have done so 
 far:
  
 1. Installed FreeBSD 8.0 rel
 2. From the ports catalogue I have installed Gnome Lite
 3. Upgraded all the ports with CVSup
 4. Then I installed Bacula 5.0.0.1 rel (with MySQL), also from the ports 
 catalogue.

By this, do you mean the sysutils/bacula-server port?

 5. Installed BAT 5.0.0.1
  
 I then tried to start Bat by opening Gnome, starting a terminal window and 
 typing bat, but got an error message (I don't have it now because it stopped 
 giving me this message). I also noticed that the bacula-dir process stopped 
 when I tried to start Bat. Then I checked if MySQL was running, but it wasn't 
 even installed (I thought it would be installed together with Bacula, but no).

Only the MySQL client libraries are installed when you install Bacula.  The 
reason behind this is that you might be using a MySQL database on a different 
server to store your database.  If you are going to run a local MySQL server 
then you need to install an appropriate databases/mysql??-server port and 
configure that so it runs.

 Therefore I installed MySQL-server 5.0.90, started it and tried to run BAT 
 again. The program starts but doesn't seem to work. It continues trying to 
 connect to the database, I think. In the lower left corner of the BAT GUI it 
 continues to display the following: "Connecting to Director Localhost:9010", 
 and then it says "Connection fails".

After you installed the MySQL server did you run the scripts to create the 
databases and tables necessary for Bacula to run?  These are in 
/usr/local/share/bacula.  You need to run three scripts: 
create_bacula_database, make_bacula_tables, and grant_bacula_privileges before 
starting up Bacula for the first time.  The reason you are getting the error 
message about not being able to connect to the Director is that it probably 
died when it started up after not finding the correct tables in the database.

Note, also, you need to enable MySQL in /etc/rc.conf if you are using it as 
your back-end database, so that it runs at startup.  If it is not running, the 
Director will not be able to connect to the database.
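
Concretely, the sequence is something like this (a sketch; the scripts need 
rights to create MySQL databases):

backup# echo 'mysql_enable="YES"' >> /etc/rc.conf
backup# /usr/local/etc/rc.d/mysql-server start
backup# cd /usr/local/share/bacula
backup# ./create_bacula_database
backup# ./make_bacula_tables
backup# ./grant_bacula_privileges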

Note, I use PostgreSQL as my back-end database.  I can easily check it is 
running via its startup script:

backup# /usr/local/etc/rc.d/postgresql status
pg_ctl: server is running (PID: 1180)
/usr/local/bin/postgres -D /usr/local/pgsql/data

(There should be something similar for MySQL.)  Similarly, for the Bacula 
Director:

backup# /usr/local/etc/rc.d/bacula-dir status
bacula_dir is running as pid 1280.

Cheers,

Paul.




Re: [Bacula-users] Problem installing Bacula-BAT

2010-06-17 Thread Paul Mather
On Jun 17, 2010, at 7:24 AM, Cato Myhrhagen wrote:

 Hello
  
 First let me say that I am new to FreeBSD and Bacula, so my question might be 
 a bit trivial. Nevertheless, I am having big problems installing Bacula BAT 
 on my FreeBSD server. Let me explain what I have done so far:
  
 1. Installed FreeBSD 8.0 rel
 2. From the ports catalogue I have installed Xorg
 3. Then I installed Bacula 3.0.2 rel (with MySQL), also from the ports 
 catalogue.
  
 I then tried to install bacula-bat, but here I ran into trouble. First the 
 installation starts as it should and then goes on for quite a while. Then 
 suddenly it stops with the following message:
  
 install: 
 /usr/ports/sysutils/bacula-bat/work/bacula-3.0.2/src/qt-console/.libs/bat: No 
 such file or directory
 *** Error code 71
 Stop in /usr/ports/sysutils/bacula-bat.
 *** Error code 1
 Stop in /usr/ports/sysutils/bacula-bat.

Is your ports tree up to date?  The current version of Bacula used in the 
sysutils/bacula-bat port is 5.0.0.  The fact that it is using 3.0.2 for you 
suggests to me that your ports tree is at least partially outdated.

You can use portsnap fetch update to bring your ports tree up to date (assuming 
you are using portsnap, which ships with FreeBSD 8.0-RELEASE for exactly this 
purpose).
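A minimal sequence, assuming your ports tree has been extracted with portsnap 
at least once before:

backup# portsnap fetch update
backup# cd /usr/ports/sysutils/bacula-bat
backup# make install clean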

Cheers,

Paul.




[Bacula-users] bscan-recovered catalogue not usable :-(

2010-06-02 Thread Paul Mather
About three months ago I began using Bacula (5.0.0 on FreeBSD 8-STABLE), 
initially backing up to disk, but with a view to backing up to tape in the near 
future.  To keep things simple, I initially used Sqlite for the catalogue.  
About a week ago, I decided to move to using PostgreSQL for the catalogue, as 
the production system I intend to deploy will be backing up many more files 
than the test deployment, and so I wanted to get some experience of Bacula + 
PostgreSQL.

That's when it all went to hell in a hand basket. :-)

I rebuilt Bacula with PostgreSQL as the catalogue back-end.  I could not get 
the sqlite2pgsql script to migrate my Sqlite catalogue successfully.  I then 
tried to recover the catalogue into PostgreSQL from the volumes via bscan.  
This was more successful, but I am still not left with working backups.

When I run a backup job, I get an e-mail intervention notice informing me it 
Cannot find any appendable volumes.  When I listed the volumes for the job in 
question, bscan had left the one and only volume in the Archive state.  So, I 
changed this to Append using update volume in bconsole.  Unfortunately, the 
status quickly reverted to Error when the job tried to run.  I believe this is 
due to a mismatch between the volbytes recorded for the volume and the size of 
the volume on the file system:

*list media pool=File
+---------+------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| mediaid | volumename | volstatus | enabled | volbytes      | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten         |
+---------+------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|       1 | TestVolume | Error     |       1 | 1,263,847,514 |        0 |   31,536,000 |       1 |    0 |         0 | File      | 2010-05-26 23:15:39 |
+---------+------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------------------+

vs.

backup# ls -al /backups/bacula/TestVolume 
-rw-r-  1 bacula  bacula  1264553900 May 26 23:15 /backups/bacula/TestVolume
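For reference, the status change described above was done in bconsole along 
these lines (bconsole also walks you through an interactive menu if you just 
type update):

*update volume=TestVolume volstatus=Append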

I would like to continue to use this volume for backups, as it is well under 
the 5 GB maximum volume bytes set in the pool definition.  Is this an 
unrealistic expectation for a bscan-recovered catalogue, or is there some 
simple way to get this volume recognised as appendable again?  (Is the 
expectation after bscan to start with a new volume?)

Is Bacula expected to act gracefully in the face of the loss/corruption of the 
catalogue?

Cheers,

Paul.





[Bacula-users] Sqlite3 to PostgreSQL 8.4 catalogue migration

2010-05-28 Thread Paul Mather
Does anyone have a working script to migrate a Sqlite3 catalogue database to 
PostgreSQL 8.4.4?

I'm using a very recent FreeBSD 8-STABLE and the sqlite2pgsql script in the 
examples/database directory of the source code doesn't work for me.  Has anyone 
got this to work successfully under a current Bacula installation on FreeBSD?

I'm using these versions of the ports:

bacula-server-5.0.0
postgresql-client-8.4.4
postgresql-server-8.4.4
sqlite3-3.6.23.1_1

My current Bacula server is backing up to disk and, being a relatively recent 
install, none of the volumes have been recycled.  So, as a way of getting a 
working PostgreSQL catalogue in lieu of a non-working sqlite2pgsql script, I 
thought I would run bscan -s to recover catalogue information from the volumes 
(having first run create_postgresql_database, make_postgresql_tables, and 
grant_postgresql_privileges).  Will this recreate the catalogue entirely, or 
will I be missing something other than log data?
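The sequence I have in mind is roughly this (script location as installed by 
the FreeBSD port; the volume name and archive directory are just examples from 
my disk-based setup):

backup# cd /usr/local/share/bacula
backup# ./create_postgresql_database
backup# ./make_postgresql_tables
backup# ./grant_postgresql_privileges
backup# bscan -v -s -c /usr/local/etc/bacula-sd.conf -V TestVolume /backups/bacula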

Cheers,

Paul.


[Bacula-users] Quantum SuperLoader 3 under Bacula on FreeBSD 8

2010-05-18 Thread Paul Mather
I am currently assembling a quote for an LTO-4 tape backup system.  So far, I 
am looking at using a 16-slot Quantum SuperLoader 3 with LTO-4HH drive as the 
tape unit.  Married to this will be a server to act as the backup server that 
will drive the tape unit using Bacula to manage backups.  The server will be a 
quad core X3440 system with 4 GB of RAM and four 1 TB SATA 7200 rpm hard drives 
in a case that has room for eight hot-swap drives.  I plan on using FreeBSD 8 
on the system, using ZFS to raidz the drives together to provide spool space 
for Bacula.  I will be using an Areca ARC-1300-4X PCIe SAS card to interface 
with the tape drive.
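As a sketch, the pool layout I have in mind is something like this (device 
names assumed; they will differ with the controller):

backup# zpool create spool raidz da0 da1 da2 da3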

My main question is this: is the Quantum SuperLoader 3 LTO-4 tape drive 
supported by Bacula 5 on FreeBSD?  In particular, is the autoloader fully 
supported?  The Bacula documentation indicates the SuperLoader works fully 
under Bacula, though not explicitly whether under FreeBSD.

The backup server will serve a GigE network cluster of perhaps a dozen machines 
with over 6 TB of storage, most of which is on the cluster's NFS server.  Does 
anyone have good advice on sizing the spool/holding/disk pool for a Bacula 
server?  Is it imperative to have enough disk space to hold a full backup 
(i.e., 6 TB in this case), or is it sufficient to have enough space to maintain 
streaming to tape?  (I don't have much experience of Bacula, having used it 
only to back up to disk.)  In other words, do I need more 1 TB drives in my 
backup server?
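For context, the spooling knobs I expect to be setting are roughly these (paths 
and sizes are placeholders, not recommendations):

# In the Job resource (bacula-dir.conf):
  Spool Data = yes

# In the Device resource (bacula-sd.conf):
  Spool Directory = /backups/spool
  Maximum Spool Size = 500g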

Finally, is 4 GB of RAM sufficient for good performance with ZFS?  Will ZFS on 
FreeBSD be able to maintain full streaming speeds to tape, given the various 
reports of I/O stalls under ZFS reported recently?

Thanks in advance for any advice or information.

Cheers,

Paul.


Re: [Bacula-users] Quantum SuperLoader 3 under Bacula on FreeBSD 8

2010-05-18 Thread Paul Mather
On May 18, 2010, at 9:35 AM, Robert Hartzell wrote:

 On Mon, 2010-05-17 at 15:08 -0400, Paul Mather wrote:
 I am currently assembling a quote for an LTO-4 tape backup system.  So far, 
 I am looking at using a 16-slot Quantum SuperLoader 3 with LTO-4HH drive as 
 the tape unit.  Married to this will be a server to act as the backup server 
 that will drive the tape unit using Bacula to manage backups.  The server 
 will be a quad core X3440 system with 4 GB of RAM and four 1 TB SATA 7200 
 rpm hard drives in a case that has room for eight hot-swap drives.  I plan 
 on using FreeBSD 8 on the system, using ZFS to raidz the drives together to 
 provide spool space for Bacula.  I will be using an Areca ARC-1300-4X PCIe 
 SAS card to interface with the tape drive.
 
 My main question is this: is the Quantum SuperLoader 3 LTO-4 tape drive 
 supported by Bacula 5 on FreeBSD?  In particular, is the autoloader fully 
 supported?  The Bacula documentation indicates the SuperLoader works fully 
 under Bacula, though not explicitly whether under FreeBSD.
 
 The backup server will serve a GigE network cluster of perhaps a dozen 
 machines with over 6 TB of storage, most of which is on the cluster's NFS 
 server.  Does anyone have good advice on sizing the spool/holding/disk pool 
 for a Bacula server?  Is it imperative to have enough disk space to hold a 
 full backup (i.e., 6 TB in this case), or is it sufficient to have enough 
 space to maintain streaming to tape?  (I don't have much experience of 
 Bacula, having used it only to back up to disk.)  In other words, do I need 
 more 1 TB drives in my backup server?
 
 Finally, is 4 GB of RAM sufficient for good performance with ZFS?  Will ZFS 
 on FreeBSD be able to maintain full streaming speeds to tape, given the 
 various reports of I/O stalls under ZFS reported recently?
 
 ZFS loves RAM. More RAM = better performance. I'm not at all familiar 
 with ZFS performance on FreeBSD, but ZFS version 13, which is used on FreeBSD 
 8, is pretty old. ZFS is currently at version 22.

That must be on OpenSolaris.  My current Solaris 10 system reports using pool 
version 15.  FreeBSD 8-STABLE is now up to pool version 14, and I believe 
porting work is underway to bring it up several more pool versions.

 I/O stalls? Is that a freebsd issue?

Some folks have reported short, bursty I/O stalls during very intense write 
workloads.  I've seen it reported on FreeBSD, but in those threads there has 
also been mention of it happening on, e.g., OpenSolaris.  The general 
workaround advice on FreeBSD at the moment is to reduce the vfs.zfs.txg.timeout 
kernel tunable from its default of 30 seconds.  IIRC, this problem may also 
only affect systems with large ARC sizes.
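For example, the workaround amounts to a line in /boot/loader.conf (the value 5 
is just a figure I have seen suggested in those threads, not a tested 
recommendation):

vfs.zfs.txg.timeout="5"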

Thanks for the RAM advice; I will try and bump it up.

Cheers,

Paul.

