[Bacula-users] Building CentOS 5 RPMs for Bacula 5.0.0-1

2010-02-10 Thread Burn

Hello. I'm having trouble building the 5.0.0-1 SRPM on CentOS 5.4.
rpmbuild --rebuild bacula-5.0.0-1.src.rpm --define 'build_centos5 1' --define 
'build_postgresql 1'
results in
Checking for unpackaged file(s): /usr/lib/rpm/check-files /var/tmp/bacula-root


RPM build errors:
    InstallSourcePackage: Header V3 DSA signature: NOKEY, key ID 10a792ad
    user sbarn does not exist - using root
    user sbarn does not exist - using root
    user sbarn does not exist - using root
    user sbarn does not exist - using root
    user sbarn does not exist - using root
    File not found: /var/tmp/bacula-root/etc/bacula/bconsole.conf
    File not found: /var/tmp/bacula-root/usr/sbin/bconsole
    File not found: /var/tmp/bacula-root/etc/bacula/bconsole.conf
    File not found: /var/tmp/bacula-root/usr/sbin/bconsole

How do I resolve it?






[Bacula-users] Bacula and Fedora 12

2010-02-10 Thread rpenoyer

Here you go... the issue lies in the version of the OpenSSL libraries, so here 
are your choices (as I found out):

a) ./configure ... --with-openssl=no

b) do not run mysql

Personally, I am running on a closed-in network and am more comfortable with 
mysql, so I went the no-OpenSSL route. But either path works.
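
As a minimal sketch of option (a) - every other configure flag is elided
here, so adjust to your own layout:

  ./configure --with-mysql --with-openssl=no   # option (a): MySQL, no OpenSSL
  make
  make install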






[Bacula-users] Restore Dir/Files recursively

2010-02-10 Thread Ken Barclay
Hi All,

 

Help needed with Restore!   In Bconsole, restore, select #3, enter
jobid(s), now in 'file selection mode'

 

cwd is: /

$ help

  Command    Description
  =======    ===========
  add        add dir/file to be restored recursively, wildcards allowed
  mark       mark dir/file to be restored recursively, wildcards allowed

 

I read that as meaning that I can enter a path with wildcards and the
directories and files will be restored recursively, but 

 

$ add /public/share/120 SALES DIVISION/*.*

No files marked.

$ add /public/share/120 SALES DIVISION/

No files marked.

 

$ mark /public/share/120 SALES DIVISION

No files marked.

$ mark /public/share/120 SALES DIVISION/*.*

No files marked.

 

Am I misunderstanding something here?  If I continue to cd further into
the path, the files are there (still in the catalog).

What I need is to restore every file and directory below /120 SALES
DIVISION/.  Is that possible?

 

Thanks in advance,

Ken Barclay



Re: [Bacula-users] Restore Dir/Files recursively

2010-02-10 Thread Gavin McCullagh
Hi,

On Wed, 10 Feb 2010, Ken Barclay wrote:

 $ add /public/share/120 SALES DIVISION/*.*
 No files marked.
 
 $ add /public/share/120 SALES DIVISION/
 No files marked.

The command prompt you get from the bacula console is a little bit
primitive.  Off the top of my head, I'd suggest you try

mark /public/share/120 SALES DIVISION
add /public/share/120 SALES DIVISION

Gavin




Re: [Bacula-users] Restore Dir/Files recursively

2010-02-10 Thread Ken Barclay
-Original Message-
From: Gavin McCullagh [mailto:gavin.mccull...@gcd.ie]
Sent: Wednesday, 10 February 2010 4:54 PM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Restore Dir/Files recursively

Hi,

On Wed, 10 Feb 2010, Ken Barclay wrote:

 $ add /public/share/120 SALES DIVISION/*.*
 No files marked.

 $ add /public/share/120 SALES DIVISION/
 No files marked.

The command prompt you get from the bacula console is a little bit
primitive.  Off the top of my head, I'd suggest you try

  mark /public/share/120 SALES DIVISION
  add /public/share/120 SALES DIVISION

Gavin


Thanks Gavin, but

$ mark /public/share/120 SALES DIVISION
No files marked.

$ add /public/share/120 SALES DIVISION/
No files marked.

Any other ideas?

Ken





Re: [Bacula-users] Restore Dir/Files recursively

2010-02-10 Thread Carlo Filippetto
Otherwise, you can try entering the subdirectory, running 'ls' or 'dir', and
marking the files there.

CIAO

---
Carlo Filippetto



2010/2/10 Gavin McCullagh gavin.mccull...@gcd.ie:
 Hi,

 On Wed, 10 Feb 2010, Ken Barclay wrote:

 $ add /public/share/120 SALES DIVISION/*.*
 No files marked.

 $ add /public/share/120 SALES DIVISION/
 No files marked.

 The command prompt you get from the bacula console is a little bit
 primitive.  Off the top of my head, I'd suggest you try

        mark /public/share/120 SALES DIVISION
        add /public/share/120 SALES DIVISION

 Gavin






Re: [Bacula-users] Building CentOS 5 RPMs for Bacula 5.0.0-1

2010-02-10 Thread Carlo Filippetto
I don't know the answer, but why don't you try building it from the source code?

CIAO

---
Carlo Filippetto


2010/2/9 Burn bacula-fo...@backupcentral.com:

 Hello. I'm having trouble building the 5.0.0-1 SRPM on CentOS 5.4.
 rpmbuild --rebuild bacula-5.0.0-1.src.rpm --define 'build_centos5 1' --define 
 'build_postgresql 1'
 results in
 Checking for unpackaged file(s): /usr/lib/rpm/check-files /var/tmp/bacula-root


 RPM build errors:
     InstallSourcePackage: Header V3 DSA signature: NOKEY, key ID 10a792ad
     user sbarn does not exist - using root
     user sbarn does not exist - using root
     user sbarn does not exist - using root
     user sbarn does not exist - using root
     user sbarn does not exist - using root
     File not found: /var/tmp/bacula-root/etc/bacula/bconsole.conf
     File not found: /var/tmp/bacula-root/usr/sbin/bconsole
     File not found: /var/tmp/bacula-root/etc/bacula/bconsole.conf
     File not found: /var/tmp/bacula-root/usr/sbin/bconsole

 How do I resolve it?








Re: [Bacula-users] Restore Dir/Files recursively

2010-02-10 Thread Gavin McCullagh
On Wed, 10 Feb 2010, Ken Barclay wrote:

 Thanks Gavin, but
 
 $ mark /public/share/120 SALES DIVISION
 No files marked.
 
 $ add /public/share/120 SALES DIVISION/
 No files marked.

Sorry, I wasn't thinking.  I'd suggest you do:

cd public
cd share
mark 120 SALES DIVISION

Gavin




Re: [Bacula-users] [Bacula-devel] Feature / Project request - support for file-system / volume / san dedup for file devices

2010-02-10 Thread Marc Schiffbauer
* Kern Sibbald wrote on 10.02.10 at 08:56:
 Hello,
 

Hi Kern,

very interesting to hear that more dedup features are in the queue! Great
news!

[...]

 With all the above, I do not think that it is yet time to discuss changing
 the Bacula Volume format (though a new (second) Volume format is one of the
 options I am considering for item 3.)

I think that to make filesystem-based dedup actually work, only a new
Volume format would do the job, with blocks aligned and metadata
separated from the bulk data. In that case Bacula would not even have
to care about dedup.

-Marc
-- 
:: WSS://WebServices Schiffbauer ::
Domains :: DNS ::  eMail :: Hosting



[Bacula-users] Bacula Compression - other than GZIP

2010-02-10 Thread Gilberto Nunes
Hi folks...

I need to know if the Compression flag on a FileSet must be gzip, or whether I
can use another compression program...

I want to use bzip2 to compress my files, because I think bzip2 is more
efficient...

Thanks for any help...

Regards


Gilberto Nunes Ferreira 
TI
Selbetti Gestão de Documentos
Telefone: +55 (47) 3441-6004
Celular: +55 (47) 8861-6672 






[Bacula-users] storagetek timberwolf STK9714

2010-02-10 Thread Dan Langille
FYI, I have just heard about Bacula being used with a Storagetek 
Timberwolf STK9714

http://www.bigkey.com/pic/Storage%20Tec/8097_9714%20Tape%20Library.jpg

** 100 tape slots
** 4 x DLT7000 (35/70G)

I heard about it from a person I know in a Linux User Group.



[Bacula-users] Restore all files from a partial, failed full backup?

2010-02-10 Thread Richard Hartmann
Hi all,

a machine of mine died during a full backup. I did restore it from
previous backups, but I want to get at the data in said partial
backup, as it is obviously the newest.

bconsole apparently does not allow me to restore from a partial backup,
so I am looking for pointers on how to manually extract as much data as
possible and merge everything that remains by hand. As we are talking
Maildirs, this is relatively trivial to do once I get ahold of said
data. Getting at it seems to be more complicated.
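
One low-level route I can think of is Bacula's standalone volume tools,
sketched here on the assumption that the volume holding the partial job is
known (the volume and device names are made up; bls/bextract read a volume
directly and bypass the catalog):

  bls -j -V Full-0042 /dev/nst0                 # list job/session records on the volume
  bextract -V Full-0042 /dev/nst0 /tmp/partial  # extract everything into a directory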



Thanks,
Richard



Re: [Bacula-users] [Bacula-devel] Feature / Project request - support for file-system / volume / san dedup for file devices

2010-02-10 Thread Kern Sibbald
On Wednesday 10 February 2010 12:08:57 Marc Schiffbauer wrote:
 * Kern Sibbald wrote on 10.02.10 at 08:56:
  Hello,

 Hi Kern,

 very interesting to hear that more dedup features are in the queue! Great
 news!

 [...]

  With all the above, I do not think that it is yet time to discuss
  changing the Bacula Volume format (though a new (second) Volume format is
  one of the options I am considering for item 3.)

 I think that to make filesystem-based dedup actually work, only a new
 Volume format would do the job. 

When I talked about filesystem deduplication, I meant deduplication based on 
raw device block changes, which is generally done by some program rather than 
the OS.  I was not referring to automatic deduplication that is implemented 
in an OS filesystem.  I probably need to rethink my terminology to ensure 
that this is clear in the future.

 With blocks aligned and metadata separated from the bulk data.
 In that case Bacula would not even have to care about dedup.

We are not planning to change Bacula's Volume format in the near future; it 
would destabilize the project too much, and it is very well adapted to 
keeping backup data.  However, since it is Open Source, you are free to 
experiment.  We would be interested to hear what you come up with.

Best regards,

Kern




Re: [Bacula-users] Restore all files from a partial, failed full backup?

2010-02-10 Thread Richard Hartmann
Got it to work via bconsole. Thanks, though :)


Richard



Re: [Bacula-users] Bacula Compression - other than GZIP

2010-02-10 Thread Anatoly Pugachev
On 10.02.2010 / 09:05:19 -0200, Gilberto Nunes wrote:
 Hi folks...
 
 I need to know if the Compression flag on a FileSet must be gzip, or whether
 I can use another compression program...
 
 I want to use bzip2 to compress my files, because I think bzip2 is more
 efficient...

or even Parallel BZIP2, see http://compression.ca/pbzip2/
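
(Note that Bacula's FileSet compression won't invoke an external binary, so a
parallel compressor like pbzip2 only helps where you compress data yourself,
e.g. a dump staged before the backup; the paths below are made up:)

  tar -cf - /srv/data | pbzip2 -p4 -c > /backup/data.tar.bz2   # -p4: use 4 CPUs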

Thanks.





Re: [Bacula-users] Bacula Compression - other than GZIP

2010-02-10 Thread Phil Stracchino
On 02/10/10 06:05, Gilberto Nunes wrote:
 Hi folks...
 
 I need to know if the Compression flag on a FileSet must be gzip, or whether
 I can use another compression program...
 
 I want to use bzip2 to compress my files, because I think bzip2 is more
 efficient...

It is true that bzip2 is more efficient than gzip, but it is also slower
and very much more CPU-intensive.  These are things to keep in mind.
gzip may not be the best compression out there, but it is fast.
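
To answer the original question directly: as far as I know, the FileSet
software compression only accepts GZIP (optionally with a level). A minimal
sketch, with hypothetical names:

  FileSet {
    Name = "ExampleSet"            # hypothetical
    Include {
      Options {
        signature = MD5
        compression = GZIP6        # GZIP1 (fastest) ... GZIP9 (smallest); plain GZIP = GZIP6
      }
      File = /home
    }
  }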


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Bacula Compression - other than GZIP

2010-02-10 Thread Petar Bogdanovic
On Wed, Feb 10, 2010 at 09:05:19AM -0200, Gilberto Nunes wrote:
 
 (...) gzip or I can use another compression program...

No.


 I want to use bzip2 to compress my files, because I think bzip2 is more
 efficient...

Really?

   $ du -m /tmp/foo.iso
625 /tmp/foo.iso
   $ gzip -c /tmp/foo.iso | dd bs=64K >/dev/null
0+34388 records in
0+34388 records out
563405802 bytes (563 MB) copied, 64.9428 s, 8.7 MB/s
   $ bzip2 -c /tmp/foo.iso | dd bs=64K >/dev/null
0+137488 records in
0+137488 records out
563150276 bytes (563 MB) copied, 445.201 s, 1.3 MB/s

255526 bytes less while six times slower..

Petar Bogdanovic



Re: [Bacula-users] Bacula Compression - other than GZIP

2010-02-10 Thread Steve Polyack
On 2/10/2010 8:16 AM, Petar Bogdanovic wrote:
 I want to use bzip2 to compress my files, because I think bzip2 is more
 efficient...
  
 Really?

 $ du -m /tmp/foo.iso
   625 /tmp/foo.iso
 $ gzip -c /tmp/foo.iso | dd bs=64K >/dev/null
   0+34388 records in
   0+34388 records out
   563405802 bytes (563 MB) copied, 64.9428 s, 8.7 MB/s
 $ bzip2 -c /tmp/foo.iso | dd bs=64K >/dev/null
   0+137488 records in
   0+137488 records out
   563150276 bytes (563 MB) copied, 445.201 s, 1.3 MB/s

 255526 bytes less while six times slower..

   Petar Bogdanovic


This is extremely dependent on the contents of foo.iso.  I don't think 
it's a good test because you are only seeing 10% compression either way.  
There is a good chance that much of the data within your ISO is already 
compressed.  When using data which is typically more compressible (text 
and other data that is not already compressed), the resulting size of 
something compressed with bzip2 can be much smaller than when compressed 
using gzip.  It's true that it is much slower, but if he's talking about 
it being more efficient in terms of disk space used, then he is correct.



Re: [Bacula-users] Bacula Compression - other than GZIP

2010-02-10 Thread Sean M Clark
On 2010Feb10 8:50 AM, Steve Polyack wrote:
 On 2/10/2010 8:16 AM, Petar Bogdanovic wrote:
 I want to use bzip2 to compress my files, because I think bzip2 is more
 efficient...
  
 Really?
[...]
 255526 bytes less while six times slower..

 This is extremely dependent on the contents of foo.iso.  I don't think 
 it's a good test because you are only seeing 10% compression either way.  
 There is a good chance that much of the data within your ISO is already 
 compressed.  When using data which is typically more compressible (text 
 and other data that is not already compressed), the resulting size of 
 something compressed with bzip2 can be much smaller than when compressed 
 using gzip.  It's true that it is much slower, but if he's talking about 
 it being more efficient in terms of disk space used, then he is correct.

xz/lzma is another consideration.  At moderate compression levels, lzma
seems to be about the same or slightly faster than bzip2 with a little
better compression.  At lower compression levels it seems like it's
about as fast as gzip while compressing noticeably farther - at least
in the small amount of testing I've done so far with the xz
implementation of lzma compression.

(The small amount of testing I've done so far suggests to me that xz
with a compression level of 1 runs about as fast as gzip -4 with
compression at or better than gzip -7, approaching bzip2 for some types of
files.  Cranking up to xz -6 or -7 runs a bit faster than bzip2 default
but tends to give better compression.)
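
A quick way to reproduce that kind of comparison on your own data (the file
name is arbitrary; sizes come out in bytes):

  for c in "gzip -4" "gzip -7" "bzip2" "xz -1" "xz -6"; do
      echo -n "$c: "; $c -c sample.dat | wc -c    # compressed size per tool
  done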



Re: [Bacula-users] Bacula Compression - other than GZIP

2010-02-10 Thread Steve Polyack
On 2/10/2010 10:36 AM, Sean M Clark wrote:
 On 2010Feb10 8:50 AM, Steve Polyack wrote:

 On 2/10/2010 8:16 AM, Petar Bogdanovic wrote:
  
 I want to use bzip2 to compress my files, because I think bzip2 is more
 efficient...

  
 Really?

 [...]

 255526 bytes less while six times slower..


 This is extremely dependent on the contents of foo.iso.  I don't think
 it's a good test because you are only seeing 10% compression either way.
 There is a good chance that much of the data within your ISO is already
 compressed.  When using data which is typically more compressible (text
 and other data that is not already compressed), the resulting size of
 something compressed with bzip2 can be much smaller than when compressed
 using gzip.  It's true that it is much slower, but if he's talking about
 it being more efficient in terms of disk space used, then he is correct.
  
 xz/lzma is another consideration.  At moderate compression levels, lzma
 seems to be about the same or slightly faster than bzip2 with a little
 better compression.  At lower compression levels it seems like it's
 about as fast as gzip while compressing noticeably farther - at least
 in the small amount of testing I've done so far with the xz
 implementation of lzma compression.

 (The small amount of testing I've done so far suggests to me that xz
 with a compression level of 1 runs about as fast as gzip -4 with
 compression at or better than gzip -7, approaching bzip2 for some types of
 files.  Cranking up to xz -6 or -7 runs a bit faster than bzip2 default
 but tends to give better compression.)


On the other side of the spectrum, LZO/LZO2 compression is available,
which greatly favors compression speed while still providing a decent
compression ratio.  I'd like to see these algorithms make their way into
Bacula, but there doesn't seem to be much interest in doing so.  I
suppose it's understandable, as GZIP is fairly flexible.



Re: [Bacula-users] How to backup the catalog

2010-02-10 Thread John Drescher
 One question: how to back up the catalog.

 We are using MySQL for the bacula catalog. This database is approximately 46
 GB in size.

 When we use the backup script make_catalog_backup (supplied with bacula) to
 dump the database, bacula is not usable during the mysqldump process due to
 locked tables.

 In this case it's not possible to make a backup of the catalog every day. We
 don't like an unresponsive Bacula system ;)

 My question is how do you make a good backup of the catalog without
 interrupting the bacula functionality?


Can't you back up the database after all of your backups are finished?
My database is around 30GB and it takes less than 20 minutes to back up
on a 5-year-old Opteron 246 server.

John



Re: [Bacula-users] How to backup the catalog

2010-02-10 Thread Thomas Mueller
On Wed, 10 Feb 2010 16:52:54 +0100, JanJaap Scholing wrote:

 Hi List,

 One question: how to back up the catalog.

 We are using MySQL for the bacula catalog. This database is approximately
 46 GB in size.

 When we use the backup script make_catalog_backup (supplied with bacula) to
 dump the database, bacula is not usable during the mysqldump process due to
 locked tables.

 In this case it's not possible to make a backup of the catalog every day.
 We don't like an unresponsive Bacula system ;)

 My question is how do you make a good backup of the catalog without
 interrupting the bacula functionality?


thoughts:
* take an LVM snapshot and back up the db files directly (a rough sketch
  follows below), or
* configure a replicated MySQL setup and take the backup on the second
  server, or
* use mysqlhotcopy (never used it myself), or
* split the catalog into x catalogs to get smaller databases (hmm.. the total
  size doesn't change, but maybe bacula remains usable during the backup), or
* there are for sure other ways to do it (like buying a faster server). ;)
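
The snapshot variant, as a minimal sketch (volume names and paths are made
up; the tables are only locked for the instant the snapshot is cut, and
'system' is the mysql client's shell escape):

mysql <<'EOF'
FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 5G --name catalog-snap /dev/vg0/mysql
UNLOCK TABLES;
EOF
mount -o ro /dev/vg0/catalog-snap /mnt/snap
tar czf /backup/catalog-files.tar.gz -C /mnt/snap .
umount /mnt/snap && lvremove -f /dev/vg0/catalog-snap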

- Thomas




Re: [Bacula-users] How to backup the catalog

2010-02-10 Thread Mike Ruskai

On 2/10/2010 10:52 AM, JanJaap Scholing wrote:


Hi List,

One question: how to back up the catalog.

We are using MySQL for the bacula catalog. This database is
approximately 46 GB in size.

When we use the backup script make_catalog_backup (supplied with
bacula) to dump the database, bacula is not usable during the
mysqldump process due to locked tables.

In this case it's not possible to make a backup of the catalog every
day. We don't like an unresponsive Bacula system ;)

My question is how do you make a good backup of the catalog without
interrupting the bacula functionality?

Thanks and regards

Jan Jaap


So you want to back up a database while still being allowed to write to 
that database at the same time?  It's simply not possible as stated.  
Your options, as far as I know, are these:


1)  Dump the database, as the Bacula script does, and wait for it to 
complete.


2)  Flush the tables and copy the database files (if MyISAM), which may 
or may not be faster (still must prevent DB writes).


3)  Set up a slave database that uses MySQL replication to mirror the 
master database.  When you want to backup, take the slave offline to do 
a dump or copy, leaving the master free to continue working.  You will 
not back up whatever changes were made since the slave was taken offline.


I haven't messed around with MySQL replication just yet, so I don't know 
how easily option 3 will work in practice.  But if you don't want Bacula 
to be offline while doing a DB backup, that's your only real option.  
Even the expensive commercial DBMSes don't have a very good solution 
for doing live backups.





Re: [Bacula-users] Bacula Compression - other than GZIP

2010-02-10 Thread John Doe
From: Sean M Clark smcl...@tamu.edu
 xz/lzma is another consideration.  At moderate compression levels, lzma
 seems to be about the same or slightly faster than bzip2 with a little
 better compression.  At lower compression levels it seems like it's
 about as fast as gzip while compressing noticeably farther - at least
 in the small amount of testing I've done so far with the xz
 implementation of lzma compression.
 
 (The small amount of testing I've done so far suggests to me that xz
  with a compression level of 1 runs about as fast as gzip -4 with
  compression at or better than gzip -7, approaching bzip2 for some types of
  files.  Cranking up to xz -6 or -7 runs a bit faster than bzip2 default
 but tends to give better compression.)

Judging by the following benchmarks, lzma seems quite resource-hungry...
http://tukaani.org/lzma/benchmarks.html

JD


  



Re: [Bacula-users] Bacula Compression - other than GZIP [xz/lzma]

2010-02-10 Thread Sean M Clark
On 2010Feb10 10:31 AM, John Doe wrote:
 From: Sean M Clark smcl...@tamu.edu
 xz/lzma is another consideration.  At moderate compression levels, lzma
 seems to be about the same or slightly faster than bzip2 with a little
 better compression.  At lower compression levels it seems like it's
 about as fast as gzip while compressing noticeably farther - at least
 in the small amount of testing I've done so far with the xz
 implementation of lzma compression.
[...]
 Judging by the following benchmarks, lzma seems quite resource-hungry...
 http://tukaani.org/lzma/benchmarks.html

Hmmm, those results more or less reflect what I remember from the
testing I did.  I don't remember the difference in compression speed
between xz and bzip2 being quite as high as this, but that could either
be due to xz being more efficient than lzmash and/or my own faulty memory.

I note that lzma -2 tended to compress better than bzip2 could manage at
any setting, and faster than default bzip2.

I had forgotten about the much larger memory usage of xz, though in a
modern context the amount still looks pretty trivial (even at the
default setting it requires less than 90MB; the me of 5 years ago
would be appalled to see me describe 90MB as "trivial", but still...).
lzma -2 only requires 12M in those results.

Wouldn't necessarily bother with lzma compression on a tiny NAS box
with only 32-64MB RAM in it, but I think it'd be a useful option on a
real computer.

I have no idea what would be involved in adding additional compression
options to bacula-fd/bacula-sd, though.



Re: [Bacula-users] Building CentOS 5 RPMs for Bacula 5.0.0-1

2010-02-10 Thread Andy Howell
Burn wrote:
 Hello. I'm having trouble building the 5.0.0-1 SRPM on CentOS 5.4.
 rpmbuild --rebuild bacula-5.0.0-1.src.rpm --define 'build_centos5 1' --define 
 'build_postgresql 1'
 results in
 Checking for unpackaged file(s): /usr/lib/rpm/check-files /var/tmp/bacula-root
 
 
 RPM build errors:
     InstallSourcePackage: Header V3 DSA signature: NOKEY, key ID 10a792ad
     user sbarn does not exist - using root
     user sbarn does not exist - using root
     user sbarn does not exist - using root
     user sbarn does not exist - using root
     user sbarn does not exist - using root
     File not found: /var/tmp/bacula-root/etc/bacula/bconsole.conf
     File not found: /var/tmp/bacula-root/usr/sbin/bconsole
     File not found: /var/tmp/bacula-root/etc/bacula/bconsole.conf
     File not found: /var/tmp/bacula-root/usr/sbin/bconsole
 
 How do I resolve it?
 

Burn,

I built it on CentOS 5.4 without any problem. The only difference is that I
edited the bacula.spec file for the options I wanted. I used the same options
as you, plus added python support:

diff bacula.spec bacula.spec.5.0.0.1-working
306c306
< %define centos5 0
---
> %define centos5 1
397c397
< %define postgresql 0
---
> %define postgresql 1
468c468
< %define python 0
---
> %define python 1

My guess is that the build is failing earlier; maybe some required package is
missing? I can rebuild and send you the output of the build if that helps.

Regards,

Andy



Re: [Bacula-users] How to backup the catalog

2010-02-10 Thread Heitor Medrado de Faria
Mike Ruskai wrote:
 On 2/10/2010 10:52 AM, JanJaap Scholing wrote:

 Hi List,


  One question: how to back up the catalog.


 We are using MySQL for the bacula catalog. This database is 
  approximately 46 GB in size.

 When we use the backup script make_catalog_backup (supplied with 
 bacula) to dump the database, bacula is not usable during the 
 mysqldump process due to locked tables.


  In this case it's not possible to make a backup of the catalog every 
  day. We don't like an unresponsive Bacula system ;)


 My question is how do you make a good backup of the catalog  without 
 interrupting the bacula functionality?


 Thanks and regards

 Jan Jaap


 So you want to back up a database while still being allowed to write 
 to that database at the same time?  It's simply not possible as 
 stated.  Your options, as far as I know, are these:

 1)  Dump the database, as the Bacula script does, and wait for it to 
 complete.

 2)  Flush the tables and copy the database files (if MyISAM), which 
 may or may not be faster (still must prevent DB writes).

 3)  Set up a slave database that uses MySQL replication to mirror the 
 master database.  When you want to backup, take the slave offline to 
 do a dump or copy, leaving the master free to continue working.  You 
 will not back up whatever changes were made since the slave was taken 
 offline.

 I haven't messed around with MySQL replication just yet, so I don't 
 know how easily option 3 will work in practice.  But if you don't want 
 Bacula to be offline while doing a DB backup, that's your only real 
  option.  Even the expensive commercial DBMSes don't have a very good 
  solution for doing live backups.

If you use PostgreSQL, you could put the database in backup mode for a hot 
backup: http://www.postgresql.org/docs/8.1/static/backup-online.html.
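
In shell terms, roughly (database name and paths are made up, and WAL
archiving must already be configured for the copy to be restorable):

  psql -d bacula -c "SELECT pg_start_backup('catalog');"
  rsync -a /var/lib/pgsql/data/ /backup/pgdata/     # file-level copy of the cluster
  psql -d bacula -c "SELECT pg_stop_backup();"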

Regards,

Heitor Faria
www.bacula.com.br



Re: [Bacula-users] Full Backup After Previously Successful Full with Ignore FileSet Changes Enabled

2010-02-10 Thread Graham Sparks

   Hello,
 
   I'm a fairly new Bacula user (all daemons running on the same machine, 
 Ubuntu 8.04, and an FD on a Windows XP Home client).  I've set up a Full 
 backup of a drive on the client that ran on Saturday and an incremental 
 backup of the same fileset done on Monday.  Having noticed that the file 
 size was large for the two days' worth of data, I excluded the Windows swap 
 file from the fileset.
  
   Today's incremental, however, wouldn't run.  Bacula insisted on running a 
 new full backup.
  
   I'm aware that this is because I have changed the fileset, but read about 
 an option (Ignore FileSet Changes = yes) that is supposed to ignore that 
 fact and continue to perform incrementals.  After adding this and reloading 
 the configuration, Bacula still won't perform an incremental backup.
  
   Is there a reason why it still refuses to run an incremental backup (I 
 deleted the JobId for the failed promoted Full backup with the delete JobId 
 command)?
 
 Try whether restarting the director helps.
 
 Reload should be enough, but recently I noticed that 3.0.3 didn't recognize 
 fileset option changes reliably after reload.
 
 --
 TiN

I performed a restart (and a separate stop/start), but it's the same.

I've tested it with a smaller job, and it seems that the 
Ignore FileSet Changes option only takes effect if it is present in the FileSet 
definition when the original Full backup runs (adding it in afterwards doesn't 
make a difference); see the sketch below.
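
In other words, the directive apparently needs to be in the FileSet before
the chain's first Full, something like this (names hypothetical):

  FileSet {
    Name = "XPHomeSet"               # hypothetical
    Ignore FileSet Changes = yes     # present from the first Full onwards
    Include {
      Options { signature = MD5 }
      File = "C:/"
    }
  }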

Many thanks for the reply though!

Graham
  


Re: [Bacula-users] Bacula Compression - other than GZIP

2010-02-10 Thread Phil Stracchino
On 02/10/10 10:36, Sean M Clark wrote:
 xz/lzma is another consideration.  At moderate compression levels, lzma
 seems to be about the same or slightly faster than bzip2 with a little
 better compression.  At lower compression levels it seems like it's
 about as fast as gzip while compressing noticeably farther - at least
 in the small amount of testing I've done so far with the xz
 implementation of lzma compression.


I was going to mention xz myself.  I just completed some rather more
extensive tests.

I'm using three example test files here.  The first, a 590MB ISO of
Windows XP Pro SP3, contains a large amount of already-compressed data,
and can be expected to compress poorly.  The second, an 8.5MB stripped
ELF 32-bit LSB executable, can probably be expected to compress
moderately well.  The third, an ebook resaved in text format, is about
1.5MB of English ASCII text and should compress very well.  I'm
compressing each with gzip default options, gzip -9, bzip2, xz default
options, and xz -7.  (The xz man page notes that compression settings
above 7 are not recommended unless absolute maximum compression is
necessary due to time and memory usage.)

First, the WinXP ISO (whitespace adjusted for clarity):

babylon5:alaric:~:10 $ ls -l winxp.iso
-rw-r--r-- 1 alaric users 617754624 Feb 10 10:24 winxp.iso

babylon5:alaric:~:11 $ time gzip -c < winxp.iso | dd bs=64K >/dev/null
0+35022 records in
0+35022 records out
573799160 bytes (574 MB) copied, 78.782 s, 7.3 MB/s
real1m18.935s
user0m53.804s
sys 0m4.357s
compression: 7.12%
compression/time: 0.0901

babylon5:alaric:~:12 $ time gzip -9 -c < winxp.iso | dd bs=64K >/dev/null
0+35013 records in
0+35013 records out
573652786 bytes (574 MB) copied, 111.185 s, 5.2 MB/s
real1m51.207s
user1m11.860s
sys 0m4.905s
compression: 7.14%
compression/time: 0.0643

babylon5:alaric:~:13 $ time bzip2 -c < winxp.iso | dd bs=64K >/dev/null
0+140444 records in
0+140444 records out
575258513 bytes (575 MB) copied, 808.258 s, 712 kB/s
real13m28.370s
user10m11.257s
sys 0m6.221s
compression: 6.88%
compression/time: 0.0085

babylon5:alaric:~:14 $ time xz -c < winxp.iso | dd bs=64K >/dev/null
0+69111 records in
0+69111 records out
566328660 bytes (566 MB) copied, 1395.3 s, 406 kB/s
real23m15.341s
user17m39.189s
sys 0m9.664s
compression: 8.43%
compression/time: 0.0060

babylon5:alaric:~:15 $ time xz -7 -c < winxp.iso | dd bs=64K >/dev/null
0+69040 records in
0+69040 records out
565609576 bytes (566 MB) copied, 1512.2 s, 374 kB/s
real25m12.247s
user19m7.363s
sys 0m10.943s
compression: 8.45%
compression/time: 0.0055

With this poorly compressible data, both gzip and gzip -9 yield better
compression than bzip2, with roughly an order of magnitude higher
throughput and lower CPU usage.  The best compression on this file, by a
hair, is achieved by xz -7, with default xz only 0.02% behind but taking
8% less time.  The worst compression of 6.88% is bzip2, but it takes
around half the time xz takes to do it, resulting in an actual
compression/time score 50% better than xz.  gzip achieves about 1.3%
less compression than xz and about 0.25% better than bzip2, but does it
7 to 10 times faster than bzip2 and 12 to 20 times faster than xz.  The
best compression per unit time score is achieved by default gzip.  The
worst, xz -7, is an order of magnitude worse than gzip -9 in
compression/time and achieves only 1.29% additional compression.


Next, the ELF executable.

babylon5:alaric:~:21 $ ls -l mplayer
-rwxr-x--- 1 alaric users 8485168 Feb 10 12:04 mplayer

babylon5:alaric:~:22 $ time gzip -c < mplayer | dd bs=64K >/dev/null
0+230 records in
0+230 records out
3752190 bytes (3.8 MB) copied, 1.26176 s, 3.0 MB/s
real0m1.266s
user0m1.032s
sys 0m0.055s
compression: 55.8%
compression/time: 44.075

babylon5:alaric:~:23 $ time gzip -9 -c < mplayer | dd bs=64K >/dev/null
0+228 records in
0+228 records out
3734027 bytes (3.7 MB) copied, 2.76918 s, 1.3 MB/s
real0m2.779s
user0m2.119s
sys 0m0.054s
compression: 56%
compression/time: 20.173

babylon5:alaric:~:24 $ time bzip2 -c < mplayer | dd bs=64K >/dev/null
0+880 records in
0+880 records out
3603587 bytes (3.6 MB) copied, 6.41314 s, 562 kB/s
real0m6.426s
user0m5.128s
sys 0m0.050s
compression: 57.5%
compression/time: 8.948

babylon5:alaric:~:25 $ time xz -c < mplayer | dd bs=64K >/dev/null
0+362 records in
0+362 records out
2964084 bytes (3.0 MB) copied, 21.0693 s, 141 kB/s
real0m21.098s
user0m15.434s
sys 0m0.316s
compression: 65%
compression/time: 3.081

babylon5:alaric:~:26 $ time xz -7 -c < mplayer | dd bs=64K >/dev/null
0+362 records in
0+362 records out
2964084 bytes (3.0 MB) copied, 19.8819 s, 149 kB/s
real0m19.913s
user0m15.347s
sys 0m0.301s
compression: 65%
compression/time: 3.264

This is not all that dissimilar a picture.  Interestingly, here, default
xz and xz -7 achieve identical compression, but xz -7 accomplishes it
slightly over a second faster.  Both 

[Bacula-users] Fatal error: fd_cmds.c:177 FD command not found: 8F~8D

2010-02-10 Thread Ralf Gross
Hi,

bacula 3.0.3 SD+ DIR, 2.4.4 FD, Debian Lenny, psql 8.4

The backup job 19429 was running for nearly two days and then failed while
changing the LTO3 tape. The job has failed twice now. No messages in syslog.

The message ERR=Datei oder Verzeichnis nicht gefunden means ERR=file or
directory not found.

I deleted the 06D132L3 tape in the bacula catalog, erased the bacula label with
mt and labeled it again. No problem while loading/unloading or writing the label.

The strange thing is that this error blocked another job (19427) that was
running at the same time but on a different storage daemon! The other SD just
stopped writing. The status was still running, but there was no activity on the SD side. 

Any ideas?

[...]
10-Feb 16:17 VU0EA003-sd JobId 19427: Despooling elapsed time = 00:03:33, 
Transfer rate = 100.8 M bytes/second
10-Feb 16:17 VU0EA003-sd JobId 19427: Spooling dat ...
10-Feb 16:18 VUMEM004-sd JobId 19429: Despooling elapsed time = 00:05:49, 
Transfer rate = 61.53 M bytes/second
10-Feb 16:18 VUMEM004-sd JobId 19429: Spooling data again ...
10-Feb 16:26 VUMEM004-sd JobId 19429: User specified spool size reached.
10-Feb 16:26 VUMEM004-sd JobId 19429: Writing spooled data to Volume. 
Despooling 21,474,877,357 bytes ...
10-Feb 16:31 VUMEM004-sd JobId 19429: End of Volume 06D142L3 at 575:9470 on 
device LTO3 (/dev/ULTRIUM-TD3). Write of 64512 bytes got -1.
10-Feb 16:31 VUMEM004-sd JobId 19429: Re-read of last block succeeded.
10-Feb 16:31 VUMEM004-sd JobId 19429: End of medium on Volume 06D142L3 
Bytes=575,574,128,640 Blocks=8,921,969 at 10-Feb-2010 16:31.
10-Feb 16:31 VUMEM004-sd JobId 19429: 3307 Issuing autochanger unload slot 23, 
drive 0 command.
10-Feb 16:31 VUMEM004-sd JobId 19429: 3995 Bad autochanger unload slot 23, 
drive 0: ERR=Datei oder Verzeichnis nicht gefunden
Results=
10-Feb 16:31 VUMEM004-dir JobId 19429: There are no more Jobs associated with 
Volume 06D132L3. Marking it purged.
10-Feb 16:31 VUMEM004-dir JobId 19429: All records pruned from Volume 
06D132L3; marking it Purged
10-Feb 16:31 VUMEM004-dir JobId 19429: Recycled volume 06D132L3
10-Feb 16:31 VUMEM004-sd JobId 19429: 3301 Issuing autochanger loaded? drive 
0 command.
10-Feb 16:31 VUMEM004-sd JobId 19429: 3991 Bad autochanger loaded? drive 0 
command: ERR=Datei oder Verzeichnis nicht gefunden.
Results=
10-Feb 16:31 VUMEM004-sd JobId 19429: 3301 Issuing autochanger loaded? drive 
0 command.
10-Feb 16:31 VUMEM004-sd JobId 19429: 3991 Bad autochanger loaded? drive 0 
command: ERR=Datei oder Verzeichnis nicht gefunden.
Results=
10-Feb 16:31 VUMEM004-sd JobId 19429: 3304 Issuing autochanger load slot 13, 
drive 0 command.
10-Feb 16:31 VUMEM004-sd JobId 19429: Fatal error: 3992 Bad autochanger load 
slot 13, drive 0: ERR=Datei oder Verzeichnis nicht gefunden.
Results=
10-Feb 16:31 VUMEM004-sd JobId 19429: Fatal error: spool.c:302 Fatal append 
error on device LTO3 (/dev/ULTRIUM-TD3): ERR=
10-Feb 16:31 VUMEM004-sd JobId 19429: Despooling elapsed time = 00:04:37, 
Transfer rate = 77.52 M bytes/second
10-Feb 16:32 VUMEM004-sd JobId 19429: Fatal error: fd_cmds.c:177 FD command not 
found: 8F~8DD7_96F0DCee905lEDFF^\EF=FBF89A
ESC8EACC2C1^\zF98CCE`9DRD4F8`EA=BEFA#F0I^X?q+A3EB^N̖FFE1D7cۈF6#w^C+kg^KCFUBD{9BuLj9EF196^^D6a
E784^^B7mAC^^^QBAy99)N9FFA^C9A2D0scD83kX9FA6IA4+O\šB4%D1FC
  ^U^R^KJDDE6^G3DF:̢B9C4˨BCE981D2̿NJ
^VE1^A7^P9D^H83B9C0A^GtٴC4rwAD젼RDC1F3E0E2^ZC9^?r҃^ZC0^H^Mu9E568EF6FAFCE2\99^?E9x`14E8wB2Q,A0:骘:6
DD^?^XQ4^]F986^YD2^\^]D3FE8F^EB5qB8B3^G8C86@C0C5cESCoFACD^As1fB7C2i{A18DF6vADBF^AB790)-8C^V^C
EA 
DEFAA6^AE^F87]^V9EC0B6jb2D4$6`ӅCED2V949FHF482593]CF00X^L8A98FB\ACѵFD90h|8F/A7^ODCD^UG5^D{
CEF7~F9GZk80CCF892BEABo^L938DhF1dDE9B7-C18C^]8B3^O82^_DB^\UAA88F388EB^\F6^Sf989DS#D7FC
99BF):11^_^Ol
2'^_DBC8EB
D7E78CCEE2nx8E
B3^UC8A^G^BD9F6M}PZpC09^TA0C6r]EF8A
AEAS85D7j/G^W84C4CBBC6F3CBE6!...@t^m^u80]oF2E6^D}CF*^T%87MFB^VyC2dFEX~t^O^ACIC9G^ZfA6ACD71\3H8E}yB7B7ސIMD7ϱBEyQB7D7pD3U؟E7A8ЦA8kyF9)ABFFE1EBK1A5D5F5^KdD2ἮFED7`G9CEBAFvFA
10-Feb 16:32 VUMEM004-sd JobId 19429: Job write elapsed time = 47:21:16, 
Transfer rate = 19.25 M bytes/second
10-Feb 16:32 VUMEM004-sd JobId 19429: Fatal error: fd_cmds.c:166 Command error 
with FD, hanging up. Append data error.

10-Feb 16:32 VU0EM003 JobId 19429: Fatal error: backup.c:892 Network send error 
to SD. ERR=Connection reset by peer
10-Feb 16:32 VUMEM004-dir JobId 19429: Error: Bacula VUMEM004-dir 3.0.3 
(18Oct09): 10-Feb-2010 16:32:20
  Build OS:   x86_64-pc-linux-gnu debian 4.0
  JobId:  19429
  Job:VU0EM003.2010-02-08_16.26.19_08
  Backup Level:   Full
  Client: VU0EM003 2.4.4 (28Dec08) 
x86_64-pc-linux-gnu,debian,4.0
  FileSet:VU0EM003 2007-06-12 00:05:01
  Pool:   Full (From Job FullPool override)
  Catalog:MyCatalog (From Client resource)
  Storage:NEC-T40A_B-Net (From user selection)
  Scheduled time: 08-Feb-2010 16:25:56
  Start time: 08-Feb-2010 

Re: [Bacula-users] How to backup the catalog

2010-02-10 Thread Henrik Johansen
Hi,

On 02/10/10 04:52 PM, JanJaap Scholing wrote:
 Hi List,

 One question, how to backup the catalog.

 We are using MySQL for the bacula catalog. This database is
 approximately 46 GB in size.

Are you using MyISAM or InnoDB ? Backup procedures can vary according to 
the MySQL storage engine being used.

 When we use the backup script make_catalog_backup (supplied with bacula)
 to dump the database, bacula is not usable during the mysqldump process
 due to locked tables.

 In this case it's not possible to make a backup of the catalog every day.
 We don't like an unresponsive Bacula system ;)

 My question is how do you make a good backup of the catalog without
 interrupting the bacula functionality?

For MyISAM I would either deploy filesystem snapshots or replication to 
another server for backups of large MySQL databases.

InnoDB is generally more suited for hot backups - Google should be able 
to provide you with both commercial and/or open-source solutions for 
that.

We use filesystem snapshots for MySQL backup (both MyISAM & InnoDB) and 
it works very well.


 Thanks and regards

 Jan Jaap


 


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] How to backup the catalog

2010-02-10 Thread Dan Langille
Heitor Medrado de Faria wrote:

 If you use Postgresql, you could put database in backup mode for hot 
 backup = http://www.postgresql.org/docs/8.1/static/backup-online.html.

I just use pg_dump without doing anything special.
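
e.g. (assuming the catalog database is named 'bacula'; pg_dump takes a
consistent snapshot and does not block writers):

  pg_dump bacula | gzip > /backup/bacula-catalog.sql.gz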



Re: [Bacula-users] Fatal error: fd_cmds.c:177 FD command not found: 8F~8D

2010-02-10 Thread Ralf Gross
Follow-up:

Cacti shows that swap started growing this morning and reached its
maximum when the job failed...
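
Two quick checks I'm using for that theory (the changer's sg device below is
a guess):

  free -m                      # watch swap usage while the job runs
  mtx -f /dev/sg0 status       # verify the autochanger still responds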


Ralf Gross wrote:
 Hi,
 
 bacula 3.0.3 SD+ DIR, 2.4.4 FD, Debian Lenny, psql 8.4
 
 The backup job 19429 was running for nearly two days and then failed while
 changing the LTO3 tape. The job has failed twice now. No messages in syslog.
 
 The message ERR=Datei oder Verzeichnis nicht gefunden means ERR=file or
 directory not found.
 
 I deleted the 06D132L3 tape in the bacula catalog, erased the bacula label 
 with
 mt and labeled it again. No problem while loading/unloading or writing the 
 label.
 
 The strange thing is that this error blocked another job (19427) that was
 running at the same time but on a different storage daemon! The other SD just
 stopped writing. The status was still running, but there was no activity on the SD side. 
 
 Any ideas?
 
 [...]

[Bacula-users] Building CentOS 5 RPMs for Bacula 5.0.0-1

2010-02-10 Thread Burn


Carlo Filippetto wrote:
 I don't know you response,
 but why you don't try to build it from the source code?
 

because it is not scalable or maintainable.


Andy Howell wrote:
 
 My guess is the build is not completing earlier, maybe some required package 
 is missing? I 
 can rebuild and send you the output of the build if that helps.
 
 Regards,
 
   Andy
 

Thanks for your comment. After some research I discovered that the configure 
script does not detect termcap.h:

checking for msgfmt... (cached) /usr/bin/msgfmt
checking termcap.h usability... no
checking termcap.h presence... no
checking for termcap.h... no
checking curses.h usability... yes

...

==> Entering directory /home/rpmbuilder/rpmbuild/BUILD/bacula-5.0.0/src/console
make[1]: Entering directory 
`/home/rpmbuilder/rpmbuild/BUILD/bacula-5.0.0/src/console'
conio.c:87:21: error: termcap.h: No such file or directory
make[1]: *** [depend] Error 1
make[1]: Leaving directory 
`/home/rpmbuilder/rpmbuild/BUILD/bacula-5.0.0/src/console'

I checked permissions and reinstalled both ncurses and ncurses-devel; that didn't 
help. Then I tried

cd /usr/include
ln -s ncurses/termcap.h

and that actually did the trick. Looks like the script is looking in the wrong place. 
The installed versions are ncurses-5.5-24.20060715 and 
ncurses-devel-5.5-24.20060715.






Re: [Bacula-users] How to backup the catalog

2010-02-10 Thread Fahrer, Julian
Hey,

For InnoDB, mysqldump with the --single-transaction option should work:
http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html#option_mysqldump_single-transaction
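
As a minimal sketch (the catalog database name, user, and output path below
are assumptions, not taken from this thread):

  # dump the catalog in one consistent InnoDB snapshot
  mysqldump --single-transaction --user=bacula --password \
      bacula > /var/lib/bacula/bacula.sql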


Kind regards

Julian




Re: [Bacula-users] Building CentOS 5 RPMs for Bacula 5.0.0-1

2010-02-10 Thread Andy Howell
Burn wrote:
 Thanks for your comment. After some research I discovered that the configure 
 script does not detect termcap.h:
 
 checking for msgfmt... (cached) /usr/bin/msgfmt
 checking termcap.h usability... no
 checking termcap.h presence... no
 checking for termcap.h... no
 checking curses.h usability... yes
 
 ...
 
 ==Entering directory /home/rpmbuilder/rpmbuild/BUILD/bacula-5.0.0/src/console
 make[1]: Entering directory 
 `/home/rpmbuilder/rpmbuild/BUILD/bacula-5.0.0/src/console'
 conio.c:87:21: error: termcap.h: No such file or directory
 make[1]: *** [depend] Error 1
 make[1]: Leaving directory 
 `/home/rpmbuilder/rpmbuild/BUILD/bacula-5.0.0/src/console'
 
 I checked permissions and reinstalled both ncurses and ncurses-devel; that 
 didn't help. Then I tried
 cd /usr/include
 ln -s ncurses/termcap.h
 
 and that actually did the trick. Looks like the script is looking in the 
 wrong place. The installed versions are ncurses-5.5-24.20060715 and 
 ncurses-devel-5.5-24.20060715.

Burn,

My system has termcap.h from libtermcap-devel:

rpm -qf /usr/include/termcap.h
libtermcap-devel-2.0.8-46.1
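
If it's missing, installing that package should provide the header (a sketch,
assuming yum on CentOS 5):

  yum install libtermcap-devel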

Regards,

Andy




Re: [Bacula-users] Restore Dir/Files recursively

2010-02-10 Thread Ken Barclay
Thanks Gavin, you got it!

cwd is: /public/share/
$ mark 120 SALES DIVISION
11,234 files marked.

Restore currently in progress.
Thanks again,
Ken

-Original Message-
From: Gavin McCullagh [mailto:gavin.mccull...@gcd.ie]
Sent: Wednesday, 10 February 2010 5:11 PM
To: Ken Barclay
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Restore Dir/Files recursively

On Wed, 10 Feb 2010, Ken Barclay wrote:

 Thanks Gavin, but

 $ mark /public/share/120 SALES DIVISION
 No files marked.

 $ add /public/share/120 SALES DIVISION/
 No files marked.

Sorry, I wasn't thinking.  I'd suggest you do:

  cd public
  cd share
  mark 120 SALES DIVISION

Gavin




[Bacula-users] AUTO: Lars Breimo is out of the office. (back 2010-02-14)

2010-02-10 Thread lars . breimo

I will be out of the office and will not return until 2010-02-14.

I will respond to your message when I return.


Note: This is an automatic reply to your message [Bacula-users]
Building CentOS 5 RPMs for Bacula 5.0.0-1 sent 2/10/10 8:45:44 PM.

This is the only message you will receive informing you that this person
is out of the office.




[Bacula-users] Overhead of incremental backup ???

2010-02-10 Thread haridas n
Hi ,

I'm using Bacula 3.0.3 to back up a number of servers, some of which hold a
lot of data, around 110 GB. Due to the lack of space on the backup server to
maintain backups for more than a one-month period, I was forced to remove the
differential backup from the schedule; it now takes only full and incremental
backups for all my servers (one-month cycle).

This is this month's schedule for all the clients:
---

2010-01-24 23:55 : mail.net-pool (Full)
2010-01-27 02:05 : mail.net-pool (Incremental)
2010-01-31 02:05 : mail.net-pool (Incremental)
2010-02-03 02:05 : mail.net-pool (Incremental)
2010-02-07 02:05 : mail.net-pool (Incremental)
2010-02-10 02:05 : mail.net-pool (Incremental)
2010-02-14 02:05 : mail.net-pool (Incremental)
2010-02-17 02:05 : mail.net-pool (Incremental)
2010-02-21 02:05 : mail.net-pool (Incremental)
2010-02-24 02:05 : mail.net-pool (Incremental)
2010-02-28 23:55 : mail.net-pool (Full)
2010-03-03 02:05 : mail.net-pool (Incremental)
2010-03-07 02:05 : mail.net-pool (Incremental)
2010-03-10 02:05 : mail.net-pool (Incremental)
2010-03-14 02:05 : mail.net-pool (Incremental)
2010-03-17 02:05 : mail.net-pool (Incremental)
2010-03-21 02:05 : mail.net-pool (Incremental)
2010-03-24 02:05 : mail.net-pool (Incremental)



Now I have one doubt: if we increase the number of incremental backups, will
that cause any problem with the backup time? I only need a one-month period
for my clients, so please help me understand this better.
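
For reference, a cycle like the one above is normally expressed as a Schedule
resource (a minimal sketch that only approximates the listing; the resource
name and day specifications are assumptions):

  Schedule {
    Name = "MonthlyCycle"
    # one full backup per month
    Run = Full 4th sun at 23:55
    # incrementals twice a week for the rest of the cycle
    Run = Incremental sun at 02:05
    Run = Incremental wed at 02:05
  }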

Thanks and Regards,
Haridas N.




-- 
Have a nice day...


[Bacula-users] bextract consumes 3+Gb memory

2010-02-10 Thread Andy Howell
Hello,

I was testing extracting the catalog from a disk volume, only to find the
machine swapping and bextract using more than 3 GB of resident memory and
4+ GB virtual. I had compression turned on in the catalog fileset. When I
turned that off and ran BackupCatalog again to a new disk volume, I was able
to extract the bacula.sql file.
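
For context, the extraction was along these lines (a sketch; the config path,
volume name, device name, and output directory are assumptions, not from this
message):

  bextract -c /etc/bacula/bacula-sd.conf -V Catalog-0001 FileStorage /tmp/restore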

I tried it on another disk volume that was compressed and had the same problem.

This is bacula 5.0.0 on CentOS 5.4.

Any ideas?

Thanks,

Andy



[Bacula-users] Building CentOS 5 RPMs for Bacula 5.0.0-1

2010-02-10 Thread Burn


Andy Howell wrote:
 Burn wrote:
 Burn,
 
 My system has termcap.h from libtermcap-devel:
 
 rpm -qf /usr/include/termcap.h
 libtermcap-devel-2.0.8-46.1
 

I removed the symlink, installed libtermcap-devel, and rebuilt the package;
that also worked OK. Looking into the spec, it turned out I was missing a
bunch of other dependencies as well. Not sure why it wasn't complaining about
those. It might have something to do with the fact that I'm building in an
OpenVZ container rather than a physical machine.
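
For anyone hitting the same thing, the build dependencies declared in the spec
can be pulled in from the srpm before rebuilding (a sketch, assuming yum-utils
is available on CentOS 5):

  yum install yum-utils
  yum-builddep bacula-5.0.0-1.src.rpm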
