Re: [Bacula-users] How to share a tape library between two bacula servers.

2012-04-16 Thread Manuel Trujillo
2012/4/13 Martin Simmons mar...@lispworks.com:
 storage.conf looks bogus -- where did you get it from?

 The SD normally reads a file called bacula-sd.conf and you should add a second
 Director resource to it (with the Password from the bacula-dir.conf).
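
(For illustration, the second Director resource Martin describes would look
roughly like this in bacula-sd.conf; the resource name and password below
are placeholders, and the Password must match the one in the second
server's bacula-dir.conf Storage resource:)

Director {
  Name = second-server-dir
  Password = "password-from-the-second-bacula-dir.conf"
}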

This is the configuration that was already running when I arrived at this
company. There is a file with the bacula-sd configuration, and another file,
included from bacula-dir, with the storage definitions... Maybe I need to
reorder my ideas :-/

Thank you Martin.

Manu.



[Bacula-users] Running a fake incremental that can be considered as Full ( Database Dumps )

2012-04-16 Thread Hugo Letemplier
Hello

I use Bacula 5.0.3

On a few Linux servers I have database dumps that run every night at a
specified time.
For synchronisation reasons between the databases, these dumps are run via
crontab and not directly from Bacula.

I need Bacula to save these database dumps every morning:
- The filesystem is a read-only LVM snapshot of a virtual machine (the
backup is run on the physical host, not on the virtual machine)
- The snapshot is generated and mounted in a Run Before Job script
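
(For context, a pre-job snapshot script of that shape might look like the
following sketch; the volume group, LV names and mount point are
placeholders, and a partitioned guest disk would need extra steps such as
kpartx:)

#!/bin/sh
# create a read-only snapshot of the VM's logical volume
lvcreate --snapshot --permission r --size 2G --name vmsnap /dev/vg0/vm-disk
# mount it read-only where the backup FileSet expects to find the dumps
mkdir -p /mnt/vmsnap
mount -o ro /dev/vg0/vmsnap /mnt/vmsnap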

The rotation scheme that deletes old dumps on the backed-up server is not
the same as the one configured on the Bacula servers.

I need Bacula to:
- Run a Full
- Save only the dumps that haven't already been backed up.

I must have a Full:
- If I do Incrementals, I will need to keep the Full, which is not what I
want; and if the Full is deleted, Bacula will create a new one
- Moreover, a DB dump has no dependency on previous dumps

I can't select only the dump of the day:
- If the Bacula job fails one day, the next one must back up the missed
dumps that were not backed up during the failed job

I can't use a predefined list of files in the FileSet, because the estimate
seems to be done before the Run Before Job script that generates the
snapshot, so it doesn't validate the include path.
File = "\\|bash -c \"find ……\"" won't work, because it runs before my
snapshot is created.

I think that leaves the Options sections of the FileSet, but I haven't
found anything that fits.

In fact, I want to run a Full that saves only the files changed since the
last successful backup, without using the incremental method, because that
would create a Full which will later be deleted, leaving a useless
FULL -> INC dependency.

Have you got an idea?

Thanks



Re: [Bacula-users] Running a fake incremental that can be considered as Full ( Database Dumps )

2012-04-16 Thread Christian Manal
On 16.04.2012 12:09, Hugo Letemplier wrote:
 Hello
 
 I use Bacula 5.0.3
 
 On a few Linux servers I have database dumps that run every night at a
 specified time.
 For synchronisation reasons between the databases, these dumps are run via
 crontab and not directly from Bacula.
 
 I need Bacula to save these database dumps every morning:
 - The filesystem is a read-only LVM snapshot of a virtual machine (the
 backup is run on the physical host, not on the virtual machine)
 - The snapshot is generated and mounted in a Run Before Job script
 
 The rotation scheme that deletes old dumps on the backed-up server is not
 the same as the one configured on the Bacula servers.
 
 I need Bacula to:
 - Run a Full
 - Save only the dumps that haven't already been backed up.
 
 I must have a Full:
 - If I do Incrementals, I will need to keep the Full, which is not what I
 want; and if the Full is deleted, Bacula will create a new one
 - Moreover, a DB dump has no dependency on previous dumps
 
 I can't select only the dump of the day:
 - If the Bacula job fails one day, the next one must back up the missed
 dumps that were not backed up during the failed job
 
 I can't use a predefined list of files in the FileSet, because the
 estimate seems to be done before the Run Before Job script that generates
 the snapshot, so it doesn't validate the include path.
 File = "\\|bash -c \"find ……\"" won't work, because it runs before my
 snapshot is created.
 
 I think that leaves the Options sections of the FileSet, but I haven't
 found anything that fits.
 
 In fact, I want to run a Full that saves only the files changed since the
 last successful backup, without using the incremental method, because that
 would create a Full which will later be deleted, leaving a useless
 FULL -> INC dependency.
 
 Have you got an idea?
 
 Thanks

Hi,

If I understand you right, you want Bacula's virtual backup (VirtualFull).
You can run your usual Full and Incremental jobs and then consolidate them
into a new Full backup. See

http://bacula.org/5.2.x-manuals/en/main/main/New_Features_in_3_0_0.html#SECTION00137
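
(For reference, a consolidation setup might look roughly like this; the
job and pool names are placeholders, and the pool the jobs were written to
needs a Next Pool directive to receive the consolidated volumes:)

Pool {
  Name = Dumps-Pool
  Pool Type = Backup
  Next Pool = Dumps-Consolidated  # destination for the consolidated Full
}

Then, from bconsole:

run job=DumpsBackup level=VirtualFull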


Regards,
Christian Manal




[Bacula-users] Running out of space on tape

2012-04-16 Thread Jack Cobb
Hi,

I am using Bacula 5.0.1 running on an Ubuntu 10.04 64-bit server with a Dell
PowerVault TL2000 tape library attached via a SAS controller.  In the
library are two LTO4 drives and I am using LTO4 media.  From a backup over
the weekend I found the following message in the Bacula log:

utility-sd JobId 377: End of medium on Volume 32 Bytes=843,378,278,400
Blocks=13,073,199 at 14-Apr-2012 19:45

I was hoping to get more than 843GB on a tape that can hold 800GB without
compression.  I guess I can go to two tapes for this backup in the future,
but I would sure hate to spend money on more tapes.  Has anyone run into a
similar situation?

Jack Cobb


Re: [Bacula-users] Running out of space on tape

2012-04-16 Thread John Drescher
 I am using Bacula 5.0.1 running on an Ubuntu 10.04 64-bit server with a Dell
 PowerVault TL2000 tape library attached via a SAS controller.  In the
 library are two LTO4 drives and I am using LTO4 media.  From a backup over
 the weekend I found the following message in the bacula log:



 utility-sd JobId 377: End of medium on Volume 32 Bytes=843,378,278,400
 Blocks=13,073,199 at 14-Apr-2012 19:45



 I was hoping to get more than 843GB on a tape that can hold 800GB without
 compression.  I guess I can go to two tapes for this backup in the future
 but I would sure hate to spend money on more tapes.  Has anyone run into a
 similar situation?


Is your data already compressed (most picture, video and audio formats,
zip files, ...)?  Remember that you generally do not gain much by
compressing data a second time.

If that is not the case, check for hardware errors in your dmesg. Bacula
will assume the tape is at its end when it hits a write error.
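
A first look might be something like this (the tape device path is an
assumption):

dmesg | grep -iE 'st[0-9]|tape|i/o error'
mt -f /dev/nst0 status   # drive status on the non-rewinding device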

John



Re: [Bacula-users] Running out of space on tape

2012-04-16 Thread Marcello Romani
On 16/04/2012 15:59, Jack Cobb wrote:
 Hi,

 I am using Bacula 5.0.1 running on an Ubuntu 10.04 64-bit server with a
 Dell PowerVault TL2000 tape library attached via a SAS controller. In
 the library are two LTO4 drives and I am using LTO4 media. From a backup
 over the weekend I found the following message in the bacula log:

 utility-sd JobId 377: End of medium on Volume 32
 Bytes=843,378,278,400 Blocks=13,073,199 at 14-Apr-2012 19:45

 I was hoping to get more than 843GB on a tape that can hold 800GB
 without compression. I guess I can go to two tapes for this backup in
 the future but I would sure hate to spend money on more tapes. Has
 anyone run into a similar situation?

 Jack Cobb




If the data sent to tape is already compressed, there is little the tape 
drive can do to compress it further.
For example, if all you store on the tape are text files, there is a good 
chance you will be able to store more than twice the uncompressed tape 
capacity. On the other hand, if you store only JPEG images, the tape will 
fill up at no more than its native capacity, because the JPEG algorithm 
already eliminates redundant data from images in a very domain-specific 
way (so a general-purpose algorithm can't do much better than that).
So the actual amount of data that can be stored on a tape depends on the 
level of redundancy of the data itself; the only capacity you can safely 
assume is the native (i.e. uncompressed) one. In your case, 
843,378,278,400 bytes on an 800 GB native tape is a compression ratio of 
only about 1.05:1.

HTH

-- 
Marcello Romani



Re: [Bacula-users] Running out of space on tape

2012-04-16 Thread John Drescher
 Thanks for the quick reply.  No errors that I can find in dmesg.  However,
 there are a lot of jpeg files on this server...that does make sense that
 compressed files cannot be compressed a second time.


Also, check whether you are using encryption in Bacula: encrypted data is
effectively incompressible too. I am not sure whether Bacula compresses
the data before encrypting it.

John



Re: [Bacula-users] Bacula MySQL Catalog binlog restore

2012-04-16 Thread Martin Simmons
 On Fri, 13 Apr 2012 14:13:41 -0400, Phil Stracchino said:
 
 On 04/13/2012 01:02 PM, Martin Simmons wrote:
  On Tue, 10 Apr 2012 15:27:22 -0400, Phil Stracchino said:
  You shouldn't think of a temporary table as persistent DB data.  Think
  of them instead as part of the transient state of a single ongoing
  transaction.  Is it a reasonable expectation for a DB restore to be able
  to restore any part of the transient state of a transaction that
  happened to be running when the backup was made?  Even if it could,
  where would the results go?
 
  There's no way to resume an interrupted transaction from the middle, and
  so there's no point in backing up any of its state except to roll back
  anything it's already partly done.  If you want to repeat the
  transaction, you have to restart it from the beginning, which will
  recreate any temporary tables it was using anyway.
  
  OK, so it looks like Bacula's use of temporary tables outside a
  transaction is at best incompatible with live backups of MySQL and at
  worst incorrect.
 
 I'm not understanding what you think is incorrect here.
 
 Let's try it this way:  How are you thinking that temporary tables
 should work, and what do you think should happen if you try to restore
 your Bacula catalog to a state that represents a point in the middle of
 a job that is not currently running?

I think it should be impossible to make a database backup that represents a
point in the middle of anything.  By "should be impossible" I mean that
either MySQL should always prevent it or database clients should use MySQL
in such a way that allows MySQL to prevent it.

The design of the binlogs seems to rule out the first option.

However, the second option looks possible if the client uses temporary tables
within a single transaction.  I.e. it starts a transaction, creates its
temporary tables, fills/uses them to update the permanent tables, drops the
temporary tables and then commits the transaction.
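
As a sketch, that pattern would look something like the following (the
table and column names are illustrative, not Bacula's actual schema):

START TRANSACTION;
CREATE TEMPORARY TABLE batch (FileIndex INTEGER, Name TEXT);
-- fill the temporary table
INSERT INTO batch VALUES (1, '/etc/hosts');
-- use it to update the permanent tables
INSERT INTO FilePerm (FileIndex, Name) SELECT FileIndex, Name FROM batch;
DROP TEMPORARY TABLE batch;
COMMIT;

(In MySQL, CREATE TEMPORARY TABLE and DROP TEMPORARY TABLE are exceptions
to the implicit-commit rule, so the whole sequence can stay inside one
transaction.)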

Can that work with mysqldump --single-transaction?  Note that this allows
MySQL to forcibly abort the client's transaction if necessary, so that the
transaction can be re-run and recorded in a binlog that comes after the
backup.
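
(For reference, such a dump might be taken with something like the
following, assuming the catalog database is named bacula; --master-data=2
records the binlog position as a comment, so replay can start from the
right point:)

mysqldump --single-transaction --master-data=2 bacula > catalog.sql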

Bacula's use of temporary tables doesn't follow this approach because it
doesn't start a transaction before creating the table.

__Martin



Re: [Bacula-users] Bacula MySQL Catalog binlog restore

2012-04-16 Thread Martin Simmons
 On Fri, 13 Apr 2012 21:47:04 +0100, Joe Nyland said:
 
 On 13 Apr 2012, at 18:14, Martin Simmons wrote:
 
  On Wed, 11 Apr 2012 11:09:12 +0100, Joe Nyland said:
  
  Ok, firstly: sorry for not following up sooner. There have been several
  replies for this thread since my last reply, so thanks everyone for your
  input.
  
  Phil, you point out that the problem here is most likely to do with the
  fact that other jobs are running whilst the full DB backup is being run,
  and I agree with this.
  
  Now, to me, the simple resolution to this is to assign a backup window for
  the catalog backup to run in, when no other backups are running. There are,
  however, two issues I can foresee: 
  - The MySQL backup script is being called by a Bacula job as a run before
  script, so there will allways be a job 'running' whilst the catalog dump is
  being performed. 
  
  Should be OK, because Bacula isn't in the middle of inserting File
  records for that job when the before script runs.
  
  
  - One backup job in particular takes approx 24-48 hrs to complete, so this
  will be constantly updating the catalog with file records, no matter when I
  schedule the full catalog backup for. 
  
  Yes, that will be a problem.
  
  
  I'm therefore back to square one.
  
  Whilst I understand the complications with temporary tables (and threads)
  that Phil has pointed out, I do agree with what Martin was suggesting,
  that there may be a better way to use temporary tables. I don't, however,
  have enough MySQL knowledge to provide any further suggestions to improve
  this in later versions of Bacula.
  
  Stephen, yes, I am running the backup whilst other backups are running, for
  reasons stated above. You are right - if there were no jobs running, the
  temp tables wouldn't be referred to in the first place, as they would not
  exist.
  
  Your suggestion of keeping a 'buffer' of bin logs is interesting, and
  this was the last thing I tried to resolve this problem with. I basically
  removed the '--delete-master-logs' option from the mysqldump line, so
  that old binary logs are not removed. (Side point: I'm not sure when
  '--delete-master-logs' actually deletes the binary logs - the mysqldump
  docs state that it [sends a] PURGE BINARY LOGS statement to the server
  _after_ performing the dump operation. This doesn't make sense to me, as
  anything that's created between the dump being initiated and the dump
  completing will be deleted from the logs, and there's no guarantee it's
  included in the dump.) So I removed the '--delete-master-logs' option, in
  the hope this would somehow let me see the CREATE TEMPORARY TABLE
  statements earlier in the binary logs, but it didn't.
  
  What I didn't do, which you have suggested, is to restore older logs
  first, i.e. logs from before the dump, then let it work through them,
  skipping any duplicate key errors. My issue with this workaround, though,
  is that I fear it could corrupt the database in some way (unbeknown to me
  at the time of restoring) if I am just 'skipping errors'.
  
  Yes, it sounds like a risky approach.  I think you would have to filter
  the older logs to find the commands that are needed to make the newer
  logs work.
  
  
  To me, this all does beg the question: Why use temporary tables in the
  first place? Again, I may not have enough MySQL experience to realise this
  for myself yet, but it's my view at this stage, after battling this for a
  while now!
  
  The main reason was probably simplicity with multiple connections to the
  catalog (each connection automatically gets its own temporary tables).  I
  see that some of the accurate-mode code is using a different approach
  though, with a table named after the jobid to make it unique.
  
  __Martin
  
 
 Martin and Phil, thanks for your responses.
 
 I seem to have settled on the suggestion that maybe I shouldn't worry
 about these temporary tables too much. As I think Phil is trying to
 explain, if there were a way in which these temporary tables could be
 backed up and restored somehow, the actual data they would bring to the
 database would be useless anyway - the job wouldn't suddenly start running
 again from where it left off. So for all intents and purposes, the
 information about files being backed up in the job running at the moment
 the database is being backed up may as well not even be in the database.
 After all, the temporary batch tables would only be deleted later on in
 the binary logs, when the file records are actually moved to the File
 table, so to me it seems pointless to worry about the batch table
 references. (If I've misunderstood something here, please accept my
 apologies.)
 
 So... I've been doing daily test restores, but with the addition of the
 '-f' flag to mysql, so that it will continue to read from the binary logs
 even if it comes across an issue.
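
(For reference, a replay of that shape might look like the following; the
binlog file name is a placeholder:)

mysqlbinlog mysql-bin.000042 | mysql -f bacula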
 
 Further to the above, all I've found it's 

Re: [Bacula-users] Bacula MySQL Catalog binlog restore

2012-04-16 Thread Phil Stracchino
On 04/16/2012 10:38 AM, Martin Simmons wrote:
 I think it should be impossible to make a database backup that represents
 a point in the middle of anything.  By "should be impossible" I mean that
 either MySQL should always prevent it or database clients should use MySQL
 in such a way that allows MySQL to prevent it.

It's not MySQL's business to do any such thing.  In fact, the preference
from the DB point of view is that you should be able to start a DB
backup in the middle of ANYTHING and still obtain a consistent backup.
And indeed you can, if you're not backing up binlogs.  If you back up
binlogs and attempt to replay them, the onus is somewhat upon you to
ensure that the transactions you're replaying make sense in the context
of the state of the DB.

 However, the second option looks possible if the client uses temporary tables
 within a single transaction.  I.e. it starts a transaction, creates its
 temporary tables, fills/uses them to update the permanent tables, drops the
 temporary tables and then commits the transaction.
 
 Can that work with mysqldump --single-transaction?  Note that this allows
 MySQL to forcibly abort the client's transaction if necessary, so that the
 transaction can be re-run and recorded in a binlog that comes after the
 backup.
 
 Bacula's use of temporary tables doesn't follow this approach because it
 doesn't start a transaction before creating the table.

Bacula cannot follow this approach because, unfortunately, even as long
in the tooth and obsolete as the MyISAM storage engine is, Bacula cannot
safely assume that it is working with a transactional storage engine.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Unable to restore some encrypted Windows 2003 backups with master.pem

2012-04-16 Thread Martin Simmons
 On Sat, 14 Apr 2012 13:53:37 +0200, Hugo Letemplier said:
 
 2012/4/11 Martin Simmons mar...@lispworks.com:
  On Wed, 4 Apr 2012 16:59:58 +0200, Hugo Letemplier said:
 
  Hello, I have tested encryption/decryption on many Bacula backups, but
  one job is tricky.
 
  I have Linux, MacOSX and Windows 2003 servers.
  I have master.cert and one fd.pem for encryption on each client;
  fd.pem is specific to each client.
  master.cert is on every client and allows decryption with the secret
  master.pem in case we lose the client-specific backup key.
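
(For reference, that kind of setup corresponds roughly to these
bacula-fd.conf directives; the paths are placeholders and only the
encryption-related directives are shown:)

FileDaemon {
  Name = client-fd                            # placeholder
  PKI Signatures = Yes
  PKI Encryption = Yes
  PKI Keypair = "/etc/bacula/fd.pem"          # client-specific key + certificate
  PKI Master Key = "/etc/bacula/master.cert"  # public master key only
}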
 
  My bacula server is unable to restore 1 of my three Windows servers
  using the master.pem keypair
 
 Saying "unable to restore" is too vague -- what is the error message?
 
 
 I meant that master encryption/decryption doesn't work, although the
 client-specific encryption/decryption works.
 It just says:
 
 Error: Missing private key required to decrypt encrypted backup data.

OK.


  Which one fails to restore?
 
  Is it definitely using the correct bacula-fd.conf?  E.g. try temporarily
  deleting the master.pem file and see if the bacula-fd fails to start.
 
 The file daemon with master.pem decrypts every other backup fine (Linux,
 Mac, Windows), so the problem can't come from the restoring FD; it is more
 likely in the backing-up FD, when it loads the master.cert that contains
 the master public key.

That points to a problem on the Windows machine's file daemon.  E.g. try
temporarily deleting the master.pem file from the Windows client and verify
that you get an error when you restart its bacula-fd.

__Martin



[Bacula-users] Fileset: need a second pair of eyes

2012-04-16 Thread Steve Thompson
Bacula 5.0.2. This fileset:

FileSet {
   Name = toe_home_x
   Include {
 Options {
   exclude = yes
   wilddir = /mnt/toe/data*/home/*/.NetBin
   wilddir = /mnt/toe/data*/home/*/.Trash
   wilddir = /mnt/toe/data*/home/*0
   wilddir = /mnt/toe/data*/home/*1
   wilddir = /mnt/toe/data*/home/*2
   wilddir = /mnt/toe/data*/home/*3
   wilddir = /mnt/toe/data*/home/*4
   wilddir = /mnt/toe/data*/home/*5
   wilddir = /mnt/toe/data*/home/*6
   wilddir = /mnt/toe/data*/home/*7
   wilddir = /mnt/toe/data*/home/*8
   wilddir = /mnt/toe/data*/home/*9
 }
 Options {
   compression = GZIP
   sparse = yes
   noatime = yes
 }
 File = /mnt/toe/data1/home
 File = /mnt/toe/data2/home
   }
}

is intended to back up the entire contents of the two file systems 
/mnt/toe/data1/home and /mnt/toe/data2/home, with the exception of the 
first-level directories whose names end with a digit (the directories to 
be included are variable in name and number). Well, it works, except that 
it does not back up any directories (and their contents) in (say) 
/mnt/toe/data1/home/foo that have white space in their names. What have I 
done wrong?

Steve
-- 

Steve Thompson E-mail:  smt AT vgersoft DOT com
Voyager Software LLC   Web: http://www DOT vgersoft DOT com
39 Smugglers Path  VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   186,282 miles per second: it's not just a good idea, it's the law

