Re: [Bacula-users] PKI decryption problem: missing private key

2012-06-30 Thread Hugo Letemplier
I think that you need to use the master.pem or the fd-example.pem that is on A
if you want to restore on B.
Don't forget to restart the File Daemon on B after the config modifications.
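
For reference, a minimal sketch of the relevant bacula-fd.conf section on B (file names follow this thread; note that the PKI Keypair file must contain a private key, because master.cert alone holds only the public master key):

FileDaemon {
  ...
  PKI Signatures = Yes
  PKI Encryption = Yes
  # to restore with the master key instead, point this at master.pem,
  # which contains the master certificate plus its private key
  PKI Keypair = /etc/bacula/fd-example.pem
  PKI Master Key = /etc/bacula/master.cert
}

Then restart bacula-fd on B so the keypair is reloaded.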

Hugo


On 21 June 2012, at 22:43, Ricky Tong wrote:

 Hi,
 
 We are trying to restore a backup of a client (let's call it A) to a
 new machine (let's call it B). B is identical to A: both run
 CentOS 6.2 and the same version of bacula-fd.
 
 The public and private keys of A are stored in
 /etc/bacula/fd-example.pem and /etc/bacula/master.cert, and these keys
 are also copied to B at the same paths.
 
 In the bacula-fd.conf on both A and B, we use the following:
 
 PKI Signatures = Yes
 PKI Encryption = Yes
 PKI Keypair = /etc/bacula/fd-example.pem
 PKI Master Key = /etc/bacula/master.cert
 
 OK, here is the problem:
 - When we try to restore the backup from A to /tmp/bacula-restore on
 A, it's fine.
 - However, when we try to restore the backup from A to B, bconsole
 repeatedly prints:
 
 21-Jun 00:52 tvb_restore_test-fd JobId 397: Error: Missing private key
 required to decrypt encrypted backup data.
 21-Jun 00:52 tvb_restore_test-fd JobId 397: Error: Missing private key
 required to decrypt encrypted backup data.
 ...
 
 The keys on A are the same as the keys on B (I used md5sum to
 verify that they are identical), and the permissions are the same too.
 
 Please advise. Any suggestions or help will be appreciated.
 
 Thank you!
 
 Ricky
 




Re: [Bacula-users] Migrate/copy from one pool to different pools based on schedule

2012-06-28 Thread Hugo Letemplier
2012/6/28 Mario Moder li...@demaio.de:
 Hi Bacula Community.

 Simple setup: We have one file-pool which is used on a weekly
 schedule (Saturdays). We have another tape-pool and have configured a
 copy job, scheduled every Monday, which copies all uncopied jobs from
 the file-pool to the tape-pool. This works, no problems so far.

 Now I want to change the copy schedule so that it copies the
 file-pool jobs to a monthly-tape-pool every 1st Monday and copies
 the jobs from the file-pool to a weekly-tape-pool every 2nd to 5th
 Monday (for some kind of father-son tape rotation).

 As far as I understand from the docs, there may only be one Next Pool
 directive in the file-pool. For the above to work I would somehow need
 to override the Next Pool from within the schedule configuration (use
 Next Pool = monthly-tape-pool on the 1st Monday and Next Pool =
 weekly-tape-pool on the remaining Mondays). Is this possible?

 Greetings

 Mario





Hi

You should use a dummy pool to run your copy job, and set this dummy
pool's Next Pool to the destination of your copy job.
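
A minimal sketch of that idea (resource names are invented for illustration; note that the jobs to copy still have to be selected somehow, e.g. by JobId, since the dummy pool itself contains no jobs):

Pool {
  Name = copy-to-monthly-pool      # dummy pool, holds no volumes
  Pool Type = Backup
  Storage = file-stor
  Next Pool = tape-monthly-pool    # the real destination of the copy
}

Job {
  Name = copy-monthly
  Type = Copy
  Pool = copy-to-monthly-pool      # Next Pool is taken from this pool
  ...
}

One dummy pool per destination gives you, in effect, a per-schedule Next Pool.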

I hope that makes sense.

Hugo



Re: [Bacula-users] Migrate/copy from one pool to different pools based on schedule

2012-06-28 Thread Hugo Letemplier
2012/6/28 Mario Moder li...@demaio.de:
 Am 28.06.2012 13:31, schrieb Hugo Letemplier:
 2012/6/28 Mario Moder li...@demaio.de:
 Hi Bacula Community.

  Simple setup: We have one file-pool which is used on a weekly
  schedule (Saturdays). We have another tape-pool and have configured a
  copy job, scheduled every Monday, which copies all uncopied jobs from
  the file-pool to the tape-pool. This works, no problems so far.

  Now I want to change the copy schedule so that it copies the
  file-pool jobs to a monthly-tape-pool every 1st Monday and copies
  the jobs from the file-pool to a weekly-tape-pool every 2nd to 5th
  Monday (for some kind of father-son tape rotation).

  As far as I understand from the docs, there may only be one Next Pool
  directive in the file-pool. For the above to work I would somehow need
  to override the Next Pool from within the schedule configuration (use
  Next Pool = monthly-tape-pool on the 1st Monday and Next Pool =
  weekly-tape-pool on the remaining Mondays). Is this possible?



  You should use a dummy pool to run your copy job, and set this dummy
  pool's Next Pool to the destination of your copy job.

  I hope that makes sense.


 Thanks for your answer, Hugo.

 How does the copy job know, then, _from_ which pool it should copy,
 when the dummy pool has no volumes?

 I should have included my config for clarity, here it is:

 # bacula-dir.conf
 # File pool
 Pool {
   Name = File
   Pool Type = Backup
   Recycle = yes                  # Bacula can automatically recycle Volumes
   AutoPrune = yes                # Prune expired volumes
   Volume Retention = 30 days
   Maximum Volume Bytes = 100G    # Limit Volume size to something reasonable
   Maximum Volumes = 10           # Limit number of Volumes in Pool
   Action On Purge = Truncate
   Next Pool = tape-weekly-pool
   Storage = file-stor
 }

 # tape weekly pool
 Pool {
   Name = tape-weekly-pool
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 25 days
   Volume Use Duration = 3 days
   Recycle Current Volume = yes
   LabelFormat = tape-weekly-
   Maximum Volumes = 4
   Storage = tape-stor
 }

 # tape monthly pool
 Pool {
   Name = tape-monthly-pool
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 5 months
   Volume Use Duration = 3 days
   Recycle Current Volume = yes
   LabelFormat = tape-monthly-
   Maximum Volumes = 6
 }

 Job {
   Name = copy-job
   Type = Copy
   Level = Full
   Client = bacula-fd
   FileSet = Full Set
   Messages = Standard
 #  Schedule = copy-schedule
   Pool = File
   Selection Type = PoolUncopiedJobs
 }



 The fictional schedule (it doesn't work this way, but it should be
 easier to understand) should look like this:

 Schedule {
   Name = copy-schedule
 #  Run = Level=Full Pool=File NextPool=tape-monthly-pool 1st mon at 10:00
 #  Run = Level=Full Pool=File NextPool=tape-weekly-pool 2nd-5th mon at 10:00
 }

 I don't quite understand what the dummy pool you suggested should look
 like; it would be nice if you could elaborate on that.

 Thanks for your answer

 Mario

Ah, OK. I use a selection on the PostgreSQL database to get the JobId
list for my copy job.
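
For reference, the built-in way to hand Bacula such a JobId list is the SQLQuery selection type; a minimal sketch (the query and names are illustrative, not Hugo's actual script):

Job {
  Name = copy-selected
  Type = Copy
  Pool = File
  Selection Type = SQLQuery
  Selection Pattern = "SELECT DISTINCT job.jobid FROM job, pool WHERE pool.name = 'File' AND job.poolid = pool.poolid AND job.type = 'B' AND job.jobstatus = 'T'"
  ...
}

An external script can achieve much the same by feeding run job=... jobid=... commands to bconsole, which is what Hugo describes.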

Hugo



Re: [Bacula-users] Create two identical tapes, with one tape drive.

2012-06-23 Thread Hugo Letemplier

On 15 June 2012, at 00:36, Yougo wrote:

 Hi
 
 A second request to the list this evening. ;-)
 
 I have an external script that selects JobIds in my DB and marks them
 for copying to tape via a copy job; once the selection is done, it sends
 them sequentially to bconsole.
 This creates an external tape archive of the enterprise's main data.
 I am supposed to keep the file records from these tapes as long as possible,
 at least for some particular jobs.
 
 Also, to be fully redundant, I want to create a second copy of this archive:
 the first one would be stored locally in order to recover particular
 files; the second one is kept off-site in case the first one is destroyed
 in a disaster. I have only one LTO3 drive, so I can't copy directly from one
 tape to another.
 
 To keep the catalog as light as possible, I don't want to create a second copy
 of the file records inside Bacula, but I want to be able to restore from one
 tape as well as from the other.
 This last condition implies that both tapes should have the same label.
 
 What methods could achieve this?
 
 I have considered using dd and bcopy… but I can't say which one is more
 appropriate and safe.
 
 Also, I tried writing to a kind of spool file device and then using dd to
 copy it to my two tapes. I saw that bcopy only writes to a volume that has
 a different name; has anyone succeeded in cloning a file volume onto a tape?
 
 Thanks for your answers
 
 Hugo
 
 


Hello?

Has no one done this before?

I have tried bcopy, but the problem is that I would have to use it from one
tape to another with a different label. So I lose the ability to restore
easily from that tape, because I don't have the file/job records associated
with that tape in the database.

Hugo


Re: [Bacula-users] Checking Storage Daemon file system free space before mounting/creating a volume and before running a job

2012-06-15 Thread Hugo Letemplier
2012/6/15 Uwe Schuerkamp uwe.schuerk...@nionex.net:

 On Thu, Jun 14, 2012 at 11:25:03PM +0200, Yougo wrote:

 Is it possible to check a free-space ratio on the storage daemon and
  fail a job if required, instead of also failing the volumes, waiting
  for a mount… and thus avoid a lot of successive errors that could
  confuse the operator? Otherwise, have you got a solution to avoid my
  issue? I will have more space in a few weeks, but my backups are also
  getting much larger due to enterprise expansion.

 For the moment I think your best bet would be a RunBeforeJob script
 that checks the free space on the filesystem, retrieves the size of
 the last full backup for the client being backed up and then does some
 comparisons based on these parameters.
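
 A minimal sketch of such a RunBeforeJob check (the idea of failing early is from this thread; the script path and names are hypothetical):

 Job {
   Name = client-backup
   ...
   RunScript {
     RunsWhen = Before
     RunsOnClient = no          # run the check on the server side
     FailJobOnError = yes       # a non-zero exit fails the job up front
     Command = "/etc/bacula/scripts/check-free-space.sh"   # hypothetical helper
   }
 }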

 If the question is allowed: why do you have so many pools defined? I
 usually only have my incremental and full pools, but of course once the
 PHBs start bugging you with different backup classes and
 strategies, that's the first thing that usually goes out of the
 window.

 All the best, Uwe


I have different classes of jobs because I have about 4 different
kinds of backup.
I split data from the system backup ( I need b, so we had 6 pools also
I have an old mail system based on big database files that needs to be
kept for a shorter time.



Re: [Bacula-users] Checking Storage Daemon file system free space before mounting/creating a volume and before running a job

2012-06-15 Thread Hugo Letemplier
2012/6/15 Hugo Letemplier hugo.let...@gmail.com:
 2012/6/15 Uwe Schuerkamp uwe.schuerk...@nionex.net:

 On Thu, Jun 14, 2012 at 11:25:03PM +0200, Yougo wrote:

 Is it possible to check a free-space ratio on the storage daemon and
  fail a job if required, instead of also failing the volumes, waiting
  for a mount… and thus avoid a lot of successive errors that could
  confuse the operator? Otherwise, have you got a solution to avoid my
  issue? I will have more space in a few weeks, but my backups are also
  getting much larger due to enterprise expansion.

 For the moment I think your best bet would be a RunBeforeJob script
 that checks the free space on the filesystem, retrieves the size of
 the last full backup for the client being backed up and then does some
 comparisons based on these parameters.

 If the question is allowed: why do you have so many pools defined? I
 usually only have my incremental and full pools, but of course once the
 PHBs start bugging you with different backup classes and
 strategies, that's the first thing that usually goes out of the
 window.

 All the best, Uwe





 I have different classes of jobs because I have about 4 different
 kinds of backup.
 I split data from the system backup ( I need b, so we had 6 pools also
 I have an old mail system based on big database files that needs to be
 kept for a shorter time.

Sorry, my mail was sent before I finished writing :-(

Also, I have different kinds of content that should be kept for a
very long time.
Finally, I have an Archive pool that stores job copies on LTO tapes.



Re: [Bacula-users] Unable to restore some encrypted Windows 2003 backups with master.pem

2012-05-14 Thread Hugo Letemplier
 OK, that suggests to me that the bacula-fd.conf doesn't load master.cert for
 some reason.

 Check the command line arguments of the service to find the bacula-fd.conf.
 You could try renaming that bacula-fd.conf and then restart the service (I
 would expect it to fail to start in that case).

 Having identified the bacula-fd.conf that the service is using, compare it to
 the working ones.

Hello,

I rewrote the full FD config file by hand, and now it works:
it detects when master.cert is missing. (The old file probably
contained a bad character.)
I ran a job that I then successfully restored via master.pem.

I consider the problem solved. :-)

Thank you !

Hugo



[Bacula-users] Restoring partial catalog backup after unexpected pruning

2012-05-10 Thread Hugo Letemplier
Hello,

A few days ago I saw some bad behaviour from Bacula. It pruned lots of
backups from my archives, ONLY for one client, in one pool, before a
specified date!

I couldn't re-read the tapes, because there were a lot of them.
I couldn't restore the full catalog dump, because lots of jobs had run
since this unexpected behaviour, and if I restored it I would lose
those jobs.

My PostgreSQL SELECT was on jobs and job files for one pool and one
specific client.

Can you confirm that I did the right thing to reimport these jobs and
files (a sanity-check query sketch follows below):

- The tables that I restored were: job, log, file, jobmedia.

- The tables filename and path seem to be OK? Is it normal behaviour
that filenames and paths were not pruned like the files?

- Is there a possibility that this catalog import was wrong? If so,
which points should I check?
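
A minimal sanity check for such an import, in the spirit of the catalog queries used elsewhere in these threads (the client name is a placeholder), is to compare the imported file count of each job against the job's recorded jobfiles:

SELECT job.jobid, job.name, job.starttime, job.jobfiles,
       count(file.fileid) AS imported_files
FROM job LEFT JOIN file ON file.jobid = job.jobid
WHERE job.clientid = (SELECT clientid FROM client WHERE name = 'my-client-fd')
GROUP BY job.jobid, job.name, job.starttime, job.jobfiles
ORDER BY job.starttime;

If imported_files matches job.jobfiles for every restored job, the file records came back consistently.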


Also, what could be the origin of the unexpected behaviour? It happened
during a restore job for the client that was pruned.

Have you got an idea ?

Thank you



Re: [Bacula-users] Unable to restore some encrypted Windows 2003 backups with master.pem

2012-05-04 Thread Hugo Letemplier
2012/4/25 Martin Simmons mar...@lispworks.com:
 On Wed, 25 Apr 2012 12:05:59 +0200, Hugo Letemplier said:

 2012/4/16 Martin Simmons mar...@lispworks.com:
  On Sat, 14 Apr 2012 13:53:37 +0200, Hugo Letemplier said:
 
  2012/4/11 Martin Simmons mar...@lispworks.com:
   On Wed, 4 Apr 2012 16:59:58 +0200, Hugo Letemplier said:
  
   Hello, I have tested encryption/decryption on many Bacula backups, but
   one job is tricky.
  
   I have Linux, Mac OS X and Windows 2003 servers.
   I have master.cert and one fd.pem for encryption on each client;
   fd.pem is specific to each client.
   master.cert is on every client and allows decryption with the secret
   master.pem in case we lose the client-specific backup key.
  
   My Bacula server is unable to restore 1 of my three Windows servers
   using the master.pem keypair.
  
   Saying "unable to restore" is too vague -- what is the error message?
  
 
  I meant that the master encryption/decryption doesn't work,
  although the client-specific encryption/decryption works.
  It just says:
 
  Error: Missing private key required to decrypt encrypted backup data.
 
  OK.
 
 
   Which one fails to restore?
  
   Is it definitely using the correct bacula-fd.conf?  E.g. try temporarily
   deleting the master.pem file and see if the bacula-fd fails to start.
 
  The file daemon with master.pem decrypts every other backup fine
  (Linux, Mac, Windows), so the problem can't come from the restore FD; it
  must come from the backup FD when it loads the master.cert that contains
  the master public key.
 
  That points to a problem on the Windows machine's file daemon.  E.g. try
  temporarily deleting the master.pem file from the Windows client and verify
  that you get an error when you restart its bacula-fd.
 
  __Martin
 

 Did you mean the master.cert file, instead of master.pem?

 Oops yes, thanks for the correction.

 __Martin

Hello

Indeed, if I rename the file, the Bacula service starts without any
warning, and if I do a "status client=MyWindowsFD" in bconsole
everything seems to be fine.

On the other Windows server I tried the same thing and the service refused
to start. I simply don't understand.

What should I do?

Thanks

Hugo



Re: [Bacula-users] Running a fake incremental that can be considered as Full ( Database Dumps )

2012-04-25 Thread Hugo Letemplier
2012/4/20 Martin Simmons mar...@lispworks.com:
 On Thu, 19 Apr 2012 11:02:33 +0200, Hugo Letemplier said:

 2012/4/18 Martin Simmons mar...@lispworks.com:
  On Wed, 18 Apr 2012 15:41:02 +0200, Hugo Letemplier said:
 
  2012/4/18 Hugo Letemplier hugo.let...@gmail.com:
   2012/4/16 Christian Manal moen...@informatik.uni-bremen.de:
   On 16.04.2012 12:09, Hugo Letemplier wrote:
   Hello

   I use Bacula 5.0.3

   On a few Linux servers I have DB dumps that run every night at a
   specified time.
   For synchronisation reasons between databases, these backups are run via
   crontab and not directly from Bacula.

   I need Bacula to save these database dumps every morning:
   - The filesystem is a read-only LVM snapshot of a virtual machine (the
   backup runs on the physical host, not on the virtual machine).
   - The snapshot is generated and mounted in a Run Before Job script.

   The rotation scheme that deletes old dumps on the backed-up server is
   not the same as the one configured on the Bacula servers.

   I need Bacula to:
   - Run a full
   - Save only the dumps that haven't already been backed up.

   I must have a full:
   - If I do incrementals, I will need to keep the full, and this is not
   what I want; if the full is deleted, a new one will be created.
   - Moreover, a DB dump has no dependency on previous dumps.

   I can't select only the dump of the day:
   - If the Bacula job doesn't work one day, the next one must back up the
   missed DB dumps that were not backed up during the failed job.

   I can't use a predefined list of files in the fileset, because the
   estimate seems to be done before the Run Before Job script that
   generates the snapshot, so it doesn't validate the include path.
   File = \\|bash -c \find …… won't work because it is run before my
   snapshot creation.

   I think that leaves the Options sections of the fileset, but I didn't
   find anything that fits.

   In fact I want to run a full that saves only the files new since the
   last successful backup, without using the incremental method, because
   that would generate a full that will be deleted, so I would have a
   useless FULL - INC dependency.

   Have you got an idea?

   Thanks
  
   Hi,
  
   if I understand you right, you want Bacula's virtual backup. You can 
   run
   your usual Full and Incremental jobs and then consolidate them into a
   new Full backup. See
  
   http://bacula.org/5.2.x-manuals/en/main/main/New_Features_in_3_0_0.html#SECTION00137
  
  
   Regards,
   Christian Manal
  
  
  
   I don't think that Virtual Backup can help me.
   Moreover, I can't put a virtual backup in the same pool as the
   previous full, so it

   My first idea was to move/rename the files that have been backed up
   after each job so they won't match a regexp option in the fileset
   => This is not possible because of the read-only FS

   I think that I am going to store a file list from the previous job on
   the director.
   Then I will read this list as an exclude list in the next job.
   I think that it's fine because every dump file has a unique file name
   with a timestamp.
 
  My Options regexp works to select the right files.
  Among this selection, I exclude all files that are in a list, with:
  Exclude {
          File = \\/var/lib/bacula/lastjob.list
  }
  This list has been generated at the end of the previous job and
  contains a listing of the already-saved files.

  I have run the file daemon with -d 200 -v -f.
  It says that the exclude file list has been imported, but the file
  excluding doesn't work.

  Have you got an idea?
 
  The format of /var/lib/bacula/lastjob.list might be wrong -- can you give
  an example of its contents?

  Also, is that file on the client machine (as you've used the \\ syntax)?
  If the file is on the Director machine, then remove the \\.
 
  __Martin
 

 Hello

 Here is the content of one line from the file; every line is similar
 to the following:
 /var/lib/bacula/lvm_mount/vm-205-disk-2/Backup/dev4/pgsql_backup/pg_dump_schema.dev4.120417.2100.sql.gz

 FileSet {
     Name = MyFileset
     Exclude {
         File = \\/var/lib/bacula/MyfileList.last
     }
     Include {
         Options {
             compression = GZIP5
             Signature = MD5
             ACL Support = yes
             xattrsupport = yes
             strippath = 5
             onefs = yes
             regexfile = ^.*/Backup/dev.*/pgsql_backup/pg_dump_schema.*\.sql\.gz

Re: [Bacula-users] Unable to restore some encrypted Windows 2003 backups with master.pem

2012-04-25 Thread Hugo Letemplier
2012/4/16 Martin Simmons mar...@lispworks.com:
 On Sat, 14 Apr 2012 13:53:37 +0200, Hugo Letemplier said:

 2012/4/11 Martin Simmons mar...@lispworks.com:
  On Wed, 4 Apr 2012 16:59:58 +0200, Hugo Letemplier said:
 
  Hello, I have tested encryption/decryption on many Bacula backups, but
  one job is tricky.
 
  I have Linux, Mac OS X and Windows 2003 servers.
  I have master.cert and one fd.pem for encryption on each client;
  fd.pem is specific to each client.
  master.cert is on every client and allows decryption with the secret
  master.pem in case we lose the client-specific backup key.
 
  My Bacula server is unable to restore 1 of my three Windows servers
  using the master.pem keypair.
 
  Saying "unable to restore" is too vague -- what is the error message?
 

 I meant that the master encryption/decryption doesn't work,
 although the client-specific encryption/decryption works.
 It just says:

 Error: Missing private key required to decrypt encrypted backup data.

 OK.


  Which one fails to restore?
 
  Is it definitely using the correct bacula-fd.conf?  E.g. try temporarily
  deleting the master.pem file and see if the bacula-fd fails to start.

 The file daemon with master.pem decrypts every other backup fine
 (Linux, Mac, Windows), so the problem can't come from the restore FD; it
 must come from the backup FD when it loads the master.cert that contains
 the master public key.

 That points to a problem on the Windows machine's file daemon.  E.g. try
 temporarily deleting the master.pem file from the Windows client and verify
 that you get an error when you restart its bacula-fd.

 __Martin


Did you mean the master.cert file, instead of master.pem?



Re: [Bacula-users] Tandberg LTO-3 only writing ~200GB

2012-04-25 Thread Hugo Letemplier
2012/4/23 John Drescher dresche...@gmail.com:
 When you recycle tapes they get relabelled. At that point the larger block
 size will be used automatically.

 It works for me.

 Thanks. I will try to test this soon provided I get some time.

 John



I had some problems with my LTO3 device and tapes.
Sometimes tapes were marked as FULL although I was only at the
beginning or the middle of the tape.
First, my LTO drive was on an iSCSI share with a poor iSCSI
implementation (the drive was shared between 2 machines).
Also, I had an I/O problem that made my local hard drive slower than my
LTO tape, so my tape drive was often stopping and waiting for data.
Once I solved these 2 problems, everything worked fine.

I erased my tapes with an "mt -f /dev/nst0 erase" after having
stopped bacula-sd and freed all iSCSI layers.
Hugo



Re: [Bacula-users] Running a fake incremental that can be considered as Full ( Database Dumps )

2012-04-19 Thread Hugo Letemplier
2012/4/18 Martin Simmons mar...@lispworks.com:
 On Wed, 18 Apr 2012 15:41:02 +0200, Hugo Letemplier said:

 2012/4/18 Hugo Letemplier hugo.let...@gmail.com:
  2012/4/16 Christian Manal moen...@informatik.uni-bremen.de:
  On 16.04.2012 12:09, Hugo Letemplier wrote:
  Hello

  I use Bacula 5.0.3

  On a few Linux servers I have DB dumps that run every night at a
  specified time.
  For synchronisation reasons between databases, these backups are run via
  crontab and not directly from Bacula.

  I need Bacula to save these database dumps every morning:
  - The filesystem is a read-only LVM snapshot of a virtual machine (the
  backup runs on the physical host, not on the virtual machine).
  - The snapshot is generated and mounted in a Run Before Job script.

  The rotation scheme that deletes old dumps on the backed-up server is
  not the same as the one configured on the Bacula servers.

  I need Bacula to:
  - Run a full
  - Save only the dumps that haven't already been backed up.

  I must have a full:
  - If I do incrementals, I will need to keep the full, and this is not
  what I want; if the full is deleted, a new one will be created.
  - Moreover, a DB dump has no dependency on previous dumps.

  I can't select only the dump of the day:
  - If the Bacula job doesn't work one day, the next one must back up the
  missed DB dumps that were not backed up during the failed job.

  I can't use a predefined list of files in the fileset, because the
  estimate seems to be done before the Run Before Job script that
  generates the snapshot, so it doesn't validate the include path.
  File = \\|bash -c \find …… won't work because it is run before my
  snapshot creation.

  I think that leaves the Options sections of the fileset, but I didn't
  find anything that fits.

  In fact I want to run a full that saves only the files new since the
  last successful backup, without using the incremental method, because
  that would generate a full that will be deleted, so I would have a
  useless FULL - INC dependency.

  Have you got an idea?

  Thanks
 
  Hi,
 
  if I understand you right, you want Bacula's virtual backup. You can run
  your usual Full and Incremental jobs and then consolidate them into a
  new Full backup. See
 
  http://bacula.org/5.2.x-manuals/en/main/main/New_Features_in_3_0_0.html#SECTION00137
 
 
  Regards,
  Christian Manal
 
 
 
  I don't think that Virtual Backup can help me.
  Moreover, I can't put a virtual backup in the same pool as the
  previous full, so it

  My first idea was to move/rename the files that have been backed up
  after each job so they won't match a regexp option in the fileset
  => This is not possible because of the read-only FS

  I think that I am going to store a file list from the previous job on
  the director.
  Then I will read this list as an exclude list in the next job.
  I think that it's fine because every dump file has a unique file name
  with a timestamp.

 My Options regexp works to select the right files.
 Among this selection, I exclude all files that are in a list, with:
 Exclude {
         File = \\/var/lib/bacula/lastjob.list
 }
 This list has been generated at the end of the previous job and
 contains a listing of the already-saved files.

 I have run the file daemon with -d 200 -v -f.
 It says that the exclude file list has been imported, but the file
 excluding doesn't work.

 Have you got an idea?

 The format of /var/lib/bacula/lastjob.list might be wrong -- can you give
 an example of its contents?

 Also, is that file on the client machine (as you've used the \\ syntax)?
 If the file is on the Director machine, then remove the \\.

 __Martin


Hello

Here is the content of one line from the file; every line is similar
to the following:
/var/lib/bacula/lvm_mount/vm-205-disk-2/Backup/dev4/pgsql_backup/pg_dump_schema.dev4.120417.2100.sql.gz

FileSet {
    Name = MyFileset
    Exclude {
        File = \\/var/lib/bacula/MyfileList.last
    }
    Include {
        Options {
            compression = GZIP5
            Signature = MD5
            ACL Support = yes
            xattrsupport = yes
            strippath = 5
            onefs = yes
            regexfile = ^.*/Backup/dev.*/pgsql_backup/pg_dump_schema.*\.sql\.gz$
        }
        Options {
            regexfile = .*
            exclude = yes
        }
        File = /var/lib/bacula/lvm_mount/vm-205-disk-2/Backup/
    }
}

The file is on the client machine

Thanks

Re: [Bacula-users] Running a fake incremental that can be considered as Full ( Database Dumps )

2012-04-18 Thread Hugo Letemplier
2012/4/16 Christian Manal moen...@informatik.uni-bremen.de:
 On 16.04.2012 12:09, Hugo Letemplier wrote:
 Hello

 I use Bacula 5.0.3

 On a few Linux servers I have DB dumps that run every night at a
 specified time.
 For synchronisation reasons between databases, these backups are run via
 crontab and not directly from Bacula.

 I need Bacula to save these database dumps every morning:
 - The filesystem is a read-only LVM snapshot of a virtual machine (the
 backup runs on the physical host, not on the virtual machine).
 - The snapshot is generated and mounted in a Run Before Job script.

 The rotation scheme that deletes old dumps on the backed-up server is
 not the same as the one configured on the Bacula servers.

 I need Bacula to:
 - Run a full
 - Save only the dumps that haven't already been backed up.

 I must have a full:
 - If I do incrementals, I will need to keep the full, and this is not
 what I want; if the full is deleted, a new one will be created.
 - Moreover, a DB dump has no dependency on previous dumps.

 I can't select only the dump of the day:
 - If the Bacula job doesn't work one day, the next one must back up the
 missed DB dumps that were not backed up during the failed job.

 I can't use a predefined list of files in the fileset, because the
 estimate seems to be done before the Run Before Job script that
 generates the snapshot, so it doesn't validate the include path.
 File = \\|bash -c \find …… won't work because it is run before my
 snapshot creation.

 I think that leaves the Options sections of the fileset, but I didn't
 find anything that fits.

 In fact I want to run a full that saves only the files new since the
 last successful backup, without using the incremental method, because
 that would generate a full that will be deleted, so I would have a
 useless FULL - INC dependency.

 Have you got an idea?

 Thanks

 Hi,

 if I understand you right, you want Bacula's virtual backup. You can run
 your usual Full and Incremental jobs and then consolidate them into a
 new Full backup. See

 http://bacula.org/5.2.x-manuals/en/main/main/New_Features_in_3_0_0.html#SECTION00137


 Regards,
 Christian Manal



I don't think that Virtual Backup can help me.
Moreover, I can't put a virtual backup in the same pool as the
previous full, so it

My first idea was to move/rename the files that have been backed up
after each job so they won't match a regexp option in the fileset
=> This is not possible because of the read-only FS

I think that I am going to store a file list from the previous job on the
director. Then I will read this list as an exclude list in the next job
(see the sketch below). I think that it's fine because every dump file
has a unique file name with a timestamp.
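
A sketch of the moving parts (the script name is hypothetical; %i is Bacula's JobId substitution character):

Job {
  Name = db-dump-backup
  FileSet = MyFileset
  ...
  # after each job, record the list of files it saved, for the next run's Exclude
  RunScript {
    RunsWhen = After
    RunsOnClient = no
    Command = "/etc/bacula/scripts/save-filelist.sh %i"
  }
}

The script could, for example, pipe a "list files jobid=<jobid>" command through bconsole and ship the result to the client as /var/lib/bacula/lastjob.list.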



Re: [Bacula-users] Running a fake incremental that can be considered as Full ( Database Dumps )

2012-04-18 Thread Hugo Letemplier
2012/4/18 Hugo Letemplier hugo.let...@gmail.com:
 2012/4/16 Christian Manal moen...@informatik.uni-bremen.de:
 On 16.04.2012 12:09, Hugo Letemplier wrote:
 Hello

 I use Bacula 5.0.3

 On a few Linux servers I have DB dumps that run every night at a
 specified time.
 For synchronisation reasons between databases, these backups are run via
 crontab and not directly from Bacula.

 I need Bacula to save these database dumps every morning:
 - The filesystem is a read-only LVM snapshot of a virtual machine (the
 backup runs on the physical host, not on the virtual machine).
 - The snapshot is generated and mounted in a Run Before Job script.

 The rotation scheme that deletes old dumps on the backed-up server is
 not the same as the one configured on the Bacula servers.

 I need Bacula to:
 - Run a full
 - Save only the dumps that haven't already been backed up.

 I must have a full:
 - If I do incrementals, I will need to keep the full, and this is not
 what I want; if the full is deleted, a new one will be created.
 - Moreover, a DB dump has no dependency on previous dumps.

 I can't select only the dump of the day:
 - If the Bacula job doesn't work one day, the next one must back up the
 missed DB dumps that were not backed up during the failed job.

 I can't use a predefined list of files in the fileset, because the
 estimate seems to be done before the Run Before Job script that
 generates the snapshot, so it doesn't validate the include path.
 File = \\|bash -c \find …… won't work because it is run before my
 snapshot creation.

 I think that leaves the Options sections of the fileset, but I didn't
 find anything that fits.

 In fact I want to run a full that saves only the files new since the
 last successful backup, without using the incremental method, because
 that would generate a full that will be deleted, so I would have a
 useless FULL - INC dependency.

 Have you got an idea?

 Thanks

 Hi,

 if I understand you right, you want Bacula's virtual backup. You can run
 your usual Full and Incremental jobs and then consolidate them into a
 new Full backup. See

 http://bacula.org/5.2.x-manuals/en/main/main/New_Features_in_3_0_0.html#SECTION00137


 Regards,
 Christian Manal



 I don't think that Virtual Backup can help me.
 Moreover, I can't put a virtual backup in the same pool as the
 previous full, so it

 My first idea was to move/rename the files that have been backed up
 after each job so they won't match a regexp option in the fileset
 => This is not possible because of the read-only FS

 I think that I am going to store a file list from the previous job on
 the director.
 Then I will read this list as an exclude list in the next job.
 I think that it's fine because every dump file has a unique file name
 with a timestamp.

My Options regexp works to select the right files.
Among this selection, I exclude all files that are in a list, with:
Exclude {
        File = \\/var/lib/bacula/lastjob.list
}
This list has been generated at the end of the previous job and
contains a listing of the already-saved files.

I have run the file daemon with -d 200 -v -f.
It says that the exclude file list has been imported, but the file
excluding doesn't work.

Have you got an idea?

Thanks



[Bacula-users] Running a fake incremental that can be considered as Full ( Database Dumps )

2012-04-16 Thread Hugo Letemplier
Hello

I use Bacula 5.0.3

On a few Linux servers I have DB dumps that run every night at a
specified time.
For synchronisation reasons between databases, these backups are run via
crontab and not directly from Bacula.

I need Bacula to save these database dumps every morning:
- The filesystem is a read-only LVM snapshot of a virtual machine (the
backup runs on the physical host, not on the virtual machine).
- The snapshot is generated and mounted in a Run Before Job script.

The rotation scheme that deletes old dumps on the backed-up server is
not the same as the one configured on the Bacula servers.

I need Bacula to:
- Run a full
- Save only the dumps that haven't already been backed up.

I must have a full:
- If I do incrementals, I will need to keep the full, and this is not
what I want; if the full is deleted, a new one will be created.
- Moreover, a DB dump has no dependency on previous dumps.

I can't select only the dump of the day:
- If the Bacula job doesn't work one day, the next one must back up the
missed DB dumps that were not backed up during the failed job.

I can't use a predefined list of files in the fileset, because the
estimate seems to be done before the Run Before Job script that
generates the snapshot, so it doesn't validate the include path.
File = \\|bash -c \find …… won't work because it is run before my
snapshot creation.

I think that leaves the Options sections of the fileset, but I didn't
find anything that fits.

In fact I want to run a full that saves only the files new since the
last successful backup, without using the incremental method, because
that would generate a full that will be deleted, so I would have a
useless FULL - INC dependency.

Have you got an idea?

Thanks



Re: [Bacula-users] Unable to restore some encrypted Windows 2003 backups with master.pem

2012-04-14 Thread Hugo Letemplier
2012/4/11 Martin Simmons mar...@lispworks.com:
 On Wed, 4 Apr 2012 16:59:58 +0200, Hugo Letemplier said:

 Hello, I have tested encryption/decryption on many Bacula backups, but
 one job is tricky.

 I have Linux, Mac OS X and Windows 2003 servers.
 I have master.cert and one fd.pem for encryption on each client;
 fd.pem is specific to each client.
 master.cert is on every client and allows decryption with the secret
 master.pem in case we lose the client-specific backup key.

 My Bacula server is unable to restore 1 of my three Windows servers
 using the master.pem keypair.

 Saying "unable to restore" is too vague -- what is the error message?


I meant that the master encryption/decryption doesn't work,
although the client-specific encryption/decryption works.
It just says:

Error: Missing private key required to decrypt encrypted backup data.


 With Bacula, I used an SQL query to check all the master.cert files:

 SELECT DISTINCT
   path.path,
   file.md5,
   job.starttime,
   client.name
 FROM
     public.client,
     public.file,
     public.filename,
     public.path,
     public.job
 WHERE
     client.clientid = job.clientid AND
     file.jobid = job.jobid AND
     file.filenameid = filename.filenameid AND
     file.pathid = path.pathid AND
     filename.name = 'master.cert'
 ORDER BY file.md5,client.name,path.path,job.starttime

 The result shows me that the MD5 hashes differ between operating systems,
 e.g. one hash on all OS X servers, one hash on all Linux servers.

 But on Windows the MD5 hashes are always different, whatever the machine!

 That is probably OK.  The backup on Windows will include various other data
 about the file which could vary between machines (assuming you didn't set
 portable=yes in the fileset).


OK, so file attributes may be included in the MD5 hash.


 2 of my three Windows machines use the same Bacula 5.0.3 binaries
 downloaded from the Bacula repo.

 Where did the third binary come from?

Hmm, actually that was wrong: in fact all 3 installs of Bacula for
Windows were from the same package.


 Which one fails to restore?

 Is it definitely using the correct bacula-fd.conf?  E.g. try temporarily
 deleting the master.pem file and see if the bacula-fd fails to start.

The file daemon with master.pem decrypts every other backup fine
(Linux, Mac, Windows), so the problem can't come from the restore FD; it
must come from the backup FD when it loads the master.cert that contains
the master public key.


Thanks for your help

Hugo



[Bacula-users] Unable to restore some encrypted Windows 2003 backups with master.pem

2012-04-04 Thread Hugo Letemplier
Hello, I have tested encryption/decryption on many Bacula backups, but
one job is tricky.

I have Linux, Mac OS X and Windows 2003 servers.
I have master.cert and one fd.pem for encryption on each client;
fd.pem is specific to each client.
master.cert is on every client and allows decryption with the secret
master.pem in case we lose the client-specific backup key.

My Bacula server is unable to restore 1 of my three Windows servers
using the master.pem keypair.

With Bacula, I used an SQL query to check all the master.cert files:

SELECT DISTINCT
    path.path,
    file.md5,
    job.starttime,
    client.name
FROM
    public.client,
    public.file,
    public.filename,
    public.path,
    public.job
WHERE
    client.clientid = job.clientid AND
    file.jobid = job.jobid AND
    file.filenameid = filename.filenameid AND
    file.pathid = path.pathid AND
    filename.name = 'master.cert'
ORDER BY file.md5, client.name, path.path, job.starttime

The result shows me that the MD5 hashes differ between operating systems,
e.g. one hash on all OS X servers, one hash on all Linux servers.

But on Windows the MD5 hashes are always different, whatever the machine!
2 of my three Windows machines use the same Bacula 5.0.3 binaries
downloaded from the Bacula repo.

All the master.cert files are ASCII files with the same content.
All the master.cert files on Windows use CRLF line endings.
All the master.cert files on Linux/Mac use LF line endings.

With another md5 tool I got the same master.cert hash on every
Linux/Mac system and the same (other) hash on every Windows system.

I don't understand where the problem comes from…
For the moment I keep safe copies of every .pem file from my file daemons,
but it's a really tricky situation that produces no error. Everything
works except the restore on one machine.
That goes completely unnoticed if you are not checking that the
master restore works on every client deployment.

I think that Bacula has to check the encryption certificates; that
dumb Windows Bacula version never checks the validity of the master
public key!

What is the right format and encoding for Bacula certificates?
Everything works except on one Windows machine!
I advise everybody to check their Windows restores via the master.pem file.


Thanks for your help


Hugo



Re: [Bacula-users] jobs fail with various broken pipe errors

2012-02-22 Thread Hugo Letemplier
I think you can try configuring the Heartbeat Interval directive on
your various daemons.
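
A minimal sketch of where that directive goes (the 60-second value is only an example, not from this thread):

# bacula-fd.conf
FileDaemon {
  Name = fbsd1-fd
  ...
  Heartbeat Interval = 60   # keep idle WAN connections alive
}

# bacula-sd.conf
Storage {
  Name = backupsrv-sd
  ...
  Heartbeat Interval = 60
}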





2012/2/22 Silver Salonen sil...@serverock.ee:
 Hi.

 Recently we changed the network connection for our backup server, which is
 Bacula 5.2.3 on FreeBSD 9.0.

 After that, many jobs running across the WAN started failing with various
 broken pipe errors. Some examples:

 21-Feb 22:42 fbsd1-fd JobId 57779: Error: bsock.c:398 Wrote 32151 bytes to
 Storage daemon:backupsrv.url:9103, but only 16384 accepted.
 21-Feb 22:42 fbsd1-fd JobId 57779: Fatal error: backup.c:1024 Network send
 error to SD. ERR=Broken pipe
 21-Feb 22:42 fbsd1-fd JobId 57779: Error: bsock.c:339 Socket has errors=1 on
 call to Storage daemon:backupsrv.url:9103

 (this one runs from the same backup-network to another SD)
 22-Feb 00:14 backupsrv-dir JobId 57852: Fatal error: Network error with FD
 during Backup: ERR=Broken pipe
 22-Feb 00:14 backupsrv-sd2 JobId 57852: JobId=57852
 Job=linux1-userdata.2012-02-21_23.05.02_02 marked to be canceled.
 22-Feb 00:14 backupsrv-sd2 JobId 57852: Job write elapsed time = 00:33:20,
 Transfer rate = 591.5 K Bytes/second
 22-Feb 00:14 backupsrv-sd2 JobId 57852: Error: bsock.c:529 Read expected
 65568 got 1448 from client:123.45.67.81:36643
 22-Feb 00:14 backupsrv-dir JobId 57852: Fatal error: No Job status returned
 from FD.

 22-Feb 00:16 backupsrv-dir JobId 57821: Fatal error: Network error with FD
 during Backup: ERR=Broken pipe
 22-Feb 00:16 backupsrv-sd JobId 57821: Job write elapsed time = 00:57:00,
 Transfer rate = 26.69 K Bytes/second
 22-Feb 00:16 backupsrv-dir JobId 57821: Fatal error: No Job status returned
 from FD.

 22-Feb 00:24 winsrv1-fd JobId 57784: Error:
 /home/kern/bacula/k/bacula/src/lib/bsock.c:393 Write error sending 9363
 bytes to Storage daemon:backupsrv.url:9103: ERR=Input/output error
 22-Feb 00:24 winsrv1-fd JobId 57784: Fatal error:
 /home/kern/bacula/k/bacula/src/filed/backup.c:1024 Network send error to SD.
 ERR=Input/output error
 22-Feb 00:26 winsrv1-fd JobId 57784: Error:
 /home/kern/bacula/k/bacula/src/lib/bsock.c:339 Socket has errors=1 on call
 to Storage daemon:backupsrv.url:9103

 (this one runs from the same backup-network to another SD)
 22-Feb 01:33 backupsrv-dir JobId 57872: Fatal error: Socket error on
 ClientRunBeforeJob command: ERR=Broken pipe
 22-Feb 01:33 backupsrv-dir JobId 57872: Fatal error: Client winsrv2-fd
 RunScript failed.
 22-Feb 01:33 backupsrv-dir JobId 57872: Fatal error: Network error with FD
 during Backup: ERR=Broken pipe
 22-Feb 01:33 backupsrv-sd2 JobId 57872: JobId=57872
 Job=winsrv2.2012-02-22_01.00.00_27 marked to be canceled.
 22-Feb 01:33 backupsrv-dir JobId 57872: Fatal error: No Job status returned
 from FD.

 22-Feb 01:51 fbsd2-fd JobId 57806: Error: bsock.c:398 Wrote 61750 bytes to
 Storage daemon:backupsrv.url:9103, but only 16384 accepted.
 22-Feb 01:51 fbsd2-fd JobId 57806: Fatal error: backup.c:1024 Network send
 error to SD. ERR=Broken pipe
 22-Feb 01:51 fbsd2-fd JobId 57806: Error: bsock.c:339 Socket has errors=1 on
 call to Storage daemon:backupsrv.url:9103

 22-Feb 02:15 backupsrv-dir JobId 57819: Fatal error: Network error with FD
 during Backup: ERR=Connection reset by peer
 22-Feb 02:15 backupsrv-dir JobId 57819: Fatal error: No Job status returned
 from FD.

 These jobs have been failing every day for a week now. Meanwhile other jobs
 complete just fine, and it seems not to be about the jobs' size, or the
 scripts run before jobs on clients, etc.

 Any idea what could be wrong?

 --
 Silver






Re: [Bacula-users] Bacula Encryption, master key and certificate, how to check the content ?

2012-02-01 Thread Hugo Letemplier
2012/1/31 Hugo Letemplier hugo.let...@gmail.com:
 Hello

 In the past, I experimented with Bacula certificates.
 When moving Bacula into production, I reset all certificates on all
 the clients in order to start from a clean state.

 It recently turned out that I couldn't restore a Windows server with the
 master keypair, because the master.cert was wrong.

 I did the following query :


 SELECT DISTINCT
     path.path,
     file.md5,
     job.starttime,
     client.name
 FROM
     public.client,
     public.file,
     public.filename,
     public.path,
     public.job
 WHERE
     client.clientid = job.clientid AND
     file.jobid = job.jobid AND
     file.filenameid = filename.filenameid AND
     file.pathid = path.pathid AND
     filename.name = 'master.cert'
 ORDER BY file.md5, client.name, path.path, job.starttime
 ;

 First, I saw that the MD5 hash differed between operating systems
 (Linux, Mac).
 Then, on my four Windows machines, the hash is always different (each
 master.cert hash is unique on each Windows machine).

 How is this possible?

 Furthermore, if I test the master.cert manually with openssl md5 on the
 Windows machines, it does not work when the .cert has the same openssl
 md5 hash as on Linux or Mac.

 It seems to be a problem of line endings or encoding.

 How should the certificate be stored in order to make it compatible
 across all systems?

 Why are the Bacula hashes on Windows always different?

 Thanks

 Hugo



I partially solved my problem: my master.cert had CR/LF line endings.
The remaining problem is that Bacula never told me that the cert was wrong!

Why doesn't Bacula check the certificate content?


Hugo



Re: [Bacula-users] Windows File Daemon: VSS or System State or both ?

2012-01-31 Thread Hugo Letemplier
2012/1/27 Kevin Keane (subscriptions) subscript...@kkeane.com:
 A VSS backup and system state are *not* equivalent. In my mind, trying to
 back up Windows with plain VSS snapshots is not much better than not having
 any backups.



 Conceptually, the system state backup is two things: a predefined file set,
 and a mechanism to do snapshot it. But that's only the top of the ice berg;
 the devil is in the detail.



 First of all, VSS applies only to one drive. The system state can extend
 over several drives (most commonly if you chose to follow Microsoft's
 recommendations and put the log files for the various database on a
 different physical drive from the database itself).



 Second, there is no guarantee that restoring Windows in bits and pieces will
 work. When restoring the registry, you may end up with something that
 doesn't match any Windows updates made since the backup was taken.
 Basically, if you do a VSS snapshot to back up those things, you are
 manually redoing what Microsoft already did for you, and then you hope that
 you selected the right files so that a restore will later work. Sometimes it
 does, sometimes it doesn't.



 And in any case - how are you going to restore just the registry? You can't
 do it from within Windows, because the files are always in use. So you have
 to use something like BartPE. What about file permissions for the registry
 files? The system state does all of that for you.



 Active Directory is special even within the system state. You can restore it
 in three different ways:



 - Simply restoring the system state. That is usually recommended. It will
 give you a non-authoritative version of AD. It is joined to the domain, but
 the data is not accurate. It will then try and replicate the data from other
 domain controllers. The end result is that AD is completely up to date with
 the latest changes.



 - Restoring the system state in directory restore mode. That will give you
 an authoritative version of AD. This will roll back AD to the time of the
 backup. Only do it if you don't have other AD controllers to replicate data
 from.



 - demote the DC, remove it from the domain, rejoin and then re-promote it.
 This is necessary if you had to seize any of the FSMO roles onto another AD
 controller.



 - rebuild the domain from scratch.



 -Original message-


 From: Simone Caronni negativ...@gmail.com
 Sent: Wed 25-01-2012 07:38
 Subject: Re: [Bacula-users] Windows File Daemon: VSS or System State or
 both ?
 To: bacula-users bacula-users@lists.sourceforge.net;
 Hello,

 I used to do one of the following, depending on whether it is Windows 2003
 or 2008+, to have some additional protection:

 ClientRunBeforeJob = "start /w ntbackup backup systemstate /F C:\\SystemState.bkf"
 ClientRunAfterJob = "del C:\\SystemState.bkf"

 ClientRunBeforeJob = "start /w wbadmin start systemstatebackup -backuptarget:D: -quiet"
 ClientRunAfterJob = "rmdir /s /q D:\\WindowsImageBackup"

 But if you run with VSS enabled you're also able to back up the registry
 and open files, so unless you have a strict requirement to restore
 only part of the System State, I suggest you use only VSS.

 I have done many restores of complete Windows systems with BartPE and the
 Bacula plugin. The only thing you will see is "shutdown unexpected"
 when you reboot the restored system.

 In my experience, Active Directory domain controllers are a LOT
 easier and faster to replicate from another instance than to
 restore from scratch.

 Regards,
 --Simone



 On 25 January 2012 11:48, Hugo Letemplier hugo.let...@gmail.com wrote:
 Hello,

 I am backing up Windows servers.

 I have a doubt about my file set. I run backups with VSS enabled and I
 run NTBackup to make a dump of the System State.

 This is recommended in the Bacula documentation, but with both VSS
 enabled and the NTBackup System State backup, do we end up with
 redundant information?

 Thank you

 Hugo


 --
 Keep Your Developer Skills Current with LearnDevNow!
 The most comprehensive online learning library for Microsoft developers
 is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
 Metro Style Apps, more. Free future releases when you subscribe now!
 http://p.sf.net/sfu/learndevnow-d2d
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users



 --
 You cannot discover new oceans unless you have the courage to lose
 sight of the shore (R. W. Emerson).

 --
 Keep Your Developer Skills Current with LearnDevNow!
 The most comprehensive online learning library for Microsoft developers
 is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
 Metro Style Apps, more. Free future releases when you subscribe now!
 http://p.sf.net/sfu

[Bacula-users] Bacula Encryption, master key and certificate, how to check the content ?

2012-01-31 Thread Hugo Letemplier
Hello

In the past, I experimented with Bacula certificates. When moving Bacula
into production, I reset the certificates on all the clients in order to
start from a clean state.

It recently turned out that I couldn't restore a Windows server with the
master keypair because the master.cert was wrong.

I did the following query :


SELECT DISTINCT
  path.path,
  file.md5,
  job.starttime,
  client.name
FROM
  public.client,
  public.file,
  public.filename,
  public.path,
  public.job
WHERE
  client.clientid = job.clientid AND
  file.jobid = job.jobid AND
  file.filenameid = filename.filenameid AND
  file.pathid = path.pathid AND
  filename.name = 'master.cert'
ORDER BY file.md5, client.name, path.path, job.starttime
;

First, I saw that the md5 hash differed between operating systems
(Linux, Mac). Then, on my four Windows machines, the hash is always
different (each master.cert hash is unique to each Windows machine).

How is this possible ?

Furthermore, if I test the master.cert manually with openssl md5 on the
Windows machines, it doesn't work if the .cert has the same openssl
md5 hash as on Linux or Mac.

It seems to be a problem of end-of-line characters or encoding.

How should the certificate be stored in order to make it work
consistently across every system?

Why are the Bacula hashes on Windows always different?

Thanks

Hugo

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Windows File Daemon: VSS or System State or both ?

2012-01-25 Thread Hugo Letemplier
Hello,

I am backing up Windows servers.

I have a doubt about my file set. I run backups with VSS enabled and I
run NTBackup to make a dump of the System State.

This is recommended in the Bacula documentation, but with both VSS
enabled and the NTBackup System State backup, do we end up with
redundant information?

Thank you

Hugo

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Windows File Daemon: VSS or System State or both ?

2012-01-25 Thread Hugo Letemplier
2012/1/25 Simone Caronni negativ...@gmail.com:
 Hello,

 I used to do one of the following, depending on whether it is Windows 2003
 or 2008+, to have some additional protection:

 ClientRunBeforeJob = "start /w ntbackup backup systemstate /F C:\\SystemState.bkf"
 ClientRunAfterJob = "del C:\\SystemState.bkf"

 ClientRunBeforeJob = "start /w wbadmin start systemstatebackup -backuptarget:D: -quiet"
 ClientRunAfterJob = "rmdir /s /q D:\\WindowsImageBackup"

 But if you run with VSS enabled you're also able to back up the registry
 and open files, so unless you have a strict requirement to restore
 only part of the System State, I suggest you use only VSS.

 I have done many restores of complete Windows systems with BartPE and the
 Bacula plugin. The only thing you will see is "shutdown unexpected"
 when you reboot the restored system.

 In my experience, Active Directory domain controllers are a LOT
 easier and faster to replicate from another instance than to
 restore from scratch.

 Regards,
 --Simone

Well explained.

I don't have any replica for my AD because it's just for a particular
application that required one, and it's totally independent (HA not
needed; it's just for data security). Domain management for my users
is done on another system. Restores will only ever be from scratch.

I tried a restore and it worked well even without restoring the System
State. I will keep the System State backup for a while, do several more
restore tests, and if they work I will use only VSS.

Thank you

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] is it make any sense if i backups Bacula volumes

2012-01-10 Thread Hugo Letemplier
2012/1/6 Bill Arlofski waa-bac...@revpol.com:
 On 01/05/12 23:39, Rushdhi Mohamed wrote:
 hi...all


 Please see the output of ls for the folder where a NAS is mounted:

 backup-server01:/opt/NASBackup # ls
 Defu-vol2        NASDailyVol-0012  NASDailyVol-0017  NASDailyVol-0021
 NAVol-mobilepicdump  RAC2ARCHVol-0027  RAC2ARCHVol-0035
 RAC2ARCHVol-0041
 lost+found       NASDailyVol-0014  NASDailyVol-0018  NASDailyVol-0031
 RAC2ARCHVol-0022     RAC2ARCHVol-0028  RAC2ARCHVol-0036
 RAC2ARCHVol-0067
 MOBPICPVol-0030  NASDailyVol-0015  NASDailyVol-0019  NASDailyVol-0033
 RAC2ARCHVol-0025     RAC2ARCHVol-0032  RAC2ARCHVol-0038
 TCSAPPVol-0068
 MOBPICPVol-0051  NASDailyVol-0016  NASDailyVol-0020  NASDailyVol-0040
 RAC2ARCHVol-0026     RAC2ARCHVol-0034  RAC2ARCHVol-0039

 If I copy the Bacula volumes to tape after all the backups are done
 (I mean, copy the path /opt/NASBackup to tape), does it make any
 sense?

 Is it possible to restore if I do it like that?

 Hi Rushdhi.

 Take a look at Copy Jobs. That is the correct Bacula way of copying
 backup jobs from one medium, such as the file volumes on the NAS, to another
 medium, such as removable tapes for off-site archiving.

In my own opinion, Copy Jobs are nice for archiving or for making a copy
to a different pool or storage type.
Here you want to copy from disk to tape, so simply run a Bacula Copy job.

If instead you are looking for redundancy, just run a tape-to-tape copy
or an rsync of your volumes onto another hard drive.

Hugo

--
Write once. Port to many.
Get the SDK and tools to simplify cross-platform app development. Create 
new or port existing apps to sell to consumers worldwide. Explore the 
Intel AppUpSM program developer opportunity. appdeveloper.intel.com/join
http://p.sf.net/sfu/intel-appdev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Needing some advices, tape drive over iscsi

2012-01-10 Thread Hugo Letemplier
Hello,


My Bacula server is on a virtual machine, and I pass my tape drive to it
via iSCSI. This tape drive is also sometimes shared with my older backup
server in order to restore old backups.

I want Bacula to free the tape drive at the iSCSI level after it has run
its jobs, plus a little idle time. What config options should I use?

Bacula and iSCSI are very unstable, for an unknown reason, when I use the
eject button on my tape drive, so I should use Offline On Unmount to
eject the tape.

What other options should I look at?

Has anyone experienced problems with iSCSI tape drives?
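
For reference, this is the kind of Device resource I am experimenting
with in the Storage Daemon (a sketch only; the name, media type and
archive device path are examples from my setup, not tested values):

Device {
  Name = iSCSI-LTO3              # example name
  Archive Device = /dev/nst0     # example path of the iSCSI tape device
  Media Type = LTO3
  Always Open = no               # release the drive when no job is using it
  Offline On Unmount = yes       # issue an offline (eject) when unmounting
}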

Thanks

Hugo

--
Write once. Port to many.
Get the SDK and tools to simplify cross-platform app development. Create 
new or port existing apps to sell to consumers worldwide. Explore the 
Intel AppUpSM program developer opportunity. appdeveloper.intel.com/join
http://p.sf.net/sfu/intel-appdev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Needing some advices, tape drive over iscsi

2012-01-10 Thread Hugo Letemplier
2012/1/10 James Harper james.har...@bendigoit.com.au:

 My Bacula server is on a virtual machine, and I pass my tape drive to it
 via iSCSI. This tape drive is also sometimes shared with my older backup
 server in order to restore old backups.

 Can you pass the tape drive through from the driver VM? Any enterprise 
 virtualisation should allow you to do this.

 Xen supports scsi passthrough and also pci passthrough. I use the former for 
 running Backup Exec test restores in a Windows VM, and I've never used the 
 latter but it should work fine.

 James

I need to share it via iSCSI with my older backup server.
There is a passthrough feature on the Proxmox cluster that I used
previously, but I had problems using it.
Moreover, given the need to share the tape drive between two machines,
iSCSI is a good solution.

Historically we have this virtualisation solution, which works well most
of the time but is not really great for I/O; I have to make do with it :-(

--
Write once. Port to many.
Get the SDK and tools to simplify cross-platform app development. Create 
new or port existing apps to sell to consumers worldwide. Explore the 
Intel AppUpSM program developer opportunity. appdeveloper.intel.com/join
http://p.sf.net/sfu/intel-appdev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] purge/mark unavailable all volumes on a given storage volume

2012-01-04 Thread Hugo Letemplier
2011/12/27 James Harper james.har...@bendigoit.com.au:
 I backup to permanently attached USB disk (3 weeks retention of weekly full + 
 3x daily differentials) then to offsite USB disk (virtual full), and one of 
 the permanently attached disks has just failed. Is there a shortcut to tell 
 Bacula to purge all volumes on that disk (they aren't coming back), prompting 
 it to just do a full backup next time the jobs run?

 My fallback is to just create a shell script based on a query result but 
 maybe there's another trick someone knows?

 Thanks

 James

 --
 Write once. Port to many.
 Get the SDK and tools to simplify cross-platform app development. Create
 new or port existing apps to sell to consumers worldwide. Explore the
 Intel AppUpSM program developer opportunity. appdeveloper.intel.com/join
 http://p.sf.net/sfu/intel-appdev
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users

You said that the physical volume is dead, so the Bacula media files
cannot be recovered. I think you simply have to delete all the Bacula
volumes that were on this device and then create new ones.
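
If it helps, a minimal sketch of scripting that through bconsole (the
volume names are hypothetical; the extra "yes" line answers the delete
confirmation prompt):

import subprocess

# Hypothetical names of the volumes that lived on the failed disk.
dead_volumes = ["UsbVol-0001", "UsbVol-0002"]

for vol in dead_volumes:
    # Send the delete command and answer its confirmation prompt.
    cmds = "delete volume=%s\nyes\nquit\n" % vol
    subprocess.run(["bconsole"], input=cmds.encode(), check=True)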

Hugo

--
Ridiculously easy VDI. With Citrix VDI-in-a-Box, you don't need a complex
infrastructure or vast IT resources to deliver seamless, secure access to
virtual desktops. With this all-in-one solution, easily deploy virtual 
desktops for less than the cost of PCs and save 60% on VDI infrastructure 
costs. Try it free! http://p.sf.net/sfu/Citrix-VDIinabox
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Using bconsole to restore UTF8… encoded filenames with specials characters

2011-12-23 Thread Hugo Letemplier
2011/12/22 Alan Brown a...@mssl.ucl.ac.uk:
 Hugo Letemplier wrote:

 My problem is when I want to restore one file: I can't select this
 file/directory in bconsole! The ls command in bconsole returns the
 correct file names, but when you try to select the files using those
 same names it doesn't work.


 enclose the filenames in "" or use \ to escape strange characters.

 This is documented somewhere.




I am using ssh to run a local bconsole on my bacula server

As an example, "Coût SAV" is the directory that I want to cd into.

cd "Coût SAV" doesn't work, and neither does cd Co\ût SAV.

What would you do for such a name?

Hugo

--
Write once. Port to many.
Get the SDK and tools to simplify cross-platform app development. Create 
new or port existing apps to sell to consumers worldwide. Explore the 
Intel AppUpSM program developer opportunity. appdeveloper.intel.com/join
http://p.sf.net/sfu/intel-appdev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Using bconsole to restore UTF8… encoded filenames with specials characters

2011-12-22 Thread Hugo Letemplier
Hello,

I have a recurrent problem.
I run Bacula 5.0.
I run backups on an Apple file server. People use a lot of file names
with UTF-8 or Mac OS X character encoding.
When I want to restore, I'd like to have some options that are in
bconsole but not in Bat.
My problem is when I want to restore one file: I can't select this
file/directory in bconsole! The ls command in bconsole returns the
correct file names, but when you try to select the files using those
same names it doesn't work.

Many times I have restored a whole part of the tree instead, but that is
often impossible.
It is also a problem when writing a simple procedure for my colleagues,
who aren't very good with Bacula but who will have to follow my
documentation when they want to do a restore, or in a disaster recovery
case. Moreover, I won't always be there. (Use bconsole, use bat… it's
always a big question… if one fails, use the other one, reselect the
files…)

So, do you have any ideas?


Thanks in advance

Hugo

--
Write once. Port to many.
Get the SDK and tools to simplify cross-platform app development. Create 
new or port existing apps to sell to consumers worldwide. Explore the 
Intel AppUpSM program developer opportunity. appdeveloper.intel.com/join
http://p.sf.net/sfu/intel-appdev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Many projects idea, what do you think about it ?

2011-12-12 Thread Hugo Letemplier
Hello,

I have some project ideas. If people like them, maybe they can be
developed in future Bacula versions.

I am running Bacula 5.0.3.

My experience with Bacula was long and slow at the beginning. This was
due to various concepts that I didn't understand, and also to some
missing functions that I developed myself in Python over bconsole.
Moreover, some packages didn't include certain options, or included bugs.

I ran ./configure and compiled from source myself, and a lot of
problems disappeared.

Idea --- Have a COPY/MIGRATE functionality just like the RESTORE functionality
= You simply select the jobs that you want to copy and the destination storage.
= An estimation indicates the size of the selected jobs.
= That functionality could be integrated in bat.
Why:
= You can't manually copy many jobs from bconsole, eg: run
job=MyCopyJob jobid=1000 (here I can't specify many jobs; see the
sketch just below).
= Copy jobs are, in my opinion, mostly used for archiving on a
separate device, so it can't be a fully automated process (tape
size…). In my case, an external python script sequentially feeds run
commands to bconsole. This script lets me choose which jobs I want to
add to the tape.
With Bacula's integrated selection options for copy jobs, such as SQL
Query or Pool Uncopied, I would like to check the job list before
running the job, eg: I want to limit archiving to one tape per month,
so it should not exceed the estimated size of an LTO3 tape (~400 GB).
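
To illustrate, a minimal sketch of that workaround (the JobIds and the
job name are hypothetical; it feeds one run command per job, since a
single run command accepts only one jobid):

import subprocess

# Hypothetical JobIds selected for archiving: one copy-job run per JobId,
# because bconsole's run command takes a single jobid at a time.
for jobid in (1000, 1001, 1002):
    cmd = "run job=MyCopyJob jobid=%d yes\nquit\n" % jobid
    subprocess.run(["bconsole"], input=cmd.encode(), check=True)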


Idea --- Disable only one occurrence of a job, or pre-upgrade the next
occurrence of a job.
= Add an upgrade command.
= Add a next option to the disable command.
Why:
= Simply imagine that your backups are growing faster than expected,
and you want the next run of the job to use a higher level.
= You do not want to reload your configuration.
= There are already the run command and the Allow Duplicate Jobs =
No directive, but for the moment that conflicts with Copy jobs.
= You may not be able to re-enable the job right afterwards (eg: you
want the job to run during the weekend).



Idea --- The pruning algorithm changed between the 5.0 and 5.2 versions
of Bacula. It would be great if it were possible to select which
algorithm to use, per pool or per job.

Idea --- Enable running a virtual full in the same pool as my full job.

Idea --- Ability to bextract encrypted jobs
= My disaster recovery plan is a little tricky without this functionality.


Idea --- Ability to dynamically use a specific encryption key on a file daemon.
= eg: You are in BAT and you want to restore files from a job on a file
daemon that is different from the backup client.
Why:
= Be able to restore on a different file daemon without massively
deploying the master key (in clear text).
Solution 1:
= Add a menu in bat that uploads the key to the file daemon.
Solution 2:
= Pre-deploy an encrypted version of the master key on the file
daemons; bat or bconsole asks the user for a passphrase in order to
decrypt this key.



Idea --- Disable auto-detection in the ./configure script.
Why:
= Some Bacula builds implement ACLs, others do not, and it's the same
for lots of libraries. When you install Bacula you expect ACLs and much
other functionality, but these are there only if the libraries were on
the build system.
= When I installed various builds of Bacula from various repositories
(MacPorts, Sourceforge, Apt, RPM…) on various systems (Mac OS X,
Windows, CentOS, Debian), I had a lot of bugs coming from versions
that were built without some options. To make this right, I had to run
installations from source on many systems with the same configure
options.
Solution:
= By default, all Bacula builds should implement the same options
for compression, encryption, ACLs, batch inserts…. If the user wants a
build without ACLs, he can do it himself by deselecting them with ./configure.
I really think it's better to have a defined set of officially
supported options that are integrated in every build. I don't think
many people can do backups without ACLs, encryption…


Idea --- Add an option to automatically run a new full if the
estimated size of the [incremental|differential] job reaches a certain
ratio compared to the previous full.
Why:
eg: On my file server, people may add no files for months, and
sometimes there is huge activity.
= It can happen that some incremental jobs are larger than the
differential, and some differentials larger than the full. In this
case, you are better off running a new full.
Solution 1:
= Implement this functionality directly.
Solution 2:
= Implement an upgrade-level directive and a post-estimate runscript
directive with variable integration.


Idea --- Having a programming interface that can talk like bconsole.
It's not always easy to talk to another shell and bconsole uses
As I understand it, Python integration in Bacula is effective only on a
Bacula event. Please correct me if I am wrong.
I created a library in Python (via pexpect) in order to have an
interface for my scripts. It talks to bconsole and directly to the
catalog (read only).
I can give this 

[Bacula-users] Canceling a job on tape ? Rerun the job, Where does bacula will append the tape ?

2011-10-04 Thread Hugo Letemplier
Hi,

I had a job that was canceled halfway through its run.
This job was being written to a tape.

When I rerun the job, does Bacula append to the tape after the last valid,
terminated job, or does it write from the last bytes written by the
canceled job?

I didn't find a good explanation of this in the documentation.

Thank you

Hugo

--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2dcopy1
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] API to Communicate with Bacula from a client portal?

2011-08-24 Thread Hugo Letemplier
2011/8/16 Thomas Mueller tho...@chaschperli.ch:
 On 09.08.2011 17:31, Yazan wrote:
 Hey guys,

 We have been using Bacula for some time now and I wanted to automate a few 
 things and allow the clients to take control of their backups. We want to 
 give our clients the option of buying backup space and creating/editing 
 backup schedules of their data and their databases. We're hoping to automate 
 this as well as allow the clients to have access to their backup files with 
 the option of restoring a file (or multiple) all via our PHP based portal.

 Please let me know if anybody has worked with this in the past and if they 
 are willing to help. We can work out compensation as well if you have a 
 solution.

 I would really appreciate your help and feedback on this matter. Thank you!

 IMHO the easiest way is to create a webfrontend, generate the conf
 dynamically, and include it with a script (
 @|/etc/bacula/get-conf.sh )

 note: only the director can be reloaded without restart.

 - Thomas


 --
 uberSVN's rich system and user administration capabilities and model
 configuration take the hassle out of deploying and managing Subversion and
 the tools developers use with it. Learn more about uberSVN and get a free
 download at:  http://p.sf.net/sfu/wandisco-dev2dev
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users


Take a look at this:
http://sourceforge.net/projects/web-bat/

I know it was done for Bacula version 3, so I don't think it has been
updated for the latest versions of Bacula.

Hugo

--
EMC VNX: the world's simplest storage, starting under $10K
The only unified storage solution that offers unified management 
Up to 160% more powerful than alternatives and 25% more efficient. 
Guaranteed. http://p.sf.net/sfu/emc-vnx-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Upgrade to full if incremental or diff estimate was reaching a specified size ?

2011-08-24 Thread Hugo Letemplier
Hi

When I run a job, I would like to automatically upgrade it to a full if
the estimate is big (maybe a certain percentage of the initial full).

How can I do that? I was thinking of Python on the job start event.

Can I do it in a before-job command? I may need to cancel the first job
and automatically run a new one.
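
To make the idea concrete, here is a rough sketch of the external script
I have in mind (the job name and the 50% threshold are made-up examples;
it scrapes the estimate output from bconsole rather than using any
official API):

import re
import subprocess

JOB = "MyJob"    # hypothetical job name
RATIO = 0.5      # upgrade when the incremental estimate exceeds 50% of a full

def bconsole(cmd):
    # Run a single command through bconsole and return its console output.
    out = subprocess.run(["bconsole"], input=(cmd + "\nquit\n").encode(),
                         capture_output=True, check=True)
    return out.stdout.decode(errors="replace")

def estimated_bytes(level):
    # Parse the "bytes=1,234,567" figure out of the estimate output.
    text = bconsole("estimate job=%s level=%s" % (JOB, level))
    m = re.search(r"bytes=([\d,]+)", text)
    return int(m.group(1).replace(",", "")) if m else 0

full = estimated_bytes("Full")
if full and estimated_bytes("Incremental") > RATIO * full:
    bconsole("run job=%s level=Full yes" % JOB)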

Thank you

Hugo

--
EMC VNX: the world's simplest storage, starting under $10K
The only unified storage solution that offers unified management 
Up to 160% more powerful than alternatives and 25% more efficient. 
Guaranteed. http://p.sf.net/sfu/emc-vnx-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Upgrade to full if incremental or diff estimate was reaching a specified size ?

2011-08-24 Thread Hugo Letemplier
2011/8/25 Hugo Letemplier hugo.let...@gmail.com:
 Hi

 When I run a job, I would like to automatically upgrade it to a full if
 the estimate is big (maybe a certain percentage of the initial full).

 How can I do that? I was thinking of Python on the job start event.

 Can I do it in a before-job command? I may need to cancel the first job
 and automatically run a new one.

 Thank you

 Hugo

Also, is the run before job command executed before or after the estimate?

--
EMC VNX: the world's simplest storage, starting under $10K
The only unified storage solution that offers unified management 
Up to 160% more powerful than alternatives and 25% more efficient. 
Guaranteed. http://p.sf.net/sfu/emc-vnx-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] I don't want to prune files ! …… Not working as expected

2011-08-22 Thread Hugo Letemplier
2011/8/19 Dan Langille d...@langille.org:

 On Aug 19, 2011, at 6:05 AM, Hugo Letemplier wrote:

 2011/8/18 Dan Langille d...@langille.org:

 On Aug 18, 2011, at 11:46 AM, Hugo Letemplier wrote:

 2011/8/18 Dan Langille d...@langille.org:

 On Aug 18, 2011, at 9:51 AM, Hugo Letemplier wrote:

 2011/8/18 Hugo Letemplier hugo.let...@gmail.com:
 2011/8/16 Dan Langille d...@langille.org:

 On Aug 16, 2011, at 7:27 AM, Hugo Letemplier wrote:

 2011/8/16 Dan Langille d...@langille.org:

 On Aug 16, 2011, at 6:36 AM, Hugo Letemplier wrote:

 2011/8/16 Jeremy Maes j...@schaubroeck.be:
 Op 16/08/2011 11:45, Hugo Letemplier schreef:

 Hi list!

 I have a recurrent issue in Bacula.

 I specified a very long duration for job and file retention.
 I specified Purge File = No, Purge Job = No, Purge Volume = yes.
 I have AutoPrune = yes in the pool description.

 I work on a volume rotation basis: one volume at a time is kept,
 according to its pool's retention parameters.

 I want to keep in the database, for some jobs, all the files that are
 in the volume, because these jobs are used more often to restore
 individual files than the whole job.

 But I still have Purged Files = Yes in the job run list in Bat.

 I don't understand what's happening!

 Could you help me?

 How can I check that my pruning directives work as expected?

 Thank you in advance.

 Hugo

 Did you set these long retention times etc. after you created the
 volumes? Those parameters are set for a volume when it's created, and
 are not automatically updated when you change the config and/or reload
 Bacula. You have to issue the update command in bconsole and update
 your pool from resource for all pools, and the volume parameters with
 all volumes from pool (or all volumes from all pools).

 Kind regards,
 Jeremy

  DISCLAIMER 
 http://www.schaubroeck.be/maildisclaimer.htm

 Thanks for your answer

 Yes, I have already updated my volumes, so I don't understand very well
 what's happening.


 When in doubt, ignore bat. Instead, look at what bconsole reports.

 --
 Dan Langille - http://langille.org


 I checked via a restore.
 I think there is no doubt, because Bacula did an incremental as big as
 a full just after the jobs that were marked Pruned Files: yes.
 Also, when I do a restore, it says that there are missing file entries,
 and I can't select files and dirs in the tree in bconsole.


 I believe you are reaching conclusions based on suspicions, not facts.
 The conclusions may be correct, but that doesn't help find the cause.

 Look at the output of 'list media'. Look at the value in volretention.
 It is in seconds.

 Is this value matching up with what you expect?


 --
 Dan Langille - http://langille.org



 The volume retention is correct. Moreover, the problem isn't with the
 volume retention but with the file retention.

 My volumes are never pruned because, for the moment, I have set a
 rather too-long duration; I keep an eye on the volume size and the
 remaining space on the hard drive. I delete jobs, then prune the
 volumes that contain no jobs, and run purge action=Truncate manually
 on the purged volumes. (I am at this point because Bacula is pruning
 files and I don't know the origin of the problem, so I do a lot of
 things manually for safety reasons.)

 In other words I set AutoPrune = No


 Where can I check this file retention? Is it written in the database
 like the volume retention, or only in the configuration file?
 What configuration directives should I check?

 There are defaults for File retention (see below), so if you have not
 specified them, the defaults apply.

 Look in the client table:

 bacula=# select clientid, name, autoprune, fileretention, jobretention 
 from client limit 3;
  clientid |   name   | autoprune | fileretention | jobretention
 --+--+---+---+--
       21 | havoc-fd |         1 |       5184000 |     15552000
       12 | nz-fd    |         1 |       2592000 |     15552000
       10 | ducky-fd |         1 |       5184000 |     15552000
 (3 rows)


 Retention is in seconds.


 I updated all the file retention directives (in client and pool) to
 a duration of 1 year.
 So now it shouldn't purge files except if their jobs are deleted, if a
 job has its files purged, or if the volume that contains the job is
 purged, shouldn't it?


 No.

 Job, File, and Volume retention are all separate elements.

 I set my Job and File retention to a large value, say 30 years, then let 
 Volume
 retention take care of pruning.

 This is what I wanted to say: volume pruning takes precedence over the
 other directives.

 Umm, what do you mean by 'take precedence'? If any retention value is
 reached, that particular item is pruned. But if you lose a Volume to
 pruning, you lose files and jobs. And if you lose a job to pruning,
 you lose the files.


 We are ok on this definition ;-)




 From:
 http://www.bacula.org/5.0.x-manuals/en/main/main/Catalog_Maintenance.html

Re: [Bacula-users] I don't want to prune files ! …… Not working as expected

2011-08-19 Thread Hugo Letemplier
2011/8/18 Dan Langille d...@langille.org:

 On Aug 18, 2011, at 11:46 AM, Hugo Letemplier wrote:

 2011/8/18 Dan Langille d...@langille.org:

 On Aug 18, 2011, at 9:51 AM, Hugo Letemplier wrote:

 2011/8/18 Hugo Letemplier hugo.let...@gmail.com:
 2011/8/16 Dan Langille d...@langille.org:

 On Aug 16, 2011, at 7:27 AM, Hugo Letemplier wrote:

 2011/8/16 Dan Langille d...@langille.org:

 On Aug 16, 2011, at 6:36 AM, Hugo Letemplier wrote:

 2011/8/16 Jeremy Maes j...@schaubroeck.be:
 Op 16/08/2011 11:45, Hugo Letemplier schreef:

 Hi list!

 I have a recurrent issue in Bacula.

 I specified a very long duration for job and file retention.
 I specified Purge File = No, Purge Job = No, Purge Volume = yes.
 I have AutoPrune = yes in the pool description.

 I work on a volume rotation basis: one volume at a time is kept,
 according to its pool's retention parameters.

 I want to keep in the database, for some jobs, all the files that are
 in the volume, because these jobs are used more often to restore
 individual files than the whole job.

 But I still have Purged Files = Yes in the job run list in Bat.

 I don't understand what's happening!

 Could you help me?

 How can I check that my pruning directives work as expected?

 Thank you in advance.

 Hugo

 Did you set these long retention times etc. after you created the
 volumes? Those parameters are set for a volume when it's created, and
 are not automatically updated when you change the config and/or reload
 Bacula. You have to issue the update command in bconsole and update
 your pool from resource for all pools, and the volume parameters with
 all volumes from pool (or all volumes from all pools).

 Kind regards,
 Jeremy

  DISCLAIMER 
 http://www.schaubroeck.be/maildisclaimer.htm

 Thanks for your answer

 Yes, I have already updated my volumes, so I don't understand very well
 what's happening.


 When in doubt, ignore bat. Instead, look at what bconsole reports.

 --
 Dan Langille - http://langille.org


 I checked via a restore.
 I think there is no doubt, because Bacula did an incremental as big as
 a full just after the jobs that were marked Pruned Files: yes.
 Also, when I do a restore, it says that there are missing file entries,
 and I can't select files and dirs in the tree in bconsole.


 I believe you are reaching conclusions based on suspicions, not facts.
 The conclusions may be correct, but that doesn't help find the cause.

 Look at the output of 'list media'. Look at the value in volretention.
 It is in seconds.

 Is this value matching up with what you expect?


 --
 Dan Langille - http://langille.org



 The volume retention is correct. Moreover, the problem isn't with the
 volume retention but with the file retention.

 My volumes are never pruned because, for the moment, I have set a
 rather too-long duration; I keep an eye on the volume size and the
 remaining space on the hard drive. I delete jobs, then prune the
 volumes that contain no jobs, and run purge action=Truncate manually
 on the purged volumes. (I am at this point because Bacula is pruning
 files and I don't know the origin of the problem, so I do a lot of
 things manually for safety reasons.)

 In other words I set AutoPrune = No


 Where can I check this file retention? Is it written in the database
 like the volume retention, or only in the configuration file?
 What configuration directives should I check?

 There are defaults for File retention (see below), so if you have not
 specified them, the defaults apply.

 Look in the client table:

 bacula=# select clientid, name, autoprune, fileretention, jobretention from 
 client limit 3;
  clientid |   name   | autoprune | fileretention | jobretention
 --+--+---+---+--
       21 | havoc-fd |         1 |       5184000 |     15552000
       12 | nz-fd    |         1 |       2592000 |     15552000
       10 | ducky-fd |         1 |       5184000 |     15552000
 (3 rows)


 Retention is in seconds.


 I updated all the file retention directives (in client and pool) to
 a duration of 1 year.
 So now it shouldn't purge files except if their jobs are deleted, if a
 job has its files purged, or if the volume that contains the job is
 purged, shouldn't it?


 No.

 Job, File, and Volume retention are all separate elements.

 I set my Job and File retention to a large value, say 30 years, then let 
 Volume
 retention take care of pruning.

 This is what I wanted to say: volume pruning takes precedence over the
 other directives.

 Umm, what do you mean by 'take precedence'? If any retention value is
 reached, that particular item is pruned. But if you lose a Volume to
 pruning, you lose files and jobs. And if you lose a job to pruning,
 you lose the files.


We are ok on this definition ;-)




 From:
 http://www.bacula.org/5.0.x-manuals/en/main/main/Catalog_Maintenance.html#SECTION00461

 ###
 File Retention = time-period-specification
 The File Retention record defines the length

Re: [Bacula-users] I don't want to prune files ! …… Not working as expected

2011-08-18 Thread Hugo Letemplier
2011/8/16 Dan Langille d...@langille.org:

 On Aug 16, 2011, at 7:27 AM, Hugo Letemplier wrote:

 2011/8/16 Dan Langille d...@langille.org:

 On Aug 16, 2011, at 6:36 AM, Hugo Letemplier wrote:

 2011/8/16 Jeremy Maes j...@schaubroeck.be:
 Op 16/08/2011 11:45, Hugo Letemplier schreef:

 Hi list!

 I have a recurrent issue in Bacula.

 I specified a very long duration for job and file retention.
 I specified Purge File = No, Purge Job = No, Purge Volume = yes.
 I have AutoPrune = yes in the pool description.

 I work on a volume rotation basis: one volume at a time is kept,
 according to its pool's retention parameters.

 I want to keep in the database, for some jobs, all the files that are
 in the volume, because these jobs are used more often to restore
 individual files than the whole job.

 But I still have Purged Files = Yes in the job run list in Bat.

 I don't understand what's happening!

 Could you help me?

 How can I check that my pruning directives work as expected?

 Thank you in advance.

 Hugo

 Did you set these long retention times etc. after you created the
 volumes? Those parameters are set for a volume when it's created, and
 are not automatically updated when you change the config and/or reload
 Bacula. You have to issue the update command in bconsole and update
 your pool from resource for all pools, and the volume parameters with
 all volumes from pool (or all volumes from all pools).

 Kind regards,
 Jeremy

  DISCLAIMER 
 http://www.schaubroeck.be/maildisclaimer.htm

 Thanks for your answer

 Yes, I have already updated my volumes, so I don't understand very well
 what's happening.


 When in doubt, ignore bat. Instead, look at what bconsole reports.

 --
 Dan Langille - http://langille.org


 I checked via a restore.
 I think there is no doubt, because Bacula did an incremental as big as
 a full just after the jobs that were marked Pruned Files: yes.
 Also, when I do a restore, it says that there are missing file entries,
 and I can't select files and dirs in the tree in bconsole.


 I believe you are reaching conclusions based on suspicions, not facts.
 The conclusions may be correct, but that doesn't help find the cause.

 Look at the output of 'list media'. Look at the value in volretention.
 It is in seconds.

 Is this value matching up with what you expect?


 --
 Dan Langille - http://langille.org



The volume retention is correct. Moreover, the problem isn't with the
volume retention but with the file retention.

My volumes are never pruned because, for the moment, I have set a
rather too-long duration; I keep an eye on the volume size and the
remaining space on the hard drive. I delete jobs, then prune the
volumes that contain no jobs, and run purge action=Truncate manually
on the purged volumes. (I am at this point because Bacula is pruning
files and I don't know the origin of the problem, so I do a lot of
things manually for safety reasons.)

Where can I check this file retention? Is it written in the database
like the volume retention, or only in the configuration file?
What configuration directives should I check?

I updated all the file retention directives (in client and pool) to
a duration of 1 year.
So now it shouldn't purge files except if their jobs are deleted, if a
job has its files purged, or if the volume that contains the job is
purged, shouldn't it?

Regards

Hugo

--
Get a FREE DOWNLOAD! and learn more about uberSVN rich system, 
user administration capabilities and model configuration. Take 
the hassle out of deploying and managing Subversion and the 
tools developers use with it. http://p.sf.net/sfu/wandisco-d2d-2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] I don't want to prune files ! …… Not working as expected

2011-08-18 Thread Hugo Letemplier
2011/8/18 Hugo Letemplier hugo.let...@gmail.com:
 2011/8/16 Dan Langille d...@langille.org:

 On Aug 16, 2011, at 7:27 AM, Hugo Letemplier wrote:

 2011/8/16 Dan Langille d...@langille.org:

 On Aug 16, 2011, at 6:36 AM, Hugo Letemplier wrote:

 2011/8/16 Jeremy Maes j...@schaubroeck.be:
 Op 16/08/2011 11:45, Hugo Letemplier schreef:

 Hi list!

 I have a recurrent issue in Bacula.

 I specified a very long duration for job and file retention.
 I specified Purge File = No, Purge Job = No, Purge Volume = yes.
 I have AutoPrune = yes in the pool description.

 I work on a volume rotation basis: one volume at a time is kept,
 according to its pool's retention parameters.

 I want to keep in the database, for some jobs, all the files that are
 in the volume, because these jobs are used more often to restore
 individual files than the whole job.

 But I still have Purged Files = Yes in the job run list in Bat.

 I don't understand what's happening!

 Could you help me?

 How can I check that my pruning directives work as expected?

 Thank you in advance.

 Hugo

 Did you set these long retention times etc. after you created the
 volumes? Those parameters are set for a volume when it's created, and
 are not automatically updated when you change the config and/or reload
 Bacula. You have to issue the update command in bconsole and update
 your pool from resource for all pools, and the volume parameters with
 all volumes from pool (or all volumes from all pools).

 Kind regards,
 Jeremy

  DISCLAIMER 
 http://www.schaubroeck.be/maildisclaimer.htm

 Thanks for your answer

 Yes, I have already updated my volumes, so I don't understand very well
 what's happening.


 When in doubt, ignore bat. Instead, look at what bconsole reports.

 --
 Dan Langille - http://langille.org


 I checked via a restore.
 I think there is no doubt, because Bacula did an incremental as big as
 a full just after the jobs that were marked Pruned Files: yes.
 Also, when I do a restore, it says that there are missing file entries,
 and I can't select files and dirs in the tree in bconsole.


 I believe you are reaching conclusions based on suspicions, not facts.
 The conclusions may be correct, but that doesn't help find the cause.

 Look at the output of 'list media'. Look at the value in volretention.
 It is in seconds.

 Is this value matching up with what you expect?


 --
 Dan Langille - http://langille.org



 The volume retention is correct. Moreover, the problem isn't with the
 volume retention but with the file retention.

 My volumes are never pruned because, for the moment, I have set a
 rather too-long duration; I keep an eye on the volume size and the
 remaining space on the hard drive. I delete jobs, then prune the
 volumes that contain no jobs, and run purge action=Truncate manually
 on the purged volumes. (I am at this point because Bacula is pruning
 files and I don't know the origin of the problem, so I do a lot of
 things manually for safety reasons.)

In other words I set AutoPrune = No


 Where can I check this file retention? Is it written in the database
 like the volume retention, or only in the configuration file?
 What configuration directives should I check?

 I updated all the file retention directives (in client and pool) to
 a duration of 1 year.
 So now it shouldn't purge files except if their jobs are deleted, if a
 job has its files purged, or if the volume that contains the job is
 purged, shouldn't it?

 Regards

 Hugo


--
Get a FREE DOWNLOAD! and learn more about uberSVN rich system, 
user administration capabilities and model configuration. Take 
the hassle out of deploying and managing Subversion and the 
tools developers use with it. http://p.sf.net/sfu/wandisco-d2d-2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] I don't want to prune files ! …… Not working as expected

2011-08-18 Thread Hugo Letemplier
2011/8/18 Dan Langille d...@langille.org:

 On Aug 18, 2011, at 9:51 AM, Hugo Letemplier wrote:

 2011/8/18 Hugo Letemplier hugo.let...@gmail.com:
 2011/8/16 Dan Langille d...@langille.org:

 On Aug 16, 2011, at 7:27 AM, Hugo Letemplier wrote:

 2011/8/16 Dan Langille d...@langille.org:

 On Aug 16, 2011, at 6:36 AM, Hugo Letemplier wrote:

 2011/8/16 Jeremy Maes j...@schaubroeck.be:
 Op 16/08/2011 11:45, Hugo Letemplier schreef:

 Hi list!

 I have a recurrent issue in Bacula.

 I specified a very long duration for job and file retention.
 I specified Purge File = No, Purge Job = No, Purge Volume = yes.
 I have AutoPrune = yes in the pool description.

 I work on a volume rotation basis: one volume at a time is kept,
 according to its pool's retention parameters.

 I want to keep in the database, for some jobs, all the files that are
 in the volume, because these jobs are used more often to restore
 individual files than the whole job.

 But I still have Purged Files = Yes in the job run list in Bat.

 I don't understand what's happening!

 Could you help me?

 How can I check that my pruning directives work as expected?

 Thank you in advance.

 Hugo

 Did you set these long retention times etc. after you created the
 volumes? Those parameters are set for a volume when it's created, and
 are not automatically updated when you change the config and/or reload
 Bacula. You have to issue the update command in bconsole and update
 your pool from resource for all pools, and the volume parameters with
 all volumes from pool (or all volumes from all pools).

 Kind regards,
 Jeremy

  DISCLAIMER 
 http://www.schaubroeck.be/maildisclaimer.htm

 Thanks for your answer

 Yes, I have already updated my volumes, so I don't understand very well
 what's happening.


 When in doubt, ignore bat. Instead, look at what bconsole reports.

 --
 Dan Langille - http://langille.org


 I checked via a restore.
 I think there is no doubt, because Bacula did an incremental as big as
 a full just after the jobs that were marked Pruned Files: yes.
 Also, when I do a restore, it says that there are missing file entries,
 and I can't select files and dirs in the tree in bconsole.


 I believe you are reaching conclusions based on suspicions, not facts.
 The conclusions may be correct, but that doesn't help find the cause.

 Look at the output of 'list media'. Look at the value in volretention.
 It is in seconds.

 Is this value matching up with what you expect?


 --
 Dan Langille - http://langille.org



 The volume retention is correct. Moreover, the problem isn't with the
 volume retention but with the file retention.

 My volumes are never pruned because, for the moment, I have set a
 rather too-long duration; I keep an eye on the volume size and the
 remaining space on the hard drive. I delete jobs, then prune the
 volumes that contain no jobs, and run purge action=Truncate manually
 on the purged volumes. (I am at this point because Bacula is pruning
 files and I don't know the origin of the problem, so I do a lot of
 things manually for safety reasons.)

 In other words I set AutoPrune = No


 Where can I check this file retention? Is it written in the database
 like the volume retention, or only in the configuration file?
 What configuration directives should I check?

 There are defaults for File retention (see below), so if you have not
 specified them, the defaults apply.

 Look in the client table:

 bacula=# select clientid, name, autoprune, fileretention, jobretention from 
 client limit 3;
  clientid |   name   | autoprune | fileretention | jobretention
 --+--+---+---+--
       21 | havoc-fd |         1 |       5184000 |     15552000
       12 | nz-fd    |         1 |       2592000 |     15552000
       10 | ducky-fd |         1 |       5184000 |     15552000
 (3 rows)


 Retention is in seconds.


 I updated all the file retention directives (in client and pool) to
 a duration of 1 year.
 So now it shouldn't purge files except if their jobs are deleted, if a
 job has its files purged, or if the volume that contains the job is
 purged, shouldn't it?


 No.

 Job, File, and Volume retention are all separate elements.

 I set my Job and File retention to a large value, say 30 years, then let 
 Volume
 retention take care of pruning.

This is what I wanted to say: volume pruning takes precedence over the
other directives.


 From:
 http://www.bacula.org/5.0.x-manuals/en/main/main/Catalog_Maintenance.html#SECTION00461

 ###
 File Retention = time-period-specification
 The File Retention record defines the length of time that Bacula will keep 
 File records in the Catalog database. When this time period expires, and if 
 AutoPrune is set to yes, Bacula will prune (remove) File records that are 
 older than the specified File Retention period. The pruning will occur at the 
 end of a backup Job for the given Client. Note that the Client database 
 record contains a copy of the File and Job

[Bacula-users] I don't want to prune files ! …… Not working as expected

2011-08-16 Thread Hugo Letemplier
Hi list!

I have a recurrent issue in Bacula.

I specified a very long duration for job and file retention.
I specified Purge File = No, Purge Job = No, Purge Volume = yes.
I have AutoPrune = yes in the pool description.

I work on a volume rotation basis: one volume at a time is kept,
according to its pool's retention parameters.

I want to keep in the database, for some jobs, all the files that are
in the volume, because these jobs are used more often to restore
individual files than the whole job.

But I still have Purged Files = Yes in the job run list in Bat.

I don't understand what's happening!

Could you help me?

How can I check that my pruning directives work as expected?

Thank you in advance.

Hugo

--
uberSVN's rich system and user administration capabilities and model 
configuration take the hassle out of deploying and managing Subversion and 
the tools developers use with it. Learn more about uberSVN and get a free 
download at:  http://p.sf.net/sfu/wandisco-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] I don't want to prune files ! …… Not working as expected

2011-08-16 Thread Hugo Letemplier
2011/8/16 Jeremy Maes j...@schaubroeck.be:
 Op 16/08/2011 11:45, Hugo Letemplier schreef:

 Hi list!

 I have a recurrent issue in Bacula.

 I specified a very long duration for job and file retention.
 I specified Purge File = No, Purge Job = No, Purge Volume = yes.
 I have AutoPrune = yes in the pool description.

 I work on a volume rotation basis: one volume at a time is kept,
 according to its pool's retention parameters.

 I want to keep in the database, for some jobs, all the files that are
 in the volume, because these jobs are used more often to restore
 individual files than the whole job.

 But I still have Purged Files = Yes in the job run list in Bat.

 I don't understand what's happening!

 Could you help me?

 How can I check that my pruning directives work as expected?

 Thank you in advance.

 Hugo

 Did you set these long retention times etc. after you created the
 volumes? Those parameters are set for a volume when it's created, and
 are not automatically updated when you change the config and/or reload
 Bacula. You have to issue the update command in bconsole and update
 your pool from resource for all pools, and the volume parameters with
 all volumes from pool (or all volumes from all pools).

 Kind regards,
 Jeremy

  DISCLAIMER 
 http://www.schaubroeck.be/maildisclaimer.htm

Thanks for your answer

Yes, I have already updated my volumes, so I don't understand very well
what's happening.

Hugo

--
uberSVN's rich system and user administration capabilities and model 
configuration take the hassle out of deploying and managing Subversion and 
the tools developers use with it. Learn more about uberSVN and get a free 
download at:  http://p.sf.net/sfu/wandisco-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] I don't want to prune files ! …… Not working as expected

2011-08-16 Thread Hugo Letemplier
2011/8/16 Dan Langille d...@langille.org:

 On Aug 16, 2011, at 6:36 AM, Hugo Letemplier wrote:

 2011/8/16 Jeremy Maes j...@schaubroeck.be:
 Op 16/08/2011 11:45, Hugo Letemplier schreef:

 Hi list!

 I have a recurrent issue in Bacula.

 I specified a very long duration for job and file retention.
 I specified Purge File = No, Purge Job = No, Purge Volume = yes.
 I have AutoPrune = yes in the pool description.

 I work on a volume rotation basis: one volume at a time is kept,
 according to its pool's retention parameters.

 I want to keep in the database, for some jobs, all the files that are
 in the volume, because these jobs are used more often to restore
 individual files than the whole job.

 But I still have Purged Files = Yes in the job run list in Bat.

 I don't understand what's happening!

 Could you help me?

 How can I check that my pruning directives work as expected?

 Thank you in advance.

 Hugo

 Did you set these long retention times etc. after you created the
 volumes? Those parameters are set for a volume when it's created, and
 are not automatically updated when you change the config and/or reload
 Bacula. You have to issue the update command in bconsole and update
 your pool from resource for all pools, and the volume parameters with
 all volumes from pool (or all volumes from all pools).

 Kind regards,
 Jeremy

  DISCLAIMER 
 http://www.schaubroeck.be/maildisclaimer.htm

 Thanks for your answer

 Yes, I have already updated my volumes, so I don't understand very well
 what's happening.


 When in doubt, ignore bat. Instead, look at what bconsole reports.

 --
 Dan Langille - http://langille.org


I checked via a restore.
I think there is no doubt, because Bacula did an incremental as big as
a full just after the jobs that were marked Pruned Files: yes.
Also, when I do a restore, it says that there are missing file entries,
and I can't select files and dirs in the tree in bconsole.

Thank you

Hugo

--
uberSVN's rich system and user administration capabilities and model 
configuration take the hassle out of deploying and managing Subversion and 
the tools developers use with it. Learn more about uberSVN and get a free 
download at:  http://p.sf.net/sfu/wandisco-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Python in bacula

2011-08-04 Thread Hugo Letemplier
Hi


I am beginning with Python scripting.

I imagine having monitoring/management scripts for Bacula coded in
Python. Indeed, Bash is not well suited to managing Bacula's object
concepts (jobs, schedules…).

First, I saw that Bacula can execute Python code on internal events.

Is there a Python library that contains class descriptions for Bacula
content?

Can I directly call Python methods that communicate with the Director
or the database?

Imagine that I want to update my volume parameters via Python, without
sending commands to bconsole.

I don't much like using bconsole in scripts, because it's not always
easy to do what you want. I also don't like accessing the database
directly, because it's internal to Bacula's workings.

How can I do that kind of stuff?

I would like to drive Bacula with Python like I can with bconsole. Is
this possible?
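
For what it's worth, a minimal sketch of one way to do this, driving
bconsole with pexpect (an illustration, not a supported API; it assumes
bconsole prints its "*" prompt between commands):

import pexpect

# Spawn an interactive bconsole session and wait for its "*" prompt.
child = pexpect.spawn("bconsole", encoding="utf-8", timeout=30)
child.expect(r"\*")

# Send a console command and capture everything printed before the next prompt.
child.sendline("list volumes")
child.expect(r"\*")
print(child.before)

child.sendline("quit")
child.close()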

Thanks



[Bacula-users] Manually upgrade next job, disable next job ?

2011-07-26 Thread Hugo Letemplier
Hi

I have a job scheduled every day.
Sometimes I need to disable only the next run, without disabling the
runs scheduled after it.

For example: I have just seen that the previous incremental has become
really big. How can I tell Bacula directly to upgrade the next run
of this job to a full?

The solution would be:

disable next job=MyJob
run job=MyJob level=Full when=

Or

upgrade next job=MyJob level=Full

How can I do that?

If it's not possible, I think it would be a nice feature to add to
the next Bacula versions.

Thanks
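The closest existing equivalent may be scheduling the full manually with a when= value just before the regular run, e.g. (date and time illustrative):

* run job=MyJob level=Full when="2011-07-27 02:00:00" yes

This does not by itself disable the scheduled incremental; see the duplicate-job directives in the follow-up below for suppressing it.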



Re: [Bacula-users] Manually upgrade next job, disable next job ?

2011-07-26 Thread Hugo Letemplier
Yes, this is what I was using, but since I am running Copy jobs to
make archives, this mechanism no longer applies.
Is there another solution?

Thanks

2011/7/26 Pietro Bertera pie...@bertera.it:
 Hi,

 a workaround might be to set for your job:

 Allow Duplicate Jobs = no
 Cancel Lower Level Duplicates = yes

 and schedule the full job.

 Regards,

 Pietro
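For reference, a sketch of where Pietro's directives would sit (job name illustrative, other directives elided):

Job {
  Name = MyJob
  ...
  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
}

With this in place, a scheduled incremental that collides with a manually scheduled full of the same job should be cancelled in favour of the higher level.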

 2011/7/26 Hugo Letemplier hugo.let...@gmail.com

 Hi

 I have a job scheduled every day.
 Sometimes I need to disable only the next run, without disabling the
 runs scheduled after it.

 For example: I have just seen that the previous incremental has become
 really big. How can I tell Bacula directly to upgrade the next run
 of this job to a full?

 The solution would be:

 disable next job=MyJob
 run job=MyJob level=Full when=

 Or

 upgrade next job=MyJob level=Full

 How can I do that?

 If it's not possible, I think it would be a nice feature to add to
 the next Bacula versions.

 Thanks



Re: [Bacula-users] Pruning Jobs older than 6 months

2011-07-21 Thread Hugo Letemplier
Don't forget to update your pool record.

2011/7/20 Rickifer Barros rickiferbar...@gmail.com:
 Also, could I set my Job and File Retention to the same value as the Volume
 Retention without problems? Is that right?

 On Wed, Jul 20, 2011 at 11:15 AM, Rickifer Barros rickiferbar...@gmail.com
 wrote:

 Thanks Jeremy...I'll do that.

 On Wed, Jul 20, 2011 at 11:13 AM, Jeremy Maes j...@schaubroeck.be wrote:

 Op 20/07/2011 16:03, Rickifer Barros schreef:

 Ok... but I don't want to disable pruning entirely... I want Bacula to do
 the pruning only according to my settings. If I set Autoprune = no, will
 Bacula still prune jobs according to the Volume Retention? Because I have
 configured the Volume Retentions but not the Job Retentions.

 Then you will have to set a Job retention time for your clients or pools
 that is very large so it will never affect your jobs (100 years for
 example). If you don't explicitly set this it'll just take the default of
 180 days (aka 6 months).
 If your Volume retention is shorter than 6 months it won't make any
 difference though because job records are also pruned when the volume
 they're on is pruned.

 The same goes for File Retention, which has a default value of 60 days
 (see the manual). If your volume retention is set higher I'd advise setting
 this to a very large time just like the job retention.

 If you set Autoprune = no bacula will do no pruning at all, so you'll
 want to keep this at Autoprune = yes.

 Regards,
 Jeremy
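A sketch of the very long retentions Jeremy describes, set in the Client resource (name and values illustrative; address and password elided):

Client {
  Name = myhost-fd
  Address = myhost.example.com
  Password = "..."
  File Retention = 100 years
  Job Retention = 100 years
  AutoPrune = yes
}

With AutoPrune left on, pruning still runs, but the 100-year periods mean it never removes anything before the volume-level pruning does.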

 On Wed, Jul 20, 2011 at 10:48 AM, Jeremy Maes j...@schaubroeck.be wrote:

 Op 20/07/2011 15:20, Rickifer Barros schreef:

 Hello Guys...

 Why does Bacula always try to prune jobs older than 6 months after each
 job, even when there is no setting for this?

 20-Jul 10:01 test-dir JobId 96: Begin pruning Jobs older than 6 months.
 20-Jul 10:01 test-dir JobId 96: No Jobs found to prune.
 20-Jul 10:01 test-dir JobId 96: Begin pruning Jobs.
 20-Jul 10:01 test-dir JobId 96: No Files found to prune.
 20-Jul 10:01 test-dir JobId 96: End auto prune.

 Thanks.

 If I'm not mistaken 6 months is the default value used for pruning when
 you don't define any retention periods.
 If you want to disable pruning entirely you need to add an Autoprune =
 no directive to your pool configs.

 Kind regards,
 Jeremy








Re: [Bacula-users] Web interface

2011-07-07 Thread Hugo Letemplier
2011/7/6 Mauro Colorio mauro.colo...@gmail.com:
 I would like to say exactly webmin bacula module


 what's wrong with the webmin module?
 For a non-sysadmin user it is enough; I think a non-admin just needs to
 restore a file if something goes wrong :)

 ciao
 Mauro



With Webacula you can add ACLs and users.
It sometimes fails on restores, but I think it's a good project,
and many improvements will come later.

Something I don't understand is that there are lots of projects; I
think there are too many.
It would be nice if these projects could work together.

Hope…

Hugo



Re: [Bacula-users] Feature idea : Copy in bat ; copy selection improvement ; multiple jobid in bconsole copy job and other copy idea…

2011-06-17 Thread Hugo Letemplier
2011/6/7 Hugo Letemplier hugo.let...@gmail.com:
 Hi

 I have some ideas. What do you think about it ?

 1. Be able to dynamically select various jobs to copy: a sort of
 manual copy selection when you run a new job manually.

 2. Add a copy option in the job-run list of bat when you right-click on
 one job or a job selection (with the CTRL key).

 3. Select multiple jobs via the following command in bconsole:
 run job=MyCopyJob-to-tape jobid=1000
 So you can do the following:
 run job=MyCopyJob-to-tape jobid=1000,999,998…

 4. Be able to copy from many pools into one, so the Next Pool directive
 should be in the copy job resource.

 For me: I have many pools, and one that is for archiving on tapes (call
 them WORM tapes). Each month I want all the uncopied jobs and their
 dependencies (the previous full, diffs or incs) from many pools to be
 archived on a tape, so that I only have to use one tape to restore the
 full and some incs.

 For the moment I am trying to do that with an SQL query, but I need
 more flexibility, because in some months I will have to copy (or skip)
 some extra data, as the tape may be exceptionally full, or the monthly
 one may be.

 Does anybody know of any supplementary manual about copy jobs? I
 think they are not completely explained in the Bacula manual.
 Best Regards


I have a question about this previous mail that I sent.
Is there a risk in copying or migrating a job to a Next Pool that comes
from the copy job's pool rather than from the original job's pool?

My copy job selects job IDs in the database, from many pools.
It is configured with a pool named _copy that has a copy Next Pool.

Most of the copy-job selection directives are relative to the next pool
of the job's own pool.
That is really limiting, because each copy job would have to be executed
on the same pool as the jobs to copy.
Is that kind of configuration tricky?

Moreover, I select the jobs with several fairly complex queries, so it is
not very easy, and it took me a long time to build a good selection.
Is there documentation about the role of the columns in the database,
and about how the database handles the pruning and copy features?

A last question about copy jobs: I often back up the catalog together with
my LTO tape archive of the month. Is there a way to read an old
catalog backup, or to copy the file list of my archives into a particular
table of my catalog?
An option like find filename in the archive catalog could be
interesting. It might be slower than finding it in the regular
catalog, but it would let you find an old file that is archived on a
WORM tape.

BR

Hugo
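A sketch of a catalog-driven copy job of the kind described, assuming SQL-based selection; the pool names and the query are illustrative, and the query deliberately omits the uncopied-only and dependency logic discussed above:

Job {
  Name = ArchiveToTape
  Type = Copy
  Selection Type = SQLQuery
  Selection Pattern = "SELECT Job.JobId FROM Job JOIN Pool ON Job.PoolId = Pool.PoolId WHERE Pool.Name IN ('Mail-Full','Mail-Inc') AND Job.Type = 'B' ORDER BY Job.StartTime"
  Pool = Mail-Full
  Client = bacula-fd
  Fileset = MacFull
  Messages = Standard
}

Where the copies land is governed by a Next Pool directive; as discussed above, in 5.0.x this is tied to the pool configuration rather than to the copy job itself.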



[Bacula-users] Base Job ?

2011-06-07 Thread Hugo Letemplier
Hi

I am backing up some systems with base jobs.
I ran the base job and then a full.
I want to verify that the full was actually based on the base job, but I
can't.

On the mailing list, I found a job report containing the line
  Base files/Used files:  39336/39114 (99.44%)
  JobId:                  371
  Job:                    server-fd-data.2011-05-09_16.58.33_17
  Backup Level:           Full (upgraded from Incremental)
  Client:                 server-fd 5.0.3 (04Aug10) Linux,Cross-compile,Win32
  FileSet:                server-set 2011-05-09 14:29:09
  Pool:                   Yearly (From User input)
  Catalog:                MyCatalog (From Client resource)
  Storage:                FileStorage (From Job resource)
  Scheduled time:         09-May-2011 16:58:24
  Start time:             09-May-2011 17:02:10
  End time:               09-May-2011 18:01:16
  Elapsed time:           59 mins 6 secs
  Priority:               10
  FD Files Written:       39,344
  SD Files Written:       39,344
  FD Bytes Written:       8,600,464,982 (8.600 GB)
  SD Bytes Written:       8,606,510,471 (8.606 GB)
  Rate:                   2425.4 KB/s
  Software Compression:   39.7 %
  Base files/Used files:  39336/39114 (99.44%)
  VSS:                    yes
  Encryption:             no
  Accurate:               yes
  Volume name(s):         Tape-Year-0001
  Volume Session Id:      358
  Volume Session Time:    1302812565
  Last Volume Bytes:      252,270,109,076 (252.2 GB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK


On my bacul



Re: [Bacula-users] Base Job ?

2011-06-07 Thread Hugo Letemplier
2011/6/7 Hugo Letemplier hugo.let...@gmail.com:
 Hi

 I am backing up some systems with base jobs.
 I ran the base job and then a full.
 I want to verify that the full was actually based on the base job, but I
 can't.

 On the mailing list, I found a job report containing the line
  Base files/Used files:  39336/39114 (99.44%)
  JobId:                  371
  Job:                   server-fd-data.2011-05-09_16.58.33_17
  Backup Level:           Full (upgraded from Incremental)
  Client:                 server-fd 5.0.3 (04Aug10) Linux,Cross-compile,Win32
  FileSet:                server-set 2011-05-09 14:29:09
  Pool:                   Yearly (From User input)
  Catalog:                MyCatalog (From Client resource)
  Storage:                FileStorage (From Job resource)
  Scheduled time:         09-May-2011 16:58:24
  Start time:             09-May-2011 17:02:10
  End time:               09-May-2011 18:01:16
  Elapsed time:           59 mins 6 secs
  Priority:               10
  FD Files Written:       39,344
  SD Files Written:       39,344
  FD Bytes Written:       8,600,464,982 (8.600 GB)
  SD Bytes Written:       8,606,510,471 (8.606 GB)
  Rate:                   2425.4 KB/s
  Software Compression:   39.7 %
  Base files/Used files:  39336/39114 (99.44%)
  VSS:                    yes
  Encryption:             no
  Accurate:               yes
  Volume name(s):         Tape-Year-0001
  Volume Session Id:      358
  Volume Session Time:    1302812565
  Last Volume Bytes:      252,270,109,076 (252.2 GB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK


 On my bacul


Sorry, an annoying bug sent the mail early; I don't know what happened.

So I continue my email now :)

In my Bacula logs I never see the line "Base files/Used files:
39336/39114 (99.44%)".

Here is my configuration :

Job {
  Name = Base:OSXServer:10.6
  Client = default
  Type = Backup
  Level = Base
  Accurate = yes
  Priority = 40
  Pool = default
  Fileset = MacFull
  Messages = Standard
}

Job {
  Name = Sauvegarde:ns1:Systeme
  JobDefs = Systeme
  Base = Sauvegarde:ns1:Systeme,Base:OSXServer:10.6
  Client = ns1
  Fileset = MacFull
}
I ran the base backup:

run job=Base:OSXServer:10.6 client=MyServer Pool=System

Then I ran my full:

run job=Sauvegarde:ns1:Systeme Pool=System

In the job's log I got no reference to the base job!

Can you help me?

Thanks



[Bacula-users] Feature idea : Copy in bat ; copy selection improvement ; multiple jobid in bconsole copy job and other copy idea…

2011-06-07 Thread Hugo Letemplier
Hi

I have some ideas. What do you think about it ?

1. Be able to dynamically select various jobs to copy: a sort of
manual copy selection when you run a new job manually.

2. Add a copy option in the job-run list of bat when you right-click on
one job or a job selection (with the CTRL key).

3. Select multiple jobs via the following command in bconsole:
run job=MyCopyJob-to-tape jobid=1000
So you can do the following:
run job=MyCopyJob-to-tape jobid=1000,999,998…

4. Be able to copy from many pools into one, so the Next Pool directive
should be in the copy job resource.

For me: I have many pools, and one that is for archiving on tapes (call
them WORM tapes). Each month I want all the uncopied jobs and their
dependencies (the previous full, diffs or incs) from many pools to be
archived on a tape, so that I only have to use one tape to restore the
full and some incs.

For the moment I am trying to do that with an SQL query, but I need
more flexibility, because in some months I will have to copy (or skip)
some extra data, as the tape may be exceptionally full, or the monthly
one may be.

Does anybody know of any supplementary manual about copy jobs? I
think they are not completely explained in the Bacula manual.

Best Regards



Re: [Bacula-users] Bacula slow transfer / Compression latency / feature request

2011-06-01 Thread Hugo Letemplier
2011/5/31 Sean Clark smcl...@tamu.edu:
 On 05/30/2011 02:11 PM, reiserfs wrote:
 Hello, I'm new to the Bacula scene. I have used HP Data Protector with an
 HP fibre-channel tape library for backup.

 With Data Protector I had 1 Gbps interfaces and switch, and all jobs
 finished very fast, with transfer rates around 50-80 MB/s.

 Now I'm using Bacula with a Dell TL2000 over iSCSI, and in my first tests
 I got only 6 MB/s with the same 1 Gbps interfaces and switch.

 So what am I missing?

 Setup used to test:
 Bacula Director running on Slackware64 13.1
 Bacula client on Windows 2003 Server
 Turning on software gzip compression on the client is definitely a major
 performance killer, unfortunately, so that would be my first guess as
 well.  This looks like a good place to mention some testing I've done.

 I've been doing some testing lately, due to also being somewhat
 aggravated at the apparently slow transfer rates I get during Bacula
 backups, but it's starting to look like it's not really Bacula's fault
 most of the time.  Most of the time, the problem is simply how
 fast the client can read files off the disk and send them.  The
 network (at least on Gb) is not usually the problem, nor even database
 activity on the director (attribute spooling will help if you DO have
 any problems with that).

 Encryption and gzip compression by the client introduce major latency
 that unavoidably slows down the transfer, and this isn't specifically a
 bacula client issue.  Other things I have seen that cause major
 slowdowns are antivirus software on Windows (particularly on-access
 scanning) and active use of the computer while the backup is running.

 Regarding compression, specifically, though - testing on my laptop here,
 I tested just reading files from /usr and /home with tar, piping them
 through pv to get the transfer rate (and then dumping them directly to
 /dev/null).  I repeated the tests then with some different compression
 schemes inserted.
 for example:

 tar -cf - /usr | pv -b -r -a > /dev/null               (No Compression)
 tar -cf - /usr | gzip -c | pv -b -r -a > /dev/null     (GZIP)
 tar -cf - /usr | gzip -1 -c | pv -b -r -a > /dev/null  (GZIP1)
 tar -cf - /usr | lzop -c | pv -b -r -a > /dev/null     (LZO)

 (and repeated for /home)

 Here are my results:

 /usr
 No Compression: 5.58GB total data, Avg 13.1MB/s (436s to finish)
 GZIP: 2.11GB Total data, Avg 2.97MB/s (727s to finish)
 GZIP1: 2.36GB Total data, Avg 4.13MB/s (585s to finish)
 LZO: 2.82GB Total Data, Avg 6.48MB/s (445s to finish)

 /home (includes a lot of e.g. media files that are not very compressible)
 No Compression: 91.56GB Total Data, 34.5MB/s Avg,, (~2700s to finish)
 GZIP: 77.1GB Total Data, 9.78MB/s Avg, (8072s to finish)
 GZIP1: 77.6GB Total Data, 11.7MB/s Avg, (~6790s to finish)
 LZO: 80.6GB Total Data, 28.3MB/s Avg, (~2900s to finish)

 So, yes, if you have gzip compression turned on, you'll almost certainly
 see a huge increase in speed if you turn it off (I believe most tape
 drives can or will do compression in hardware, so you don't need to
 pre-compress at the client).

 If you are backing up to disk as I am (or for some reason aren't doing
 hardware compression on the tape drive), you can also get a small speed
 increase by dropping the gzip compression down to the minimum
 (Compress=GZIP1 in the FileSet), which seem to compress almost as well
 overall but induces less latency.

 FEATURE REQUEST:
 However, assuming my tests so far are representative, it looks like LZO
 compression can get backup jobs transferred in almost the same amount of
 time as no compression at all, while still substantially reducing the
 amount of data transferred and stored (not as much as GZIP does, but
 still a noteworthy amount).  Is it possible we could get a
 Compress=LZOP capability added to bacula-fd?

 tl;dr: Turn off compression until and unless an LZO compression option
 is implemented, unless you are desperate for space on your backup media,
 in which case you'll just have to cope with the slow backups.




+1, LZO seems really faster; that would be a good feature.

I want to add that I don't think the whole directory goes into one
tar; I think each file is compressed and encrypted separately.
Can someone tell us what algorithm Bacula uses to compress
and encrypt (by block, by file, …)?


Re: [Bacula-users] Bacula 5.0.3 Macintosh file daemon?

2011-05-31 Thread Hugo Letemplier
2011/5/27 Sean Clark smcl...@tamu.edu:
 On 05/27/2011 08:50 AM, Graham Keeling wrote:
 Hello,
 Does anybody know where I might be able to find a bacula-5.0.3 Mac file 
 daemon?
 Or an installer?
 Thanks.
 I recommend MacPort[1] for that.

 port install bacula +client_only
 port load bacula (as I recall)

 [1] http://www.macports.org/install.php

 --
 vRanger cuts backup time in half-while increasing security.
 With the market-leading solution for virtual backup and recovery,
 you get blazing-fast, flexible, and affordable data protection.
 Download your free trial now.
 http://p.sf.net/sfu/quest-d2dcopy1
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users


Take care with the ACL option if you need to back up ACLs from your files.
There is a bug with the latest OSX versions; something may have changed in
Apple's ACL library.

Moreover, I don't recommend using MacPorts to install Bacula, because most
of the time you are backing up a server, and on a server you don't need
to install the whole developer environment (MacPorts requires Xcode).

I think the best way to install the OSX FD is to install Xcode on a
recent Mac and then compile Bacula and build it as a package.
Never make a complete package with MacPorts: if you want to keep the
libraries you used for compilation with MacPorts, you will need to
install all those libraries via the package, and if you deploy that same
package on another machine that has MacPorts installed, it will
overwrite many directories of that installation without warning.

So use the sources from the GitHub repository, run ./configure, then
make -C platforms/osx.
You will then find a package in the products subdirectory.
I hope you understood; I am not very good at English.
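A sketch of the build steps described, assuming Xcode is installed and a recent source checkout; the configure flag is illustrative:

./configure --enable-client-only
make
make -C platforms/osx    # the installer package lands in platforms/osx/products/

Building on the oldest OSX release you need to support generally keeps the resulting package deployable on the newer ones.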



Re: [Bacula-users] Erase a disk volume ?

2011-05-26 Thread Hugo Letemplier
2011/5/26 Radosław Korzeniewski rados...@korzeniewski.net:
 Hi,

 W dniu 25 maja 2011 16:35 użytkownik Hugo Letemplier hugo.let...@gmail.com
 napisał:

  If you updated your volume parameters, then execute the following command
  in the console:
  * purge volume action=all allpools storage=yourstorage-sd

 Are you sure this command will purge all my volumes? I have critical
 data on some other volumes.

 I'm using this command in an Admin Job for cyclic volume truncation. It is
 working as expected.

Oh! It's magic! It's also really threatening, because it starts with purge :)

Thanks



[Bacula-users] Erase a disk volume ?

2011-05-25 Thread Hugo Letemplier
Hi,

I have some disk volumes that I just want to erase on demand.
Indeed, these were not recycled for various reasons (not in the
right pool, a new job upgraded to full, bad retention times…).
I don't want to delete these volumes, because I want to check that over
time they do get recycled, and so validate my backup strategy.

All I want is to free the space used by these old volumes, because
they have been purged.

I thought of keeping the file and writing a null value into it: maybe
something like echo "" > /Volumes/LabelOfTheVolume
I want to keep the label in order to reuse the volume.

Thank you in advance



Re: [Bacula-users] Erase a disk volume ?

2011-05-25 Thread Hugo Letemplier
2011/5/25 Phil Stracchino ala...@metrocast.net:
 On 05/25/11 05:59, Hugo Letemplier wrote:
 Hi,

 I have some disk volumes that I just want to erase on demand.
 Indeed, these were not recycled for various reasons (not in the
 right pool, a new job upgraded to full, bad retention times…).
 I don't want to delete these volumes, because I want to check that over
 time they do get recycled, and so validate my backup strategy.

 All I want is to free the space used by these old volumes, because
 they have been purged.

 I thought of keeping the file and writing a null value into it: maybe
 something like echo "" > /Volumes/LabelOfTheVolume
 I want to keep the label in order to reuse the volume.

 In Bacula 5.0.3, you can use Action On Purge = Truncate, iirc.

I added this option to my pool but it's not working; I still see the
large volume files taking space on my drive.
I updated all my volumes before purging.

Hugo
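If I remember the 5.0.x behaviour correctly, the directive only marks purged volumes as eligible for truncation; the file is actually shrunk when a purge command with the truncate action runs. A sketch (pool and storage names illustrative):

Pool {
  Name = Scratch-Disk
  ...
  Action On Purge = Truncate
}

and then, in bconsole or an Admin job:

* purge volume action=truncate pool=Scratch-Disk storage=Local

Radosław's command in the following message does the same thing across all pools at once.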



Re: [Bacula-users] Erase a disk volume ?

2011-05-25 Thread Hugo Letemplier
2011/5/25 Radosław Korzeniewski rados...@korzeniewski.net:
 Hi,

 2011/5/25 Hugo Letemplier hugo.let...@gmail.com

 I added this option to my pool but it's not working; I still see the
 large volume files taking space on my drive.
 I updated all my volumes before purging.


 If you updated your volume parameters, then execute the following command
 in the console:
 * purge volume action=all allpools storage=yourstorage-sd

Are you sure this command will purge all my volumes? I have critical
data on some other volumes.



Re: [Bacula-users] hfsplussupport enabled and data encryption produces Signature is invalid errors

2011-05-12 Thread Hugo Letemplier
2010/11/22 Paulo Martinez martinez...@googlemail.com:

 Dear List,

 I still have problems when enabling encryption and hfsplussupport for
 OSX's filesystem.

 Setup: FD on OSX, Dir and SD on Linux, and data encryption enabled for
 the FD.

 The FileSet for the OSX client is configured on the Linux box (Director)
 with:

   Options {
     signature = MD5
     compression = gzip
     # hfsplussupport = yes
     xattrsupport = yes
   }

 If I try to restore the files back to the FD (OSX), and the FileSet of
 the corresponding job had hfsplussupport enabled, then I get Signature
 is invalid errors while restoring the files. The files are decrypted
 correctly and saved at the destination (OSX client), but the job
 finishes with errors.

 If I disable the hfsplussupport option, the restore job runs fine.

 Errors:
 13-Nov 13:29 stan-client-fd JobId 2: Error: restore.c:977 Signature
 validation failed for file /srv/Pool/Restore/efi/refit/enable-
 always.sh: ERR=Signature is invalid
 13-Nov 13:29 stan-client-fd JobId 2: Error: openssl.c:86 OpenSSL
 digest Verify final failed: ERR=error:04091068:rsa
 routines:INT_RSA_VERIFY:bad signature


 I just want to ask if somebody has the same setup running correctly?

 Thanks in advance

 PM






Hi

I am reviving this topic.

I am having the same problem. Is there a solution?

Thanks
Hugo



Re: [Bacula-users] About retention, and pruning.

2011-05-05 Thread Hugo Letemplier
Hi, I will try to explain in other words.

Lots of people use Bacula more for its tape features than for disk
backups, but there are more and more disk-to-disk-to-tape backup
strategies, where tapes take the role of off-site archiving.
These two models are really different. If you use the disk backup
strategy that comes with Bacula (chapter 27 of the main doc), you get
the example of 6 jobs per volume and a 7-volume limit for the
incremental pool. The problem is that if you need to cancel a job, or
if it fails, the next incremental sequence will start on the wrong
volume, because the previous incremental sequence won't have reached the
maximum jobs-per-volume limit.

Alternatively, you can use Volume Use Duration to rotate your
incremental sequence from one volume to another (the solution I use for
the moment). In this case, imagine that your last job fails: the volume
retention is then not updated at the end of the last job (because it
failed).

Another problem is that with such a strategy you can't use one job per
volume. Indeed, suppose you use one volume per job with auto-pruning and
recycling, and then you want to restore the last job (considering that
you use full + diff + inc jobs).
The incremental volume used for one incremental job has a defined
retention period. After this retention period you must consider the
volume lost (it can be pruned at any time).
The next day you run the next incremental. The retention period for
this volume/job is the same as for the previous one, and so it becomes
purgeable one day after the previous job.

= : lifetime of a volume

diff : ========
inc  :    ========
inc  :       =====x==
inc  :          ========

You won't be able to do a reliable restore at point x, because
your first incremental could already have been purged and recycled.

Does anyone see this problem?

2011/5/5 Dan Langille d...@langille.org:
 On May 4, 2011, at 3:26 AM, Graham Keeling wrote:

 On Fri, Apr 29, 2011 at 11:11:24AM +0200, Hugo Letemplier wrote:
 2011/4/29 Jérôme Blion jerome.bl...@free.fr:
 On Thu, 28 Apr 2011 17:33:48 +0200, Hugo Letemplier
 hugo.let...@gmail.com
 wrote:
  After the job ran many times, I have the following volume-to-job
  mapping:
 Vol name   Level      Time
 Test1         Full        15:50
 324            Inc         16:00
 325            Inc         16:10
 326            Inc         16:20
 324            Inc         16:30
 Test2         Full        16:40
 325            Inc         16:50
 326            Inc         17:00

  This is problematic, because Vol324 is recycled instead of a new one
  being created.
  I am not sure I understand the various retention periods: file, job,
  volume.
  I think I can increase the retention times, but the problem will
  always be the same.
  E.g., if I keep my incrementals one hour, then the first ones will always
  be purged first.
  In a good strategy you purge the full sequence of incrementals at the
  same time, because you need to recycle your volumes and don't want to
  keep a recent (incremental) volume without the previous ones.

  You would waste your tape/disk space.

  To do that, I imagine I would need to create one pool per day and reduce
  the retention periods progressively. It doesn't make sense!
  I have turned the problem over on all its sides, but I can't find a good
  solution. Maybe the other retention periods are the answer, but I
  didn't succeed with them.
  Thanks in advance

  That means that your upper backup levels should have greater retentions,
  to be sure that at any time you can use the full + diff + inc if needed.
  Keeping incrementals without the full backup can be useful to restore
  only specific files.
 Yes, but this problem is the same between incremental backups.
 Lots of people recommended that I use one pool per level:
 it works for fulls and differentials, but not for the inc pool.
 Maybe one inc-pool per incremental run of a scheduling cycle would
 be good? But it's not simple.
 I think a new feature adding dependencies between the various job
 levels could be nice in future Bacula versions.
 The idea is to allow pruning only of volumes/jobs that aren't needed
 by other ones, whatever the retention times.
 As a consequence: you can prune a full only (((if the differential is
 pruned) if the XXX incrementals are pruned) if the last incremental is
 pruned).
 So you can say that the maximum retention time for a full is at
 least equal to the retention time of the last inc plus the delay between
 the full and this last inc, so you have something like this:
 full : ============================================
 inc  :   ==========================================
 inc  :     ========================================
 inc  :       ======================================
 inc  :         ====================================
 inc  :           ==================================
 inc  :             ================================
 diff :               ==============================
 inc  :                 ============================
 inc  :                   ==========================
 inc  :                     ========================
 inc  :                       ======================
 inc  :                         ====================
 inc  :                           ==================
 diff

[Bacula-users] Backing up virtual machines Windows XP and Terminal Server 2003 on linux kvm

2011-05-05 Thread Hugo Letemplier
Hi,

I will have to back up some Windows virtual machines. These machines
are on a Proxmox platform, which uses snapshots.
So far I have backed up my Linux VMs via a snapshot that was
mounted directly on the host.

Should I use an LVM snapshot on the host, or VSS with the Windows client
in the virtual machine?

I have many doubts. What is the best solution in your opinion?

BR

Hugo



Re: [Bacula-users] About retention, and pruning.

2011-05-04 Thread Hugo Letemplier
2011/5/4 Graham Keeling gra...@equiinet.com:
 On Fri, Apr 29, 2011 at 11:11:24AM +0200, Hugo Letemplier wrote:
 2011/4/29 Jérôme Blion jerome.bl...@free.fr:
  On Thu, 28 Apr 2011 17:33:48 +0200, Hugo Letemplier
  hugo.let...@gmail.com
  wrote:
  After the job ran many times, I have the following volume-to-job
  mapping:
  Vol name   Level      Time
  Test1         Full        15:50
  324            Inc         16:00
  325            Inc         16:10
  326            Inc         16:20
  324            Inc         16:30
  Test2         Full        16:40
  325            Inc         16:50
  326            Inc         17:00
 
   This is problematic, because Vol324 is recycled instead of a new one
   being created.
   I am not sure I understand the various retention periods: file, job,
   volume.
   I think I can increase the retention times, but the problem will
   always be the same.
   E.g., if I keep my incrementals one hour, then the first ones will
   always be purged first.
   In a good strategy you purge the full sequence of incrementals at the
   same time, because you need to recycle your volumes and don't want to
   keep a recent (incremental) volume without the previous ones.

   You would waste your tape/disk space.

   To do that, I imagine I would need to create one pool per day and
   reduce the retention periods progressively. It doesn't make sense!
   I have turned the problem over on all its sides, but I can't find a
   good solution. Maybe the other retention periods are the answer, but I
   didn't succeed with them.
   Thanks in advance

   That means that your upper backup levels should have greater
   retentions, to be sure that at any time you can use the full + diff +
   inc if needed.
   Keeping incrementals without the full backup can be useful to restore
   only specific files.
  Yes, but this problem is the same between incremental backups.
  Lots of people recommended that I use one pool per level:
  it works for fulls and differentials, but not for the inc pool.
  Maybe one inc-pool per incremental run of a scheduling cycle would
  be good? But it's not simple.
  I think a new feature adding dependencies between the various job
  levels could be nice in future Bacula versions.
  The idea is to allow pruning only of volumes/jobs that aren't needed
  by other ones, whatever the retention times.
  As a consequence: you can prune a full only (((if the differential is
  pruned) if the XXX incrementals are pruned) if the last incremental is
  pruned).
  So you can say that the maximum retention time for a full is at
  least equal to the retention time of the last inc plus the delay between
  the full and this last inc, so you have something like this:
  full : ==================================================
  inc  :   ================================================
  inc  :     ==============================================
  inc  :       ============================================
  inc  :         ==========================================
  inc  :           ========================================
  inc  :             ======================================
  diff :               ====================================
  inc  :                 ==================================
  inc  :                   ================================
  inc  :                     ==============================
  inc  :                       ============================
  inc  :                         ==========================
  inc  :                           ========================
  diff :                             ======================
  inc  :                               ====================
  inc  :                                 ==================
  inc  :                                   ================
  inc  :                                     ==============
  inc  :                                       ============
  inc  :                                         ==========

  and not like that:

  diff : ========
  inc  :    ========
  inc  :       ========
  inc  :          ========

 What do you think about such a feature ?

 A while ago, I made a patch that does it. Nobody seemed to want it though.
 http://www.adsm.org/lists/html/Bacula-users/2011-01/msg00308.html



First, I think your patch would be nice; it adds the kind of safety
option that avoids breaking backup dependencies.
I think it's strange that nobody else needed this.
For the moment I don't want to leave the official packages and compile
a custom Bacula.

I found a tip to do it the way I want, but mistakes are still possible:

1 volume per full
1 volume per diff
1 volume per incremental sequence

An incremental sequence means all the successive incrementals between
fulls and/or diffs.
Then I used Volume Use Duration. I set this directive to the
duration of my incremental sequence, e.g. one week.
After one week the volume is not used any more.
I specify a volume retention
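A sketch of the incremental pool described, assuming the one-volume-per-sequence scheme; names and durations are illustrative:

Pool {
  Name = Files-Inc
  Pool Type = Backup
  Storage = Local
  Label Format = Files-Inc-
  Volume Use Duration = 1 week    # close the volume after one sequence
  Volume Retention = 4 weeks
  AutoPrune = yes
  Recycle = yes
}

Once the use duration expires, the next incremental opens a fresh volume, so a whole sequence ages out and recycles together.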

Re: [Bacula-users] About retention, and pruning.

2011-04-29 Thread Hugo Letemplier
2011/4/29 Jérôme Blion jerome.bl...@free.fr:
 On Thu, 28 Apr 2011 17:33:48 +0200, Hugo Letemplier
 hugo.let...@gmail.com
 wrote:
 After the job ran many times, I have the following volume-to-job
 mapping:
 Vol name   Level      Time
 Test1         Full        15:50
 324            Inc         16:00
 325            Inc         16:10
 326            Inc         16:20
 324            Inc         16:30
 Test2         Full        16:40
 325            Inc         16:50
 326            Inc         17:00

 This is problematic, because Vol324 is recycled instead of a new one
 being created.
 I am not sure I understand the various retention periods: file, job,
 volume.
 I think I can increase the retention times, but the problem will
 always be the same.
 E.g., if I keep my incrementals one hour, then the first ones will always
 be purged first.
 In a good strategy you purge the full sequence of incrementals at the
 same time, because you need to recycle your volumes and don't want to
 keep a recent (incremental) volume without the previous ones.

 You would waste your tape/disk space.

 To do that, I imagine I would need to create one pool per day and reduce
 the retention periods progressively. It doesn't make sense!
 I have turned the problem over on all its sides, but I can't find a good
 solution. Maybe the other retention periods are the answer, but I
 didn't succeed with them.
 Thanks in advance

 That means that your upper backup levels should have greater retentions,
 to be sure that at any time you can use the full + diff + inc if needed.
 Keeping incrementals without the full backup can be useful to restore
 only specific files.
Yes, but this problem is the same between incremental backups.
Lots of people recommended that I use one pool per level:
it works for fulls and differentials, but not for the inc pool.
Maybe one inc-pool per incremental run of a scheduling cycle would
be good? But it's not simple.
I think a new feature adding dependencies between the various job
levels could be nice in future Bacula versions.
The idea is to allow pruning only of volumes/jobs that aren't needed
by other ones, whatever the retention times.
As a consequence: you can prune a full only (((if the differential is
pruned) if the XXX incrementals are pruned) if the last incremental is
pruned).
So you can say that the maximum retention time for a full is at
least equal to the retention time of the last inc plus the delay between
the full and this last inc, so you have something like this:
full : ==================================================
inc  :   ================================================
inc  :     ==============================================
inc  :       ============================================
inc  :         ==========================================
inc  :           ========================================
inc  :             ======================================
diff :               ====================================
inc  :                 ==================================
inc  :                   ================================
inc  :                     ==============================
inc  :                       ============================
inc  :                         ==========================
inc  :                           ========================
diff :                             ======================
inc  :                               ====================
inc  :                                 ==================
inc  :                                   ================
inc  :                                     ==============
inc  :                                       ============
inc  :                                         ==========

and not like that:

diff : ========
inc  :    ========
inc  :       ========
inc  :          ========

What do you think about such a feature ?



Re: [Bacula-users] About retention, and pruning.

2011-04-28 Thread Hugo Letemplier
Hi,

I am adding this to make the question more precise and to reopen the topic.
I have read that chapter of the documentation many times, but I was never
sure I understood it.

As you know, when you do an inc you need a sequence of jobs: at
least a full + maybe 1 diff + maybe many incs.
It would be nice to have a safety option to make sure that a
full, a diff or a previous inc won't be purged while there is still
a valid job that depends on them.
For the moment I have set 3 pools per kind of data (e.g. mail data):
for the fulls I configured 0 (unlimited),
for the diffs 2 weeks,
for the incs 1 week.

One problem is that the last inc of a sequence will be kept whereas
the first inc of the same sequence is purged.

When I do a diff, it should be like this:

diff : ==============
inc  :   ============
inc  :     ==========
inc  :       ========

and not like that:

diff : ========
inc  :    ========
inc  :       ========
inc  :          ========

Can you tell me whether Bacula, when it is about to purge a job, checks
whether another job depends on the one it is going to purge?

2011/3/24 Hugo Letemplier hugo.let...@gmail.com:
 Hi,

 Hi,
 I work for a company that needs good backup software, and I chose Bacula.
 I am now trying to configure it, but I have a doubt.
 I am setting up a disk-based backup server with manual LTO tape archiving.
 I have 4 classes of data to back up:
 - my mail server
 - my infrastructure (DNS, directory…)
 - my file server
 - my information services

 I created 3 pools per class (full - diff - inc), plus one more for
 archives, with an infinite retention.

 My question for the moment is about my file server, which represents the
 biggest amount of data to back up and, together with the data of the
 intra/extranet software, the highest backup frequency.

 My difficulty is in setting retention periods for my pools. I read
 a bacula-dir.conf example and saw that it keeps the fulls for 1 year.
 I am about to keep my fulls for 28 weeks, my diffs for 4 weeks, my incs
 for 10 days.
 I will do 1 inc per day, 2 diffs per month and 3 fulls per year.
 How can I be sure that it will work? When I run my last inc, will it
 update the retention time on the previous incs, so that a restore would
 be possible 1 week after the last inc?
 As an example:
 - 1st inc (1 week retention)
 - 2nd inc (1 w)
 - 3rd inc (1 w)
 .
 - 8th inc (1 w)

 I have the same kind of question with many jobs of different levels.
 Does a higher-level job inherit the lower job's retention? Indeed, an
 inc needs the full and possibly a diff to be fully restored.
 Does Bacula come with a mechanism
 that keeps an inc or a diff from being there without the full (or the
 diff and previous incs) that it used as a base?

 My second question is more tied to my file server. How can I
 estimate and configure job retention safely, when I have no idea
 how my data will grow job after job? Is there a good method?

 I am using Bacula 5.0.3 on CentOS 5.5.

 I hope you understood my English and that you will have time to answer my
 questions.

 Thank you very much in advance.

 Hugo




Re: [Bacula-users] Speed of backups

2011-04-28 Thread Hugo Letemplier
Did you activate attribute spooling (and maybe data spooling too, since
you use LTO)?
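A sketch of the two directives in a Job resource (other directives elided); Spool Data in particular tends to matter for LTO drives, which slow down badly when they can't stream:

Job {
  ...
  Spool Attributes = yes   # batch catalog inserts instead of per-file round trips
  Spool Data = yes         # stage data on fast local disk, then stream it to tape
}

The spool area should sit on a disk fast enough to outrun the tape drive, otherwise spooling just moves the bottleneck.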

2011/4/28 Jason Voorhees jvoorhe...@gmail.com:
 Hi:

 On Thu, Apr 28, 2011 at 10:19 AM, John Drescher dresche...@gmail.com wrote:
 On Thu, Apr 28, 2011 at 11:08 AM, Jason Voorhees jvoorhe...@gmail.com 
 wrote:
 Hi:

 I'm running Bacula 5.0.3 on RHEL 6.0 x86_64 with an IBM TS3100 tape
 library with hardware compression enabled and software (Bacula)
 compression disabled, using LTO-5 tapes. I have a Gigabit Ethernet
 network, and iperf tests report a bandwidth of 112 MB/s.

 I'm not using any spooling configuration, and I'm not running concurrent
 jobs, just one. This is the configuration of my fileset:

 FileSet {
       Name = fset-qsrpsfs1
       Include {
               File = /etc
               File = /root
               File = /var/spool/cron
               File = /var/run/utmp
               File = /var/log
               File = /data
               Options {
                       signature=SHA1
                       #compression=GZIP
               }
       }
 }

 My backups were running with a minimum of 54 MB/s and a maximum of 79
 MB/s. Are these speeds normal for my scenario?

 Is the source a raid? Do you have many small files?

 John


 No, there is just a normal number of files from a shared folder of
 my fileserver: spreadsheets, documents, images, PDFs, just
 ordinary end-user data.

 --
 WhatsUp Gold - Download Free Network Management Software
 The most intuitive, comprehensive, and cost-effective network
 management toolset available today.  Delivers lowest initial
 acquisition cost and overall TCO of any competing solution.
 http://p.sf.net/sfu/whatsupgold-sd
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users


--
WhatsUp Gold - Download Free Network Management Software
The most intuitive, comprehensive, and cost-effective network 
management toolset available today.  Delivers lowest initial 
acquisition cost and overall TCO of any competing solution.
http://p.sf.net/sfu/whatsupgold-sd
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] About retention, and pruning.

2011-04-28 Thread Hugo Letemplier
2011/4/28 Hugo Letemplier hugo.let...@gmail.com:
 Hi,

 I am adding this to make the question more precise and to reopen the
 topic.
 I have read that chapter of the documentation many times, but I was never
 sure I understood it.

 As you know, when you do an inc you need a sequence of jobs: at
 least a full + maybe 1 diff + maybe many incs.
 It would be nice to have a safety option to make sure that a
 full, a diff or a previous inc won't be purged while there is still
 a valid job that depends on them.
 For the moment I have set 3 pools per kind of data (e.g. mail data):
 for the fulls I configured 0 (unlimited),
 for the diffs 2 weeks,
 for the incs 1 week.

 One problem is that the last inc of a sequence will be kept whereas
 the first inc of the same sequence is purged.

 When I do a diff, it should be like this:

 diff : ==============
 inc  :   ============
 inc  :     ==========
 inc  :       ========

 and not like that:

 diff : ========
 inc  :    ========
 inc  :       ========
 inc  :          ========

 Can you tell me whether Bacula, when it is about to purge a job, checks
 whether another job depends on the one it is going to purge?

 2011/3/24 Hugo Letemplier hugo.let...@gmail.com:
 Hi,

 Hi,
 I work for a company that needs good backup software, and I chose
 Bacula.
 I am now trying to configure it, but I have a doubt.
 I am setting up a disk-based backup server with manual LTO tape archiving.
 I have 4 classes of data to back up:
 - my mail server
 - my infrastructure (DNS, directory…)
 - my file server
 - my information services

 I created 3 pools per class (full - diff - inc), plus one more for
 archives, with an infinite retention.

 My question for the moment is about my file server, which represents the
 biggest amount of data to back up and, together with the data of the
 intra/extranet software, the highest backup frequency.

 My difficulty is in setting retention periods for my pools. I read
 a bacula-dir.conf example and saw that it keeps the fulls for 1 year.
 I am about to keep my fulls for 28 weeks, my diffs for 4 weeks, my incs
 for 10 days.
 I will do 1 inc per day, 2 diffs per month and 3 fulls per year.
 How can I be sure that it will work? When I run my last inc, will it
 update the retention time on the previous incs, so that a restore would
 be possible 1 week after the last inc?
 As an example:
 - 1st inc (1 week retention)
 - 2nd inc (1 w)
 - 3rd inc (1 w)
 .
 - 8th inc (1 w)

 I have the same kind of question with many jobs of different levels.
 Does a higher-level job inherit the lower job's retention? Indeed, an
 inc needs the full and possibly a diff to be fully restored.
 Does Bacula come with a mechanism
 that keeps an inc or a diff from being there without the full (or the
 diff and previous incs) that it used as a base?

 My second question is more tied to my file server. How can I
 estimate and configure job retention safely, when I have no idea
 how my data will grow job after job? Is there a good method?

 I am using Bacula 5.0.3 on CentOS 5.5.

 I hope you understood my English and that you will have time to answer my
 questions.

 Thank you very much in advance.

 Hugo



I have made some tests. Here is my configuration:

Job {
  Name = Test:retention
  Schedule = Test:retention
  Type = Backup
  Pool = default
  Full Backup Pool = TestRetention-full
  Incremental Backup Pool = TestRetention-incs
  Client = bacula-fd
  Priority = 10
  Fileset = Test:retention
  Spool Data = no
  Spool Attributes = yes
  Messages = Standard
}

Schedule {
  Name = Test:retention
  Run = Full hourly at 00:40
  Run = Incremental hourly at 00:50
  Run = Incremental hourly at 00:00
  Run = Incremental hourly at 00:10
  Run = Incremental hourly at 00:20
  Run = Incremental hourly at 00:30
}

Pool {
  Name = TestRetention-full
  Storage = Local
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Maximum Volume Bytes = 5G
  Maximum Volumes = 100
  Volume Retention = 1 hour
  File Retention = 1 hour
  Job Retention = 1 hour
  Use Volume Once = yes
}

Pool {
  Name = TestRetention-incs
  Storage = Local
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Maximum Volume Bytes = 5G
  Maximum Volumes = 100
  Volume Retention = 20 minutes
  Label Format = TestRetention-
  File Retention = 20 minutes
  Job Retention = 20 minutes
  Use Volume Once = yes
}

After the job ran many times, I have the following volume-to-job mapping:

Vol name   Level   Time
Test1      Full    15:50
324        Inc

Re: [Bacula-users] Fileset Difficulties -- help please

2011-04-22 Thread Hugo Letemplier
2011/4/21 Hugo Letemplier hugo.let...@gmail.com:
 Hi

 I have a Mac OS X file server where users' homes are stored.
 Until I have a powerful mail server that stores all the content of
 my emails, I keep all the Microsoft Entourage mailboxes on this file
 server.
 These mailboxes must be backed up separately from the rest of my
 server. Indeed, a mailbox can reach several gigabytes and it's an
 archive format (not a maildir), so at every backup a new big file needs
 to be saved.

 the path to my mailboxes is :

 /FileServer/Username/Documents/*/Office 2004//My Database

 I have to create 2 filesets: one that includes only the mail database,
 and one that contains the rest of the user home directory.
 Also, I have an Options section that selects some files that must not be
 compressed, but it doesn't work:

 FileSet {
        Name = FileServer
        Include {
             Options {

 regexfile='^.*\.(mp3|mp2|m4a|wma|flac|ogg|cda|aac|zip|lzh|rar|7z|gz|jar|bz|bz2|arj|deb|msi|pkg|tgz|ico|tif|gif|jpg|jpeg|png|gif|rpm|avi|vob|mkv|flv|mpg|mpeg|divx|wmv|avi|mp4|mov|qt)'
                    exclude = no
                    HFSPlus Support = yes
                    Signature = MD5
                    ACL Support = yes
             }
            Options {
                   compression = GZIP3
                   HFSPlus Support = yes
                   Signature = MD5
                   ACL Support = yes
            }
            File = /FileServer
    }
    Exclude {
        File = /FileServer/*/Office 2004
        File = /FileServer/*/Identit*s Office 2008/*My Database
        File = */.DS_Store
        File = */.Spotlight-V100
        File = */.TemporaryItems
    }
 }

 FileSet {
        Name = FileServer-Mailbox
        Include {
        Options {
            compression = GZIP6
            HFSPlus Support = yes
            Signature = MD5
            ACL Support = yes
            wild = /FileServer/*/Office 2004/*My Database
            wild = /FileServer/*/Identit*s Office 2008/*My Database
        }

        File = /FileServer
    }
    Exclude {
        File = */.DS_Store
        File = */.Spotlight-V100
        File = */.TemporaryItems
    }
 }

 I have tried many times with regex and wild but it never did what I wanted.

 Have you got an idea ?

 Thanks

 Hugo


Hi

Let me clarify my problem.

I succeeded with the fileset that excludes the mailboxes; I got this:

FileSet {
Name = FileServer
 Ignore Fileset Changes = yes
Include {
  Options {
  regexfile = ^.*\.(mp3|mp2|m4a|wma|flac|ogg|cda|aac|zip|lzh|rar|7z|gz|jar|bz|bz2|arj|deb|msi|pkg|tgz|ico|tif|gif|jpg|jpeg|png|gif|rpm|avi|vob|mkv|flv|mpg|mpeg|divx|wmv|avi|mp4|mov|qt)
exclude = no
HFSPlus Support = yes
Signature = MD5
ACL Support = yes
xattrsupport = yes
}
Options {
compression = GZIP3
exclude = no
   HFSPlus Support = yes
Signature = MD5
ACL Support = yes
xattrsupport = yes
}
Options {
Signature = MD5
ACL Support = yes
HFSPlus Support = yes
xattrsupport = yes
IgnoreCase = yes
regex = /FileServer/.*/Utilisateurs Office 2004/Identit.* principale/.*
regex = /FileServer/.*/Identit.*s Office 2008/Identit.* principale/.*
exclude = yes
}
File = /FileServer
}
Exclude {
File = */.DS_Store
File = */.Spotlight-V100
File = */.TemporaryItems
}
}


I still have a problem writing the other one. :-(
I am still failing to include only the files that match the following
regexes:
regex = /FileServer/.*/Utilisateurs Office 2004/Identit.* principale/.*
regex = /FileServer/.*/Identit.*s Office 2008/Identit.* principale/.*
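
One pattern that is supposed to achieve "include only what matches" is to
follow the matching Options block with a catch-all exclude block: the
first Options block that matches a file decides its fate, and files
matched by no block are included by default, so the catch-all keeps
everything else out. An untested sketch along those lines:

FileSet {
    Name = FileServer-Mailbox
    Include {
        Options {
            compression = GZIP6
            HFSPlus Support = yes
            Signature = MD5
            ACL Support = yes
            # keep only the mailbox files
            regex = "/FileServer/.*/Utilisateurs Office 2004/Identit.* principale/.*"
            regex = "/FileServer/.*/Identit.*s Office 2008/Identit.* principale/.*"
        }
        Options {
            # catch-all: any file not matched above is excluded
            regexfile = ".*"
            exclude = yes
        }
        File = /FileServer
    }
}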

Thank you



[Bacula-users] Fileset Difficulties -- help please

2011-04-21 Thread Hugo Letemplier
Hi

I have a Mac OS X file server where users' homes are stored.
Until I have a powerful mail server that stores all the content of
my emails, I keep all the Microsoft Entourage mailboxes on this file
server.
These mailboxes must be backed up separately from the rest of my
server. Indeed, a mailbox can reach several gigabytes and it's an
archive format (not a maildir), so at every backup a new big file needs
to be saved.

the path to my mailboxes is :

/FileServer/Username/Documents/*/Office 2004//My Database

I have to create 2 filesets: one that includes only the mail database,
and one that contains the rest of the user home directory.
Also, I have an Options section that selects some files that must not be
compressed, but it doesn't work:

FileSet {
Name = FileServer
Include {
 Options {

regexfile='^.*\.(mp3|mp2|m4a|wma|flac|ogg|cda|aac|zip|lzh|rar|7z|gz|jar|bz|bz2|arj|deb|msi|pkg|tgz|ico|tif|gif|jpg|jpeg|png|gif|rpm|avi|vob|mkv|flv|mpg|mpeg|divx|wmv|avi|mp4|mov|qt)'
exclude = no
HFSPlus Support = yes
Signature = MD5
ACL Support = yes
 }
Options {
   compression = GZIP3
   HFSPlus Support = yes
   Signature = MD5
   ACL Support = yes
}
File = /FileServer
}
Exclude {
File = /FileServer/*/Office 2004
File = /FileServer/*/Identit*s Office 2008/*My Database
File = */.DS_Store
File = */.Spotlight-V100
File = */.TemporaryItems
}
}

FileSet {
Name = FileServer-Mailbox
Include {
Options {
compression = GZIP6
HFSPlus Support = yes
Signature = MD5
ACL Support = yes
wild = /FileServer/*/Office 2004/*My Database
wild = /FileServer/*/Identit*s Office 2008/*My Database
}

File = /FileServer
}
Exclude {
File = */.DS_Store
File = */.Spotlight-V100
File = */.TemporaryItems
}
}

I have tried many times with regex and wild but it never did what I wanted.

Have you got an idea ?

Thanks

Hugo



Re: [Bacula-users] Feature idea : Advanced status of a current job;

2011-04-14 Thread Hugo Letemplier
2011/4/11 Gavin McCullagh gavin.mccull...@gcd.ie:
 Hi,

 On Mon, 11 Apr 2011, Hugo Letemplier wrote:

 I imagine a command like status job  jobid=

 I presume you've looked at status client=

 It does much of what you want (current job duration, data transferred,
 rate, num files, current file), but without the predictive information
 you're looking for (time/data remaining).  In principle, if you or bacula
 ran an estimate beforehand, you could probably work out an estimated time
 remaining but I don't think that feature is present.  Estimates are only
 available for full backups.

 *estimate job=CeartgoleorBackups-Job
 Using Catalog MyCatalog
 Connecting to Client ceartgoleor-fd at ceartgoleor.:9102
 2000 OK estimate files=223,128 bytes=9,724,976,371

 Gavin

OK, thanks, I will use that for the moment. But in my opinion this
information would be better placed in the director status.

Hugo



[Bacula-users] Feature idea : Advanced status of a current job;

2011-04-11 Thread Hugo Letemplier
Hi
For many days I have been thinking that it would be nice to have a
feature that gives the user an idea of the speed at which a job is
running.

I imagine something that reports during the job: network rates, job
progress, files backed up, bytes copied?

Also I use Webacula, and when



Re: [Bacula-users] Feature idea : Advanced status of a current job;

2011-04-11 Thread Hugo Letemplier
Sorry, the message was sent by mistake while I was still writing :(

2011/4/11 Hugo Letemplier hugo.let...@gmail.com:
 Hi
 For many days I have been thinking that it would be nice to have a
 feature that gives the user an idea of the speed at which a job is
 running.

 I imagine something that reports during the job: network rates, job
 progress, files backed up, bytes copied?

 Also I use Webacula, and when I look at the timeline the currently
 running job can't be viewed.


I imagine a command like status job jobid= showing:
- data already backed up
- an estimate of the remaining data (or time)
- elapsed time
- network data transfer
- data rate at two levels (with/without compression)
and other information that would help estimate the duration of a job.
Lots of backup software has a graphical progress bar. When you
start a job in Bacula you don't have statistics until the job has
finished.

Maybe I missed a feature like that which already exists?

I think that a lot of people would be interested in that kind of feature.

Thank you

Hugo
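
For reference, a minimal bconsole session showing the closest existing
commands (the client name is hypothetical):

*status dir
*status client=StationMac-fd

The client status reports elapsed time, files and bytes processed so far
and the file currently being backed up, but, as noted above, nothing
predictive about what remains.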



Re: [Bacula-users] Feature idea : Advanced status of a current job;

2011-04-11 Thread Hugo Letemplier
2011/4/11 Edgars Mazurs edgars.maz...@lattelecom.lv:
 Hi

 Why not use Bweb? It does and shows exactly what you want.

 Edgars

 -Original Message-
 From: Hugo Letemplier [mailto:hugo.let...@gmail.com]
 Sent: Monday, April 11, 2011 5:55 PM
 To: bacula-users
 Subject: [Bacula-users] Feature idea : Advanced status of a current job;

 Hi
 Since many days I am thinking thats it could be nice to have some
 feature that enable the user to have an idea of the speed that the job
 is running.

 I imagine something that report during the job : Network rates, Job
 advancement, Files backed up, Bytes Copied ?

 Also I use Webacula, and when


I had lots of problems when implementing bweb. Moreover, Webacula seems
to be the more active web interface project for Bacula.
Maybe a sort of fusion between the web projects would be cool? I feel
like these web projects are working independently, each one with its
advantages and drawbacks.

Also, I often use bconsole; I think it's better to include such a command
in the core of the software rather than in the web interface.

: )

Hugo



Re: [Bacula-users] What happen if I change the client of a job ? Multi node virtualization cluster

2011-03-31 Thread Hugo Letemplier
2011/3/30 John Drescher dresche...@gmail.com:
 My first question is: how can I tell Bacula to choose the client
 dynamically according to the result of a script?


 You can script that by echoing commands to bconsole. Then execute your
 script as a cron job.

 My second question is: can I safely/often change the client of a job
 without Bacula misbehaving? Will a client change create a break in my
 backups? (e.g. upgrading an incremental to a full)


 I believe you will be fine to do this. Although it is not something I
 have done often.

 John


Hi

I just tried the same job on two clients, using the same fileset:

=> ran once on the first client
Changed the client in the conf
=> ran once on the second client


Bacula upgraded my incremental job to a full. It would be nice to limit
the client's role to being just a way of accessing the content.
Is there a way to force this, like you can with 'Ignore Fileset Changes'?

I don't install the file daemon on the VM to be backed up because I need
to use LVM snapshots. LVM management is done directly on the physical
host, not on the client.
Then I use the strippath option of the fileset in order to back up the
snapshot without the full path to its mount point.

***
For the moment, I will just run a before-job script that makes the job
fail when it would run on the wrong client (e.g. the machine has been
migrated to another node).
In my virtual environment, virtual machines are only migrated if there
is a problem, so most of the time they stay on their original node, and
it isn't mandatory to assign the client dynamically. But to anticipate
future evolutions of my cluster's architecture, dynamic assignment would
be a good way to back up my VMs even when they aren't on their original node.
***

The ideal thing would be to use a variable for the client name; to set
this variable to the right value (node name) with a before-job script;
and then to mount the snapshot and execute the job on the right node
(client/fd) of the virtualization cluster.

I use Proxmox with DRBD for my cluster management.

Does anyone have a solution ?

thanks

Hugo
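
For what it's worth, a minimal sketch of the echo-to-bconsole approach
John suggested, run from cron; the node-lookup command, job and client
names are all hypothetical stand-ins for your own tooling:

#!/bin/sh
# Ask the cluster where VM 101 currently runs; 'vm_node 101' stands in
# for whatever Proxmox/your scripts provide (it prints e.g. "node2").
NODE=$(vm_node 101)
# Tell the Director to run the job on that node's file daemon.
echo "run job=BackupVM101 client=${NODE}-fd level=Incremental yes" | bconsole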



Re: [Bacula-users] What happen if I change the client of a job ? Multi node virtualization cluster

2011-03-30 Thread Hugo Letemplier
Hi,

I have found no solution for this.

What I'm thinking of is a script on my virtualization cluster that
returns the Bacula client that the job should use.



My first question is: how can I tell Bacula to choose the client
dynamically according to the result of a script?

My second question is: can I safely/often change the client of a job
without Bacula misbehaving? Will a client change create a break in my
backups? (e.g. upgrading an incremental to a full)

If someone has an idea, that would be very cool!

Thanks
Hugo


2011/3/10 Hugo Letemplier hugo.let...@gmail.com:
 Hi,

 I am using a KVM and OpenVZ virtualization cluster and I am going to
 do my backups directly from the host instead of from each virtual
 machine.
 This improves my backup performance by reducing the virtual layers, and
 it also gives me an advantage in managing LVM snapshots in my pre- and
 post-backup scripts.

 The difficulty is that VMs may be migrated across many nodes of the
 cluster.

 For the moment I have one File Daemon per node.

 I thought I could change the client of a job after a migration.
 Nevertheless, I have some doubts:
 - It will be complicated to rewrite the dir configuration after each
 migration, with the obligation to reload the conf.
 - Is there a means to do that via bconsole?
 - I think that changing the client will break the continuity of a job,
 so it will create a new full, as when the system detects a fileset
 change.

 - Do you have another approach to the problem?

 Thanks

 Hugo




[Bacula-users] About retention, and pruning.

2011-03-24 Thread Hugo Letemplier
Hi,

I work for an enterprise that needs good backup software, and I have
chosen Bacula.
I am now trying to configure it, but I have a doubt.
I am setting up a disk-based backup server with manual LTO tape archiving.
I have 4 classes of data to back up:
-My mail server
-My infrastructure (DNS, Directory…)
-My file server
-My information services

I created 3 pools per class (Full - Diff - Inc ) plus one more for
Archive with an infinite retention.

My question, for the moment, is about my file server, which represents
the biggest amount of data to back up and has the highest backup
frequency, together with the data of the intra/extranet software.

My difficulty is in setting retention periods for my pools. I read a
bacula-dir.conf example and saw that it keeps the full for one year.
I am about to keep my fulls for 28 weeks, my diffs for 4 weeks and my
incs for 10 days.
I will do 1 inc per day, 2 diffs per month and 3 fulls per year.
How can I be sure that it will work? If I run my last inc, will it
update the retention time on the previous incs so that a restore is
still possible 1 week after the last inc?
as an example:
- 1st inc (1 week retention)
- 2nd inc (1 w)
- 3rd inc (1 w)
.
- 8th inc (1 w)

I have the same kind of question with many jobs of different levels.
Does a higher-level job inherit the lower job's retention? Indeed, an
inc needs the full, and possibly a diff, to be fully restored.
Does Bacula come with a mechanism that prevents an inc or a diff from
being there without the full (or the diff, and previous incs) that it
used as a base?

My second question is more specific to my file server. How can I
estimate and configure job retention safely when I have no idea how my
data will grow job after job? Is there a good method?

I am using bacula 5.0.3 on CentOS 5.5

I hope you can understand my English and will have time to answer my
questions.

Thank you very much in advance.

Hugo
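
As a starting point, the numbers above map onto pool directives roughly
like this; a sketch only, with hypothetical names, not a tested
configuration:

Pool {
    Name = FileServer-Inc
    Pool Type = Backup
    AutoPrune = yes
    Recycle = yes
    Volume Retention = 10 days
    Job Retention = 10 days
    File Retention = 10 days
}
# FileServer-Diff and FileServer-Full would look the same with
# 4 weeks and 28 weeks respectively.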



[Bacula-users] What happen if I change the client of a job ? Multi node virtualization cluster

2011-03-10 Thread Hugo Letemplier
Hi,

I am using a KVM and OpenVZ virtualization cluster and I am going to
do my backups directly from the host instead of from each virtual
machine.
This improves my backup performance by reducing the virtual layers, and
it also gives me an advantage in managing LVM snapshots in my pre- and
post-backup scripts.

The difficulty is that VMs may be migrated across many nodes of the
cluster.

For the moment I have one File Daemon per node.

I thought I could change the client of a job after a migration.
Nevertheless, I have some doubts:
- It will be complicated to rewrite the dir configuration after each
migration, with the obligation to reload the conf.
- Is there a means to do that via bconsole?
- I think that changing the client will break the continuity of a job,
so it will create a new full, as when the system detects a fileset
change.

- Do you have another approach to the problem?

Thanks

Hugo



[Bacula-users] Compilation and deployment for MacOSX / Standelone bacula fd

2011-02-23 Thread Hugo Letemplier
Hi
I am trying to compile bacula-fd on Mac OS X.
I tried to create an mpkg with MacPorts, e.g.:
sudo port mpkg bacula

The advantage is that an mpkg lets you be sure all libraries are up to
date, so you have no problem running the FD.
However, this command causes issues when MacPorts is already installed
on the target system (it overwrites other libraries).

I want the generated mpkg to extract the Bacula files and libraries into
its own path, e.g. something like /usr/local/MyBaculaFD.
This directory should contain /bin, /sbin, /etc, /lib (containing the
dependencies) and /working.
I need the Bacula installation to interact as little as possible with
the client.
I previously had some problems when deploying bacula-fd on various
different systems, so creating a standalone package would be a nice
solution.

Do you have an idea how to achieve this, with or without MacPorts?


Thanks

Hugo



[Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Hugo Letemplier
Hi,

I am running bacula 5.0.3 on CentOS 5.6.

When I run a simple job I can get rates between 20 and 40 MB/s over a
Gigabit network, but when I run this job with client encryption
and compression everything slows down to below 5 MB/s and sometimes
under 500 KB/s.
Generally I test with a backup job on a full system and then do an inc.
Both are very slow.
What should I check?
Maybe it's my fileset?
I used both Mac OS X and CentOS clients, with equivalent slowness.
Has anybody had this problem?
Maybe it comes from the zlib or OpenSSL that I use?
I am practically sure it's coming from the File Daemon.
I use GZIP compression at level 3.

Thank you very much

Hugo

I attached the config I used to generate the RPM:
#!/bin/sh
cat << __EOC__
$ ./configure  '--host=i686-redhat-linux-gnu'
'--build=i686-redhat-linux-gnu' '--target=i386-redhat-linux'
'--program-prefix=' '--exec-prefix=/usr' '--bindir=/usr/bin'
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include'
'--libdir=/usr/lib' '--libexecdir=/usr/libexec' '--localstatedir=/var'
'--sharedstatedir=/usr/com' '--infodir=/usr/share/info'
'--prefix=/usr' '--sbindir=/usr/sbin' '--sysconfdir=/etc/bacula'
'--mandir=/usr/share/man' '--with-scriptdir=/usr/lib/bacula'
'--with-working-dir=/var/lib/bacula'
'--with-plugindir=/usr/lib/bacula' '--with-pid-dir=/var/run'
'--with-subsys-dir=/var/lock/subsys' '--enable-smartalloc'
'--disable-gnome' '--disable-bwx-console' '--disable-tray-monitor'
'--disable-conio' '--enable-readline' '--with-postgresql'
'--disable-bat' '--with-dir-user=bacula' '--with-dir-group=bacula'
'--with-sd-user=bacula' '--with-sd-group=disk' '--with-fd-user=root'
'--with-fd-group=bacula'
'--with-dir-password=XXX_REPLACE_WITH_DIRECTOR_PASSWORD_XXX'
'--with-fd-password=XXX_REPLACE_WITH_CLIENT_PASSWORD_XXX'
'--with-sd-password=XXX_REPLACE_WITH_STORAGE_PASSWORD_XXX'
'--with-mon-dir-password=XXX_REPLACE_WITH_DIRECTOR_MONITOR_PASSWORD_XXX'
'--with-mon-fd-password=XXX_REPLACE_WITH_CLIENT_MONITOR_PASSWORD_XXX'
'--with-mon-sd-password=XXX_REPLACE_WITH_STORAGE_MONITOR_PASSWORD_XXX'
'--with-openssl' 'build_alias=i686-redhat-linux-gnu'
'host_alias=i686-redhat-linux-gnu' 'target_alias=i386-redhat-linux'
'CFLAGS=-O2 -g -m32 -march=i386 -mtune=generic
-fasynchronous-unwind-tables' 'CXXFLAGS=-O2 -g -m32 -march=i386
-mtune=generic -fasynchronous-unwind-tables'

Configuration on Thu Oct 14 13:33:48 CEST 2010:

   Host:i686-redhat-linux-gnu -- redhat
   Bacula version:  Bacula 5.0.3 (30 August 2010)
   Source code location:.
   Install binaries:/usr/sbin
   Install libraries:   /usr/lib
   Install config files:/etc/bacula
   Scripts directory:   /usr/lib/bacula
   Archive directory:   /tmp
   Working directory:   /var/lib/bacula
   PID directory:   /var/run
   Subsys directory:/var/lock/subsys
   Man directory:   /usr/share/man
   Data directory:  /usr/share
   Plugin directory:/usr/lib/bacula
   C Compiler:  gcc 4.1.2
   C++ Compiler:/usr/bin/g++ 4.1.2
   Compiler flags:   -O2 -g -m32 -march=i386 -mtune=generic
-fasynchronous-unwind-tables -fno-strict-aliasing -fno-exceptions
-fno-rtti
   Linker flags:
   Libraries:   -lpthread -ldl
   Statically Linked Tools: no
   Statically Linked FD:no
   Statically Linked SD:no
   Statically Linked DIR:   no
   Statically Linked CONS:  no
   Database type:   PostgreSQL
   Database port:   
   Database lib:-L/usr/lib -lpq -lcrypt
   Database name:   bacula
   Database user:   bacula

   Job Output Email:root@localhost
   Traceback Email: root@localhost
   SMTP Host Address:   localhost

   Director Port:   9101
   File daemon Port:9102
   Storage daemon Port: 9103

   Director User:   bacula
   Director Group:  bacula
   Storage Daemon User: bacula
   Storage DaemonGroup: disk
   File Daemon User:root
   File Daemon Group:   bacula

   SQL binaries Directory   /usr/bin

   Large file support:  yes
   Bacula conio support:no -lreadline -lncurses
   readline support:yes
   TCP Wrappers support:no
   TLS support: yes
   Encryption support:  yes
   ZLIB support:yes
   enable-smartalloc:   yes
   enable-lockmgr:  no
   bat support: no
   enable-gnome:no
   enable-bwx-console:  no
   enable-tray-monitor: no
   client-only: no
   build-dird:  yes
   build-stored:yes
   Plugin support:  yes
   AFS support: no
   ACL support: yes
   XATTR support:   yes
   Python support:  no
   Batch insert enabled:yes


__EOC__


Here is my file set :
FileSet {
Name = MacFull
Include {
Options {
HFSPlus Support = yes
Signature 

Re: [Bacula-users] Very low performance with compression and encryption !

2011-01-20 Thread Hugo Letemplier
2011/1/20 Paul Mather p...@gromit.dlib.vt.edu:
 On Jan 20, 2011, at 11:01 AM, John Drescher wrote:

 This is normal. If you want fast compression do not use software
 compression and use a tape drive with HW compression like LTO drives.

 John
 Not really an option for file/disk devices though.

 I've been tempted to experiment with BTRFS using LZO or standard zlib
 compression for storing the volumes and see how the performance compares
 to having bacula-fd do the compression before sending - I have a
 suspicion the former might be better..


 Doing the compression at the filesystem level is an idea I have wanted
 to try for several years. Hopefully one of the filesystems that
 support this becomes stable soon.

 I've been using ZFS with a compression-enabled fileset for a while now under 
 FreeBSD.  It is transparent and reliable.  Looking just now, I'm not getting 
 great compression ratios for my backup data: 1.09x.  I am using the 
 speed-oriented compression algorithm on this fileset, though, because the 
 hardware is relatively puny.  (It is a Bacula test bed.)  Probably I'd get 
 better compression if I enabled one of the GZIP levels.

 Cheers,

 Paul.



 --
 Protect Your Site and Customers from Malware Attacks
 Learn about various malware tactics and how to avoid them. Understand
 malware threats, the impact they can have on your business, and how you
 can protect your company and customers by using code signing.
 http://p.sf.net/sfu/oracle-sfdevnl
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users


I will try some things with scheduling over the next few days.
I will come back next week with the results.
Cheers,
Hugo



Re: [Bacula-users] Job is waiting on max Storage jobs although I set Max concurrent job to 10

2011-01-13 Thread Hugo Letemplier
2011/1/12 Silver Salonen sil...@ultrasoft.ee:
 On Wednesday 12 January 2011 13:56:44 Hugo Letemplier wrote:
 2011/1/8 Silver Salonen sil...@ultrasoft.ee:
  On Saturday 08 January 2011 14:07:01 Hugo Letemplier wrote:
  On 6 Jan 2011, at 21:17, Silver Salonen wrote:
 
   On Thursday 06 January 2011 22:14:16 John Drescher wrote:
   Only one job can run on the same device. If you want another job to 
   run into the save folder, you have to clone the device resource under 
   some other name. Eg:
  
  
   That is wrong. Multiple jobs can write to the same tape or disk volume
   at the same time. Most likely your problem is that a single bacula
   storage device can only load 1 volume and thus 1 pool at a time even
   for disk volumes.
  
   John
  
   Yeah, sry, I forgot that. I wonder who and why writes multiple jobs 
   into the same volume with file-type volumes :)
  
   --
   Silver
 
  So I will need to create many pools?
  But if I use one pool for many jobs, will I need to create many
  devices?
 
  Hugo
 
  Many pools you need mostly because you want your backups of different 
  levels to be labeled differently and have different retention time for 
  them.
 
  For file-type volumes it makes sense to use one job per volume, and
  thus you'll need to create as many devices as the number of concurrent
  jobs you want to run.
 
  PS. In mailing lists, please post at the bottom of the thread.
 
  --
  Silver
 

 Do I need to use a virtual autochanger? In fact I don't care about
 which device is used per pool.
 E.g.: I have many file devices in order to access many file volumes
 in one SD directory. I would like Bacula to automatically select the
 first available device in order to access the file volume.

 Here are my File devices in sd conf

 Device {
   Name = DataTest.0
   Media Type = File
   Archive Device = /BaculaData/Test/
   LabelMedia = yes;                   # lets Bacula label unlabeled media
   Random Access = Yes;
   AutomaticMount = yes;               # when device opened, read it
   RemovableMedia = no;
   AlwaysOpen = Yes;
   Maximum Volume Size = 4G
 }

 Device {
   Name = DataTest.1
   Media Type = File
   Archive Device = /BaculaData/Test/
   LabelMedia = yes;                   # lets Bacula label unlabeled media
   Random Access = Yes;
   AutomaticMount = yes;               # when device opened, read it
   RemovableMedia = no;
   AlwaysOpen = Yes;
   Maximum Volume Size = 4G
 }

 imagine 10 Devices named DataTest.X

 Here are my pools in the Dir config. One, named Sauvegarde, is for
 making regular backups of a few file servers. The other is to export
 to an LTO device for as long as possible.

 Pool {
       Name = Test
       Storage = DataTest                  # <= My problem is here
       Pool Type = Backup
       Recycle = yes                       # Bacula can automatically recycle Volumes
       AutoPrune = yes                     # Prune expired volumes
       Volume Retention = 7 days           # Short time to make tests
       Label Format = Bacula-Test-
       Maximum Volume Bytes = 4G           # Limit Volume size to something reasonable
       Maximum Volumes = 256               # Limit number of Volumes in Pool
       Next Pool = ArchivageLTO
 }

 Pool {                                    # Pool for archiving to LTO tape
       Name = ArchivageLTO
       Storage = Tape                      # <= My problem is here
       Pool Type = Backup
       Label Format = Bacula-LTO-
       Recycle = no
       AutoPrune = no
       Maximum Volumes = 0                 # unlimited number of Volumes in Pool
 }

 <= My problem is here: I can only specify 1 device.

 You said one pool per job, but I will have many jobs with the same
 retention time and space limit.
 I want to back up my file servers for as long as possible, with the
 only limit being the space on the disk.

 I was going to make pools according to the kind of data and the value
 of the data stored in them.
 E.g.: the financial and administrative data of my company in a
 different pool than the web server, the file server or the mail
 server.

 My LTO pool is only the destination of a copy job; in this case it is
 an archive that I may need maybe 20 years later, so I configure it
 with an unlimited retention time.

 Do I really need to make one pool per job?

 As I have understood it, the problem will occur when you are going to rotate
 your backups (eventually you WILL do it, no?). I have separated different
 data into separate jobs (e.g. configuration, /var, /home, etc.) and they all
 have different retention periods and/or different numbers of backups

Re: [Bacula-users] Job is waiting on max Storage jobs although I set Max concurrent job to 10

2011-01-12 Thread Hugo Letemplier
2011/1/8 Silver Salonen sil...@ultrasoft.ee:
 On Saturday 08 January 2011 14:07:01 Hugo Letemplier wrote:
 On 6 Jan 2011, at 21:17, Silver Salonen wrote:

  On Thursday 06 January 2011 22:14:16 John Drescher wrote:
  Only one job can run on the same device. If you want another job to run 
  into the save folder, you have to clone the device resource under some 
  other name. Eg:
 
 
  That is wrong. Multiple jobs can write to the same tape or disk volume
  at the same time. Most likely your problem is that a single bacula
  storage device can only load 1 volume and thus 1 pool at a time even
  for disk volumes.
 
  John
 
  Yeah, sry, I forgot that. I wonder who and why writes multiple jobs into 
  the same volume with file-type volumes :)
 
  --
  Silver

 So I will need to create many pools?
 But if I use one pool for many jobs, will I need to create many devices?

 Hugo

 Many pools you need mostly because you want your backups of different levels 
 to be labeled differently and have different retention time for them.

 For file-type volumes it makes sense to use one job per volume, and thus
 you'll need to create as many devices as the number of concurrent jobs you
 want to run.

 PS. In mailing lists, please post at the bottom of the thread.

 --
 Silver


Do I need to use a virtual autochanger? In fact I don't care about
which device is used per pool.
E.g.: I have many file devices in order to access many file volumes
in one SD directory. I would like Bacula to automatically select the
first available device in order to access the file volume.

Here are my File devices in sd conf

Device {
  Name = DataTest.0
  Media Type = File
  Archive Device = /BaculaData/Test/
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = Yes;
  Maximum Volume Size = 4G
}

Device {
  Name = DataTest.1
  Media Type = File
  Archive Device = /BaculaData/Test/
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = Yes;
  Maximum Volume Size = 4G
}

imagine 10 Devices named DataTest.X

Here are my pools in the Dir config. One, named Sauvegarde, is for making
regular backups of a few file servers. The other is to export to an LTO
device for as long as possible.

Pool {
    Name = Test
    Storage = DataTest                  # <= My problem is here
    Pool Type = Backup
    Recycle = yes                       # Bacula can automatically recycle Volumes
    AutoPrune = yes                     # Prune expired volumes
    Volume Retention = 7 days           # Short time to make tests
    Label Format = Bacula-Test-
    Maximum Volume Bytes = 4G           # Limit Volume size to something reasonable
    Maximum Volumes = 256               # Limit number of Volumes in Pool
    Next Pool = ArchivageLTO
}

Pool {                                  # Pool for archiving to LTO tape
    Name = ArchivageLTO
    Storage = Tape                      # <= My problem is here
    Pool Type = Backup
    Label Format = Bacula-LTO-
    Recycle = no
    AutoPrune = no
    Maximum Volumes = 0                 # unlimited number of Volumes in Pool
}

<= My problem is here: I can only specify 1 device.

You said one pool per job, but I will have many jobs with the same
retention time and space limit.
I want to back up my file servers for as long as possible, with the
only limit being the space on the disk.

I was going to make pools according to the kind of data and the value
of the data stored in them.
E.g.: the financial and administrative data of my company in a
different pool than the web server, the file server or the mail
server.

My LTO pool is only the destination of a copy job; in this case it is
an archive that I may need maybe 20 years later, so I configure it
with an unlimited retention time.

Do I really need to make one pool per job?


Thanks

Hugo
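
One commonly used answer to the device-selection part is a virtual disk
autochanger in the SD, which groups the DataTest.X devices so that any
free one can be picked. An untested sketch, following the usual
disk-autochanger examples (the dummy changer command/device are part of
that pattern; address and password are placeholders):

# bacula-sd.conf
Autochanger {
    Name = DataTestChanger
    Device = DataTest.0, DataTest.1     # list all DataTest.X devices here
    Changer Command = ""
    Changer Device = /dev/null
}
# each listed Device resource should also carry: Autochanger = yes

# bacula-dir.conf
Storage {
    Name = DataTest
    Address = bacula.example.com
    SDPort = 9103
    Password = "secret"
    Device = DataTestChanger
    Media Type = File
    Autochanger = yes
    Maximum Concurrent Jobs = 10
}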

Re: [Bacula-users] Job is waiting on max Storage jobs although I set Max concurrent job to 10

2011-01-08 Thread Hugo Letemplier
So I will need to create many pools?
But if I use one pool for many jobs, will I need to create many devices?

Hugo

On 6 Jan 2011, at 21:17, Silver Salonen wrote:

 On Thursday 06 January 2011 22:14:16 John Drescher wrote:
 Only one job can run on the same device. If you want another job to run 
 into the save folder, you have to clone the device resource under some 
 other name. Eg:
 
 
 That is wrong. Multiple jobs can write to the same tape or disk volume
 at the same time. Most likely your problem is that a single bacula
 storage device can only load 1 volume and thus 1 pool at a time even
 for disk volumes.
 
 John
 
 Yeah, sry, I forgot that. I wonder who and why writes multiple jobs into the 
 same volume with file-type volumes :)
 
 -- 
 Silver
 




[Bacula-users] Job is waiting on max Storage jobs although I set Max concurrent job to 10

2011-01-06 Thread Hugo Letemplier
Hi

I am running several jobs on the same storage, on a file device, but they
always end up waiting on max Storage jobs, even though I set Maximum
Concurrent Jobs to 10 in the storage daemon configuration.

On the storage daemon:

Device {
  Name = DataTest
  Media Type = File
  Archive Device = /BaculaData/Test/
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = Yes;
  Maximum Volume Size = 4G
}


On the director:

Storage {
Name = DataTest
Address = bacula..xxx
SDPort = 9103
Password = zFsrh6y
Device = DataTest
Media Type = File
}

Did I miss something?

Hugo
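
A likely culprit: Maximum Concurrent Jobs defaults to 1 in the Director's
own Storage resource, independently of the storage daemon's setting. A
sketch of the places the limit usually needs raising (values are only
examples):

# bacula-dir.conf
Director {
    ...
    Maximum Concurrent Jobs = 20
}
Storage {
    Name = DataTest
    ...
    Maximum Concurrent Jobs = 10    # defaults to 1 if unset
}

# bacula-sd.conf
Storage {
    Name = bacula-sd
    ...
    Maximum Concurrent Jobs = 20
}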



[Bacula-users] Performance with Mac OSX, Encryption and Gzip3

2010-12-20 Thread Hugo Letemplier
Hi

I am using Bacula inside a CentOS OpenVZ container on a virtualization
cluster. I tried with and without compression and/or encryption, and the
rates vary widely.
I am backing up a KVM client on the same nodes of the cluster and an
external Mac client.

I can get rates from more than 30 or 40 MB/s down to a few hundred KB/s.

Here are some stats :

Backing up the Mac client:

09-déc 10:48 bacula-dir JobId 42: Bacula bacula-dir 5.0.3 (30Aug10):
09-déc-2010 10:48:42
  Build OS:   i686-redhat-linux-gnu redhat
  JobId:  42
  Job:BackupStationMac.2010-12-08_18.04.32_07
  Backup Level:   Full
  Client: StationMac 5.0.3 (04Aug10)
i386-apple-darwin10.4.0,osx,10.4.0
  FileSet:Test 2010-12-08 17:53:46
  Pool:   Test (From Job resource)
  Catalog:MyCatalog (From Client resource)
  Storage:DataTest (From Pool resource)
  Scheduled time: 08-déc-2010 18:04:28
  Start time: 08-déc-2010 18:04:34
  End time:   09-déc-2010 10:48:42
  Elapsed time:   16 hours 44 mins 8 secs
  Priority:   10
  FD Files Written:   636,312
  SD Files Written:   636,312
  FD Bytes Written:   170,577,925,504 (170.5 GB)
  SD Bytes Written:   170,856,886,548 (170.8 GB)
  Rate:   2831.3 KB/s
  Software Compression:   25.7 %
  VSS:no
  Encryption: yes
  Accurate:   yes
  Volume name(s):   => about 40 4 GB volumes
  Volume Session Id:  11
  Volume Session Time:1291809192
  Last Volume Bytes:  4,007,665,932 (4.007 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK


Backing up the KVM :

 I ran many tests on the KVM client with various parameters and kinds of files:
 1) GZIP3 - Encryption - 1 GB file from /dev/urandom
   => Rate about 7828.8 KB/s
 2) GZIP3 - Encryption - 1 GB file from /dev/zero
   => Rate: 113.1 KB/s
   => Software Compression: 99.5 %
   => Most of the time was spent compressing the file
 3) No GZIP3 - Encryption - 1 GB file from /dev/urandom
   => Rate about 19785.7 KB/s
 4) Lots of files of various sizes -- a typical base install of CentOS
and some apps, about 54,000 files
   GZIP3 + Encryption => 901.2 KB/s
   No GZIP3 + No Encryption => 31517.3 KB/s

I also tried some copy jobs from hard drive to an LTO3 device:
  => about 7 MB/s; I think that's correct

In which areas can I improve these rates?
I already have a list of files not to compress:
wildfile = *.rar
wildfile = *.zip
wildfile = *.gz
...
Or to exclude:
wilddir = .TemporaryItems
wilddir = .Spotlight-V100
...

In addition, I am looking for a way to avoid compressing tiny files.
Is this possible?

Also, which compilation options could improve my rates, which are
horrible on small files? I used the default config of my RPMs and maybe
I omitted something there.

In order to move the Bacula implementation into a testing/pre-production
phase I must reach rates of at least 2 or 3 MB/s for every job. I want
to validate that kind of stats.

Can you compare with your stats?

The Mac machine is a Mac mini from the last PowerPC generation.

The cluster node is an 8 x Intel(R) Xeon(R) CPU E5620 @ 2.40GHz.
The virtual machines (OpenVZ: CentOS Bacula server fd+sd+dir+DB, and a
KVM: CentOS Bacula fd) have a 50/50 split of resources, and the node has
never been fully loaded.

What should I do in order to get a better compromise between compression
and transfer rate?

Thanks

Hugo



Re: [Bacula-users] TLS and PKI, How to limit de encryption overhead ?

2010-11-18 Thread Hugo Letemplier
I already use data encryption because I want the content of my tapes to
be encrypted.
The aspect that bothers me about the communications is that
authentication / commands / console access are sent in the clear over
the network.
I am not sure what security level File Daemon encryption alone can
provide. I know that metadata isn't encrypted.
Do you have advice for that kind of thing?
For the moment I use SSH, but in the final configuration I might use
remote laptop computers with non-static IP configuration (changing IP
via console access...).
Thanks.
Hugo

2010/11/18 Radosław Korzeniewski rados...@korzeniewski.net:
 2010/11/18 Thomas Mueller tho...@chaschperli.ch

 On 18.11.2010 02:01, Dan Langille wrote:

 
  IMHO TLS is only used for the control-channel not for the data-
  channel.
 
  Really? I hope not. Can you prove this?
 

 ok maybe you're right. i've had in mind that it was not encrypted, but
 written is that the volumes written by sd are not encrypted. not the
 data transfer between fd and sd.

 The data written to Volumes by the Storage daemon is not encrypted by
 this code. 


 http://bacula.org/5.0.x-manuals/en/main/main/Bacula_TLS_Communications.html


 Data encryption is performed by the FD:

 http://bacula.org/5.0.x-manuals/en/main/main/Data_Encryption.html

 Radek

 --
 Radosław Korzeniewski
 rados...@korzeniewski.net






[Bacula-users] TLS and PKI, How to limit de encryption overhead ?

2010-11-17 Thread Hugo Letemplier
Hi
I am implementing Bacula and I have to encrypt the backed-up data.
Also, I don't want the console and client authentication to be done in
the clear over the network.
I want to implement TLS, but the data is already encrypted via PKI, so
TLS would add CPU overhead.
Can I configure Bacula to use TLS only to authenticate the client and
possibly sign data?
Thanks
Hugo L.
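
For reference, TLS in Bacula is enabled per resource with the TLS
directives, so it can be switched on for console and control connections;
a minimal sketch for the Director side (certificate paths are
hypothetical):

Director {
    Name = bacula-dir
    ...
    TLS Enable = yes
    TLS Require = yes
    TLS Certificate = /etc/bacula/tls/dir.crt
    TLS Key = /etc/bacula/tls/dir.key
    TLS CA Certificate File = /etc/bacula/tls/ca.crt
}

The peer resources (Console, and the Director resource in each fd/sd
configuration) need matching TLS directives on their side.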



Re: [Bacula-users] What happen if I delete a single incremental between the full and another incremental

2010-10-16 Thread Hugo Letemplier
Hi, thanks a lot for your answers.

I have retried with a new test scenario; it's clear now that deleting an
incremental is really dangerous.
But I think that a function that lets the administrator merge 2 jobs
would be cool.
Imagine that one day lots of data manipulation is done on the machine that
I want to back up, so there is a great difference between 2 incrementals.
The jobs are done, and deleting one job is dangerous for the jobs that
follow. In this case, it would be great to merge 2 jobs.
It's quite complicated to explain, I know.
Take a look at this little scenario, a classical full with its incremental
jobs; the client is typically a big file server:

1 - the full
2 - an incremental
3 - someone makes a mistake while exploring the file server and makes
lots of copies of files on the server (for example: a bad drag and drop)
4 - a nightly scheduled incremental
5 - the administrator sees that the last incremental picked up a lot of
new files and that the job bytes reached a huge value
5 - the user sees his error and deletes the duplicates
6 - a new incremental is run
7 - after checking everything, I want to reduce the size of my backups by
merging the two last incrementals. The idea is to add the new files of
step 4 to step 6, but without the files deleted at step 6.

Mathematically, it can be seen like this:

  Inc6.1 = Inc4
           - (files of Inc4 deleted at Inc6.0)
           + (new files of Inc6.0)
           + (files modified after Inc4, replaced by their Inc6.0 version)

I hope that this can be understood more easily than the previous post!

Thanks a lot 

Hugo

On 13 Oct 2010, at 17:53, Jari Fredriksson wrote:

 On 13.10.2010 18:21, Hugo Letemplier wrote:
 Hi,
 I have an important question that will help me validate some specs
 about Bacula 5.0.2.
 Imagine the following scenario:
 1 - a full
 2 - an incremental
 3 - an incremental
 4 - another incremental
 
 if I delete the incremental of step 3, does it move the files that
 have been added during step 3 onto the incremental of step 4?
 
 I have tried this scenario but my result is not clear. Can you tell me
 about your experience?
 
 In other words: can I delete one incremental without deleting more
 recent incrementals, and if I delete the full, does it upgrade the first
 incremental to a full?
 
 
 I *think* Bacula uses timestamps when doing incrementals. If you delete
 one incremental, you lose the files modified/created for that day.
 
 But if you delete the full, Bacula upgrades the next incremental to
 Full, as it finds no suitable Full to do the incremental for.
 
 




[Bacula-users] What happen if I delete a single incremental between the full and another incremental

2010-10-13 Thread Hugo Letemplier
Hi,
I have an important question that will help me validate some specs
about Bacula 5.0.2.
Imagine the following scenario:
1 - a full
2 - an incremental
3 - an incremental
4 - another incremental

if I delete the incremental of step 3, does it move the files that
have been added during step 3 onto the incremental of step 4?

I have tried this scenario but my result is not clear. Can you tell me
about your experience?

In other words: can I delete one incremental without deleting more
recent incrementals, and if I delete the full, does it upgrade the first
incremental to a full?

Thanks

Hugo L.



[Bacula-users] Fileset : to compress or not to compress

2010-09-30 Thread Hugo Letemplier
Hi
I haven't managed to verify that this fileset works correctly.
Indeed, some files are already compressed, so it's useless to compress them again.
Could you tell me whether it's on the right track, or else what I have to do?

Here is my fileset :

FileSet {
  Name = MacFull
  Include {
Options { # Common option
  HFSPlus Support = yes
  Signature = MD5
  ACL Support = yes
}
Options {  # Exclude these ones
   wilddir = /var/lib/bacula
   wilddir = /BaculaStorage
   wilddir = /BaculaRestore
   wilddir = /proc
   wilddir = /tmp
   wilddir = .TemporaryItems
   wilddir = .Spotlight-V100
   wildfile = /.journal
   wildfile = /.autofsck
   wildfile = .DS_STORE
   exclude = yes
}
Options {   # Don't compress these files
   wildfile = *.zip
   wildfile = *.gz
   wildfile = *.dmg
   wildfile = *.jpg
   wildfile = *.jpeg
   wildfile = *.rar
}
Options {
   compression = GZIP # Use compression for the rest
}
File = / # Save the root
File = /Users # plus /Users, which is on a second filesystem
  }
}
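
One way to check which files a FileSet actually selects, without running
a backup, is bconsole's estimate with the listing keyword (the job name
is hypothetical):

*estimate job=MacFull-Job level=Full listing

It walks the FileSet on the client and prints the files it would save,
so you can at least verify the exclude rules.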


Thank you

Hugo



[Bacula-users] RPM on CentOS

2010-09-15 Thread Hugo Letemplier
Hi
I am trying to find some RPMs for Bacula on CentOS 5.
I found them only for RedHat 7; can I use these, or do I absolutely need
to use CentOS RPMs?

I know that RPMs mostly work across RH and CentOS, but I want to be sure
not to have problems in the future.

Could you tell me about your experience with the compatibility of the
Bacula RH7 packages on CentOS 5?

thanks

Hugo



[Bacula-users] How to implement mobile/itinerant clients in bacula

2010-09-14 Thread Hugo Letemplier
Hi

I am implementing Bacula in my information system and I have to back up
various file/mail… servers.
I also have to back up a few laptop machines that are used for mobile
work like remote working/conferences.

The problem is that these stations may connect from various locations,
and therefore with different IP addresses.

Have you ever tried to implement a function where the station says hello
to the director when it is on a good network link or inside the local
network of the company, so that backups are done only when the station
is available?

According to the docs, this can't be done natively, because it is the
director that contacts the client on a static address or domain name,
but I think that a script could easily trigger a job in this case
without generating a lot of log and error reporting (a sketch follows
below).

What method would you use?

Thanks

Hugo L.
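
A minimal sketch of that kind of trigger, run from cron on the laptop;
host names and job name are hypothetical, and it assumes the laptop can
reach the backup server over ssh:

#!/bin/sh
# If the office Director is reachable, ask it (via ssh to the backup
# server) to start this laptop's job; otherwise exit quietly.
if nc -z -w 5 bacula.example.com 9101; then
    ssh backup@bacula.example.com "echo 'run job=LaptopBackup yes' | bconsole"
fi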
