[Bacula-users] Two autoloaders in the same pool

2012-02-22 Thread Martin Emrich
Hi!

We have a Bacula (5.2.5) storage daemon with two identical autoloaders (16-slot 
Quantum Superloader 3 with LTO4 drive). They are working fine, and I labeled the 
first set of tapes in both with "label barcode".
Now the problem: in the pool I have two tapes with slot==1, and Bacula now 
tries to load the slot-1 tape from the second autoloader into slot 1 on the first 
autoloader. The first loader of course loads its own slot 1, which contains a 
full tape.

How do I tell bacula that the first autoloader cannot access the tapes in the 
second one, _without_ configuring two separate pools or media types? We would 
like to be able to put any tape in any loader if they are expired or required 
for restore...

Or is this impossible?
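
For context, the two loaders are declared roughly like this (a sketch with 
illustrative names and device paths, not my literal config). Both use the same 
Media Type so that any tape can be mounted in either loader:

# bacula-sd.conf
Autochanger {
   Name = Loader1
   Device = Loader1-LTO4
   Changer Device = /dev/sg3
   Changer Command = "/usr/lib/bacula/mtx-changer %c %o %S %a %d"
}

Autochanger {
   Name = Loader2
   Device = Loader2-LTO4
   Changer Device = /dev/sg4
   Changer Command = "/usr/lib/bacula/mtx-changer %c %o %S %a %d"
}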

Thanks

Martin Emrich
IT Administrator
Attensity Europe GmbH  |  Campus D3 2 |  66123 Saarbrücken |  Germany
Phone +49 (0) 681 85767 41 |  Fax +49 (0) 6821 85767 99
martin.emr...@attensity.com

www.attensity.com
Sitz Kaiserslautern  |  Amtsgericht Kaiserslautern HRB 30711
Geschäftsführer: Dr. Christian Schulmeyer, Dr. Peter Tepassé, Stefan Volland, 
Dr. Stefan Wess



--
Virtualization & Cloud Management Using Capacity Planning
Cloud computing makes use of virtualization - but cloud computing 
also focuses on allowing computing to be delivered as a service.
http://www.accelacomm.com/jaw/sfnl/114/51521223/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] jobs fail with various broken pipe errors

2012-02-22 Thread Hugo Letemplier
I think you can try to configure the Heartbeat Interval directive on
your various daemons.
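
Something along these lines in the respective daemon configs (a sketch; the 
resource names follow this thread and the 300-second value is just an example):

# bacula-fd.conf on the client
FileDaemon {
   Name = fbsd1-fd
   Heartbeat Interval = 300   # keepalives so idle WAN connections stay up
}

# bacula-sd.conf on the backup server
Storage {
   Name = backupsrv-sd
   Heartbeat Interval = 300
}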





2012/2/22 Silver Salonen sil...@serverock.ee:
 Hi.



 Recently we changed the network connection for our backup server which is
 Bacula 5.2.3 on FreeBSD 9.0.



 After that many jobs running across WAN started failing with various broken
 pipe errors. Some examples:



 21-Feb 22:42 fbsd1-fd JobId 57779: Error: bsock.c:398 Wrote 32151 bytes to
 Storage daemon:backupsrv.url:9103, but only 16384 accepted.

 21-Feb 22:42 fbsd1-fd JobId 57779: Fatal error: backup.c:1024 Network send
 error to SD. ERR=Broken pipe

 21-Feb 22:42 fbsd1-fd JobId 57779: Error: bsock.c:339 Socket has errors=1 on
 call to Storage daemon:backupsrv.url:9103



 (this one runs from the same backup-network to another SD)

 22-Feb 00:14 backupsrv-dir JobId 57852: Fatal error: Network error with FD
 during Backup: ERR=Broken pipe

 22-Feb 00:14 backupsrv-sd2 JobId 57852: JobId=57852
 Job=linux1-userdata.2012-02-21_23.05.02_02 marked to be canceled.

 22-Feb 00:14 backupsrv-sd2 JobId 57852: Job write elapsed time = 00:33:20,
 Transfer rate = 591.5 K Bytes/second

 22-Feb 00:14 backupsrv-sd2 JobId 57852: Error: bsock.c:529 Read expected
 65568 got 1448 from client:123.45.67.81:36643

 22-Feb 00:14 backupsrv-dir JobId 57852: Fatal error: No Job status returned
 from FD.



 22-Feb 00:16 backupsrv-dir JobId 57821: Fatal error: Network error with FD
 during Backup: ERR=Broken pipe

 22-Feb 00:16 backupsrv-sd JobId 57821: Job write elapsed time = 00:57:00,
 Transfer rate = 26.69 K Bytes/second

 22-Feb 00:16 backupsrv-dir JobId 57821: Fatal error: No Job status returned
 from FD.



 22-Feb 00:24 winsrv1-fd JobId 57784: Error:
 /home/kern/bacula/k/bacula/src/lib/bsock.c:393 Write error sending 9363
 bytes to Storage daemon:backupsrv.url:9103: ERR=Input/output error

 22-Feb 00:24 winsrv1-fd JobId 57784: Fatal error:
 /home/kern/bacula/k/bacula/src/filed/backup.c:1024 Network send error to SD.
 ERR=Input/output error

 22-Feb 00:26 winsrv1-fd JobId 57784: Error:
 /home/kern/bacula/k/bacula/src/lib/bsock.c:339 Socket has errors=1 on call
 to Storage daemon:backupsrv.url:9103



 (this one runs from the same backup-network to another SD)

 22-Feb 01:33 backupsrv-dir JobId 57872: Fatal error: Socket error on
 ClientRunBeforeJob command: ERR=Broken pipe

 22-Feb 01:33 backupsrv-dir JobId 57872: Fatal error: Client winsrv2-fd
 RunScript failed.

 22-Feb 01:33 backupsrv-dir JobId 57872: Fatal error: Network error with FD
 during Backup: ERR=Broken pipe

 22-Feb 01:33 backupsrv-sd2 JobId 57872: JobId=57872
 Job=winsrv2.2012-02-22_01.00.00_27 marked to be canceled.

 22-Feb 01:33 backupsrv-dir JobId 57872: Fatal error: No Job status returned
 from FD.



 22-Feb 01:51 fbsd2-fd JobId 57806: Error: bsock.c:398 Wrote 61750 bytes to
 Storage daemon:backupsrv.url:9103, but only 16384 accepted.

 22-Feb 01:51 fbsd2-fd JobId 57806: Fatal error: backup.c:1024 Network send
 error to SD. ERR=Broken pipe

 22-Feb 01:51 fbsd2-fd JobId 57806: Error: bsock.c:339 Socket has errors=1 on
 call to Storage daemon:backupsrv.url:9103



 22-Feb 02:15 backupsrv-dir JobId 57819: Fatal error: Network error with FD
 during Backup: ERR=Connection reset by peer

 22-Feb 02:15 backupsrv-dir JobId 57819: Fatal error: No Job status returned
 from FD.





 These jobs have been failing every day for a week now. Meanwhile other jobs
 complete just fine, and it does not seem to be about the jobs' size, scripts
 run before jobs on clients, etc.



 Any idea what could be wrong?



 --

 Silver






[Bacula-users] Scheduling hourly backups with different levels and pools

2012-02-22 Thread joenyland
Hi,

I'm in the process of setting up MySQL backups in Bacula, using mysqldump for 
full backups and backing up my bin logs for incremental backups.

What I would like to do is to perform a full backup to my full backup pool at 
00:00 every night, then perform incremental backups to my incremental pool 
every hour thereafter.

Just as a rough config, I have the current schedule at the moment, whilst 
testing:

Schedule {
  Name = "TestServer MySQL Database Schedule"
  Run = Level=Full pool=TestServer_MySQL_Full Storage=TestServer_MySQL_Full daily at 00:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 01:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 02:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 03:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 04:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 05:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 06:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 07:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 08:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 09:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 10:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 11:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 12:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 13:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 14:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 15:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 16:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 17:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 18:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 19:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 20:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 21:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 22:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc daily at 23:00
}

I feel that there must be another, cleaner way to define this kind of backup 
schedule, but I can't seem to find one in the manual.

Has anyone implemented such a schedule, and if so, is this how you did it? If 
anyone else has any input, I'd appreciate it if you could share it.

Many thanks,
Joe


Re: [Bacula-users] jobs fail with various broken pipe errors

2012-02-22 Thread Silver Salonen
On Wed, 22 Feb 2012 12:33:49 +0100, Hugo Letemplier wrote:
 I think you can try to configure the Heartbeat Interval directive on
 your various daemons.

Hi.

My SD already had Heartbeat Interval set to 60. I now tried it on one 
FD too, but the job still failed with the same error.

Other FD's on both FreeBSD and Linux are able to run jobs for hours and 
complete them successfully.

--
Silver

 2012/2/22 Silver Salonen sil...@serverock.ee:
 Hi.



 Recently we changed the network connection for our backup server 
 which is
 Bacula 5.2.3 on FreeBSD 9.0.



 After that many jobs running across WAN started failing with various 
 broken
 pipe errors. Some examples:



 21-Feb 22:42 fbsd1-fd JobId 57779: Error: bsock.c:398 Wrote 32151 
 bytes to
 Storage daemon:backupsrv.url:9103, but only 16384 accepted.

 21-Feb 22:42 fbsd1-fd JobId 57779: Fatal error: backup.c:1024 
 Network send
 error to SD. ERR=Broken pipe

 21-Feb 22:42 fbsd1-fd JobId 57779: Error: bsock.c:339 Socket has 
 errors=1 on
 call to Storage daemon:backupsrv.url:9103



 (this one runs from the same backup-network to another SD)

 22-Feb 00:14 backupsrv-dir JobId 57852: Fatal error: Network error 
 with FD
 during Backup: ERR=Broken pipe

 22-Feb 00:14 backupsrv-sd2 JobId 57852: JobId=57852
 Job=linux1-userdata.2012-02-21_23.05.02_02 marked to be canceled.

 22-Feb 00:14 backupsrv-sd2 JobId 57852: Job write elapsed time = 
 00:33:20,
 Transfer rate = 591.5 K Bytes/second

 22-Feb 00:14 backupsrv-sd2 JobId 57852: Error: bsock.c:529 Read 
 expected
 65568 got 1448 from client:123.45.67.81:36643

 22-Feb 00:14 backupsrv-dir JobId 57852: Fatal error: No Job status 
 returned
 from FD.



 22-Feb 00:16 backupsrv-dir JobId 57821: Fatal error: Network error 
 with FD
 during Backup: ERR=Broken pipe

 22-Feb 00:16 backupsrv-sd JobId 57821: Job write elapsed time = 
 00:57:00,
 Transfer rate = 26.69 K Bytes/second

 22-Feb 00:16 backupsrv-dir JobId 57821: Fatal error: No Job status 
 returned
 from FD.



 22-Feb 00:24 winsrv1-fd JobId 57784: Error:
 /home/kern/bacula/k/bacula/src/lib/bsock.c:393 Write error sending 
 9363
 bytes to Storage daemon:backupsrv.url:9103: ERR=Input/output error

 22-Feb 00:24 winsrv1-fd JobId 57784: Fatal error:
 /home/kern/bacula/k/bacula/src/filed/backup.c:1024 Network send 
 error to SD.
 ERR=Input/output error

 22-Feb 00:26 winsrv1-fd JobId 57784: Error:
 /home/kern/bacula/k/bacula/src/lib/bsock.c:339 Socket has errors=1 
 on call
 to Storage daemon:backupsrv.url:9103



 (this one runs from the same backup-network to another SD)

 22-Feb 01:33 backupsrv-dir JobId 57872: Fatal error: Socket error on
 ClientRunBeforeJob command: ERR=Broken pipe

 22-Feb 01:33 backupsrv-dir JobId 57872: Fatal error: Client 
 winsrv2-fd
 RunScript failed.

 22-Feb 01:33 backupsrv-dir JobId 57872: Fatal error: Network error 
 with FD
 during Backup: ERR=Broken pipe

 22-Feb 01:33 backupsrv-sd2 JobId 57872: JobId=57872
 Job=winsrv2.2012-02-22_01.00.00_27 marked to be canceled.

 22-Feb 01:33 backupsrv-dir JobId 57872: Fatal error: No Job status 
 returned
 from FD.



 22-Feb 01:51 fbsd2-fd JobId 57806: Error: bsock.c:398 Wrote 61750 
 bytes to
 Storage daemon:backupsrv.url:9103, but only 16384 accepted.

 22-Feb 01:51 fbsd2-fd JobId 57806: Fatal error: backup.c:1024 
 Network send
 error to SD. ERR=Broken pipe

 22-Feb 01:51 fbsd2-fd JobId 57806: Error: bsock.c:339 Socket has 
 errors=1 on
 call to Storage daemon:backupsrv.url:9103



 22-Feb 02:15 backupsrv-dir JobId 57819: Fatal error: Network error 
 with FD
 during Backup: ERR=Connection reset by peer

 22-Feb 02:15 backupsrv-dir JobId 57819: Fatal error: No Job status 
 returned
 from FD.





 These jobs have been failing every day for a week now. Meanwhile 
 other jobs
 complete just fine, and it seems not to about jobs' size or scripts 
 to be
 run before jobs on clients etc.



 Any idea what could be wrong?



 --

 Silver


 





Re: [Bacula-users] Scheduling hourly backups with different levels and pools

2012-02-22 Thread John Drescher
2012/2/22  joenyl...@me.com:

 Hi,

 I'm in the process of setting up MySQL backups in Bacula, using mysqldump
 for full backups and backing up my bin logs for incremental backups.

 What I would like to do is to perform a full backup to my full backup pool
 at 00:00 every night, then perform incremental backups to my incremental
 pool every hour thereafter.

 Just as a rough config, I have the current schedule at the moment, whilst
 testing:

 Schedule {
   Name = TestServer MySQL Database Schedule
   Run = Level=Full pool=TestServer_MySQL_Full Storage=TestServer_MySQL_Full
 daily at 00:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 01:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 02:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 03:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 04:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 05:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 06:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 07:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 08:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 09:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 10:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 11:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 12:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 13:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 14:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 15:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 16:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 17:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 18:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 19:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 20:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 21:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 22:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 23:00
 }

 I feel that there must be another, cleaner, way to define this kind of
 backup schedule, but I can't seem to be able to find one from the manual.


You could make the default level Incremental and the default Pool
TestServer_MySQL_Inc in your Job and cut all overrides but the full;
however, I would leave this alone. Your schedule is fine.
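
For completeness, that variant would look roughly like this (a sketch; Client,
FileSet and Messages are omitted, and the remaining hourly Run lines are
elided):

Job {
  Name = "TestServer-MySQL"
  Type = Backup
  Level = Incremental               # default level
  Pool = TestServer_MySQL_Inc       # default pool
  Storage = TestServer_MySQL_Inc
  Schedule = "TestServer MySQL Database Schedule"
}

Schedule {
  Name = "TestServer MySQL Database Schedule"
  Run = Level=Full pool=TestServer_MySQL_Full Storage=TestServer_MySQL_Full daily at 00:00
  Run = daily at 01:00    # inherits level, pool and storage from the Job
  Run = daily at 02:00
  # ...one line per remaining hour up to 23:00
}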

John



[Bacula-users] RHEL 4/5/6 - Fedora 15/16 Bacula RPM repository

2012-02-22 Thread Simone Caronni
Hello,

I've updated the repository; Fedora 17 has been branched, so the Fedora 17 and
rawhide (Fedora 18) packages are identical to the ones I'm providing.

Please keep in mind that at the end of February RHEL/CentOS 4 will be EOL, so
I will probably stop providing packages for that distribution.
Please read the readme file.

http://repos.fedorapeople.org/repos/slaanesh/bacula/README.txt

Regards,
--Simone



-- 
You cannot discover new oceans unless you have the courage to lose sight of
the shore (R. W. Emerson).


[Bacula-users] Volumes not auto created

2012-02-22 Thread Raymond Norton
Bacula stopped creating new volumes. I can create them manually, and jobs 
run after that, but I'm trying to figure out why they are no longer auto 
created.

There were 7 volumes auto created before I had to manually create the 
last two.


Any ideas what to look for?



I have the following configuration for my default pool:


Pool {
   Name = Default
   Pool Type = Backup
   Recycle = yes                   # Bacula can automatically recycle Volumes
   AutoPrune = yes                 # Prune expired volumes
   Volume Retention = 30 days
   Maximum Volume Bytes = 50G
   Maximum Volumes = 10
   LabelFormat = bakTrak
}






Re: [Bacula-users] Volumes not auto created

2012-02-22 Thread Raymond Norton
Should include SD info:


Device {
   Name = FileStorage
   Media Type = File
   Archive Device = /backups
   LabelMedia = yes;   # lets Bacula label unlabeled media
   Random Access = Yes;
   AutomaticMount = yes;   # when device opened, read it
   RemovableMedia = no;
   AlwaysOpen = no;
}



On 02/22/2012 08:00 AM, Raymond Norton wrote:
 Bacula stopped creating new volumes. I can create them manually and jobs
 run after that, but trying to figure out why they are not auto created
 any longer.

 There were 7 volumes auto created before I had to manually create the
 last two..


 Any ideas what to look for?



 I have the following configuration for my default pool:


 Pool {
 Name = Default
 Pool Type = Backup
 Recycle = yes   # Bacula can automatically
 recycle Volumes
 AutoPrune = yes # Prune expired volumes
 Volume Retention = 30 days # one year
 Maximum Volume Bytes = 50G
 Maximum Volumes = 10
 LabelFormat = bakTrak
 }








Re: [Bacula-users] Scheduling hourly backups with different levels and pools

2012-02-22 Thread Jérôme Blion
On Wed, 22 Feb 2012 08:29:22 -0500, John Drescher wrote:
 2012/2/22  joenyl...@me.com:

 Hi,

 I'm in the process of setting up MySQL backups in Bacula, using 
 mysqldump
 for full backups and backing up my bin logs for incremental backups.

 What I would like to do is to perform a full backup to my full 
 backup pool
 at 00:00 every night, then perform incremental backups to my 
 incremental
 pool every hour thereafter.

 Just as a rough config, I have the current schedule at the moment, 
 whilst
 testing:

 Schedule {
   Name = TestServer MySQL Database Schedule
   Run = Level=Full pool=TestServer_MySQL_Full 
 Storage=TestServer_MySQL_Full
 daily at 00:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 01:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 02:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 03:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 04:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 05:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 06:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 07:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 08:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 09:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 10:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 11:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 12:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 13:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 14:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 15:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 16:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 17:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 18:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 19:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 20:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 21:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 22:00
   Run = Level=Incremental pool=TestServer_MySQL_Inc
 Storage=TestServer_MySQL_Inc daily at 23:00
 }

 I feel that there must be another, cleaner, way to define this kind 
 of
 backup schedule, but I can't seem to be able to find one from the 
 manual.


 You could make the default level Incremental and the default Pool
 TestServer_MySQL_Inc in your Job and cut all overrides but the full
 however I would leave this alone. Your schedule is fine.

 John


Hello,

As far as I can see, you only have 2 pools, one for each type.
Why don't you use the hourly keyword to schedule the incremental backups?
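
With the hourly keyword the schedule would collapse to something like this (a
sketch, assuming the Run-directive syntax from the Bacula schedule
documentation; the minute offset keeps 00:00 free for the full):

Schedule {
  Name = "TestServer MySQL Database Schedule"
  Run = Level=Full pool=TestServer_MySQL_Full Storage=TestServer_MySQL_Full daily at 00:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc Storage=TestServer_MySQL_Inc hourly at 0:30
}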

HTH.
Jérôme Blion.



Re: [Bacula-users] Volumes not auto created

2012-02-22 Thread Silver Salonen
On Wed, 22 Feb 2012 08:05:36 -0600, Raymond Norton wrote:
 Should include SD info:


 Device {
Name = FileStorage
Media Type = File
Archive Device = /backups
LabelMedia = yes;   # lets Bacula label unlabeled 
 media
Random Access = Yes;
AutomaticMount = yes;   # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
 }



 On 02/22/2012 08:00 AM, Raymond Norton wrote:
 Bacula stopped creating new volumes. I can create them manually and 
 jobs
 run after that, but trying to figure out why they are not auto 
 created
 any longer.

 There were 7 volumes auto created before I had to manually create 
 the
 last two..


 Any ideas what to look for?



 I have the following configuration for my default pool:


 Pool {
 Name = Default
 Pool Type = Backup
 Recycle = yes   # Bacula can automatically
 recycle Volumes
 AutoPrune = yes # Prune expired volumes
 Volume Retention = 30 days # one year
 Maximum Volume Bytes = 50G
 Maximum Volumes = 10
 LabelFormat = bakTrak
 }

I have run into this issue many times, and almost every time it turns out 
to be file/directory ownership or permissions.

Are you sure you haven't changed something in there lately?
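
One quick check is whether the SD's runtime user can actually create files 
under the Archive Device directory. A minimal sketch (it probes a scratch path 
by default; substitute the real /backups and run it as the SD's user, e.g. via 
su -s /bin/sh bacula -c '...'):

```shell
# Probe whether the given directory is writable by the current user.
dir="${1:-/tmp/backups-probe}"        # substitute the real Archive Device path
mkdir -p "$dir"
if touch "$dir/.bacula-write-test" 2>/dev/null; then
    echo "writable: $dir"
    rm -f "$dir/.bacula-write-test"   # clean up the probe file
else
    echo "NOT writable: $dir"
fi
```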

--
Silver



Re: [Bacula-users] Volumes not auto created

2012-02-22 Thread John Drescher
On Wed, Feb 22, 2012 at 9:00 AM, Raymond Norton ad...@lctn.org wrote:
 Bacula stopped creating new volumes. I can create them manually and jobs
 run after that, but trying to figure out why they are not auto created
 any longer.

 There were 7 volumes auto created before I had to manually create the
 last two..


 Any ideas what to look for?



 I have the following configuration for my default pool:


 Pool {
   Name = Default
   Pool Type = Backup
   Recycle = yes                       # Bacula can automatically
 recycle Volumes
   AutoPrune = yes                     # Prune expired volumes
   Volume Retention = 30 days         # one year
   Maximum Volume Bytes = 50G
   Maximum Volumes = 10
   LabelFormat = bakTrak
 }


Did you change the pool definition without issuing the reload command
in bconsole or restarting bacula-dir? Also, since the limit is 10
volumes, Bacula can only auto create 1 more, since I believe you have 9
total volumes now.
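
You can verify the counts from bconsole; list pools shows each pool's current
number of volumes against its maximum. Also note that after editing the Pool
resource and reloading, the catalog copy of the pool only picks up the new
limits once you run update (a sketch of the commands):

*list pools
*list volumes pool=Default
*update pool from resource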

John



[Bacula-users] Release Bacula version 5.2.6

2012-02-22 Thread Kern Sibbald
Hello,

This morning, we released Bacula version 5.2.6 to Source Forge.

You may be wondering why there have been so many releases of version 
5.2.x.  It isn't because there are any problems; rather, Bacula Systems 
is now so well organized (with a new CEO and many new employees) that I 
have more time to work on Bacula, which means we are fixing more bugs 
and implementing new features.  You will see the results in releases 
when appropriate.

At the end of 2011 Bacula Systems introduced a new service offering 
specifically for existing community users called
the Bacula Enterprise Selective Migration Plan. This program is geared
towards IT shops and systems administrators who have deployed the
community version of Bacula and wish to professionalize their backup and 
restore infrastructure using the Enterprise Edition.

The success of this plan surpassed our greatest expectations and 
therefore we have decided to extend it until the 31st of March 2012!


The content of the offer remains the same and includes:

   -  Bacula Enterprise Edition (latest version available) and all
      updates and new releases while you are subscribed.

   -  1 plugin of your choice (see list in the current price list)

   -  1 day of remote consulting

   -  Migration to 4.0 White Paper

   -  Seats in our training courses (Admin I and/or II) at 50% discount.


You may find more details about this Selective Migration Plan on our
website at:

 
http://www.baculasystems.com/index.php/products/bee-selective-mig-plan.html

Below for your convenience, you will find the Release Notes.

Thanks for using Bacula.

Best regards,
Kern

==

Release Notes for Bacula 5.2.6

   Bacula code: Total files = 1,110 Total lines = 231,439 (Using SLOCCount)

General:

The 5.2.6 version is a bug fix release.

!
If you are upgrading directly from 5.0.3 to this version, please see the
important notices below for version 5.2.3, particularly
the database upgrade and the difference in packaging the
SQL shared libraries.
!

!!
If you store .bsr or .mail files in the Bacula working
directory, please be aware that they will all be deleted
each time the Director starts.


Compatibility:
--
  As always, both the Director and Storage daemon must be upgraded at
  the same time.

  Older 5.0.x and 3.0.x File Daemons are compatible with the 5.2.3
  Director and Storage daemons. There should be no need to upgrade
  older File Daemons.

New Feature:
  - The restore tree cd command accepts wildcards within each
    part of a path. Wildcards apply only to a single part at a
    time: e.g. cd a*/b*/xx* will match abc/bcd/xxfxx
    but */xx* will not match the above filename.

Changes since 5.2.5:

17Feb12
  - Fix old exchange-fd plugin Accurate checkFile code.
  - Insert the slot field as a numeric field.
  - Fix #1831 by dropping the table before creating it
  - Make cd accept wildcards
  - Remove bad optimization from Accurate code
  - Lock read acquire in SD to prevent two read jobs getting the
    same thing
  - Implement more robust check in other drives for tape slot wanted
  - Fix lost dcr pointer -- memory loss in Copy/Migration + possible
    confusion
  - Ensure that bvfs SQL link is not shared
  - Fix error printing in acl and xattr code.
  - Backport better error debug output for sd plugins.
  - Add wait on bad connection for security
  - Make mtx-changer more fault tolerant
  - Fix 32/64 bit problems in SD sscanf commands
  - Skip certain filesystem types on some platforms.
  - Allow BVFS to browse and restore Base jobs
  - Add error message to .bvfs_clear_cache command
  - Fix plugin bug with multiple simultaneous jobs

Bugs fixed/closed since last release:
1831



--
Virtualization & Cloud Management Using Capacity Planning
Cloud computing makes use of virtualization - but cloud computing 
also focuses on allowing computing to be delivered as a service.
http://www.accelacomm.com/jaw/sfnl/114/51521223/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Fwd: Volumes not auto created

2012-02-22 Thread John Drescher
-- Forwarded message --
From: Raymond Norton ad...@lctn.org
Date: Wed, Feb 22, 2012 at 9:41 AM
Subject: Re: [Bacula-users] Volumes not auto created
To: John Drescher dresche...@gmail.com


Should have mentioned, the volume folder is an NFS mount.


drwxrwxrwx   3 root     root      4096 2012-02-22 07:49 backups


 Did you change the pool definition without issuing the reload command
 in bacula or restarting bacula-dir? Also, since the limit is 10
 volumes, Bacula can only auto-create 1 more, as I believe you have 9
 volumes now.

 John




-- 
John M. Drescher



Re: [Bacula-users] Volumes not auto created

2012-02-22 Thread Silver Salonen
On Wed, 22 Feb 2012 08:29:50 -0600, Raymond Norton wrote:
 I have ran into this issue many times and almost every time it's the
 case of file/directory ownerships/permissions.

 Are you sure you haven't changed something in there lately?



 I have not touched it since creation, outside of rebooting since the
 problem cropped up. What should permissions be on the backup folder
 for volumes?

You'd have to check which user/group the bacula-sd process runs under 
and then give write permission to that user/group on the backup 
folder.
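A quick way to sanity-check the advice above is to test write access on the volume directory as the daemon's user. The sketch below uses a temporary directory as a stand-in for the real backup folder (an assumption; on the actual system, run the check as the bacula-sd user after finding it with something like `ps -o user= -C bacula-sd`):

```python
# Sketch: verify the current user can write to a volume directory.
# The temp dir stands in for the real /backups NFS mount (assumption).
import os
import tempfile

voldir = tempfile.mkdtemp()      # stand-in for the backup folder
os.chmod(voldir, 0o770)          # owner/group rwx, like a typical setup
print("writable" if os.access(voldir, os.W_OK) else "NOT writable")
os.rmdir(voldir)
```

Note that on NFS mounts, root-squash can make a directory unwritable for root even when the mode bits look permissive, so run the check as the same user the storage daemon uses.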

--
Silver



[Bacula-users] update slots scan after replacing autochanger

2012-02-22 Thread Tilman Schmidt

Hi all,

I'm running Bacula 5.0.0-9.el6 from the CentOS 6 base repository.
My 24 slot autochanger with barcode reader died on me, and in order
to keep backups going I replaced it with a spare unit that has only
8 slots and no barcode reader. After the transplant I ran

*update slots storage=LTO-3 scan

in bconsole to resync Bacula's notion of which tape to find where.
It noted correctly

Device LTO-3 has 8 slots.

and updated the database so that list media showed the correct
cartridges for the existing slots 1 through 8. But it still listed
tapes as present in the now nonexistent slots 9 through 24. Excerpt:

+---------+----------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName     | VolStatus | Enabled | VolBytes        | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+----------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|      28 | vmware_image_1 | Used      |       1 | 557,353,681,920 |      149 |    2,073,600 |       1 |   15 |         1 | LTO-3     | 2012-01-17 17:51:51 |
|      29 | vmware_image_2 | Used      |       1 | 107,145,206,784 |       29 |    2,073,600 |       1 |   16 |         1 | LTO-3     | 2012-01-07 07:57:24 |
|      31 | image_1        | Append    |       1 |          64,512 |        0 |    2,073,600 |       1 |    8 |         1 | LTO-3     | 0000-00-00 00:00:00 |
|      32 | image_2        | Append    |       1 |          64,512 |        0 |    2,073,600 |       1 |   18 |         1 | LTO-3     | 0000-00-00 00:00:00 |
+---------+----------------+-----------+---------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+

(I have mounted volume image_1, which was previously in slot 17,
in slot 8 now, while volumes image_2 and vmware_image_1/_2 are
sitting on my desk for lack of a free slot.)

I corrected the situation manually with a series of

*update volume=... slot=0 inchanger=no

commands. (I wasn't sure whether just setting inchanger=no and
leaving the Slot field non-zero would be ok.) Now the output of
list media looks ok.
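Issuing that correction for many volumes by hand is tedious; a small sketch that generates the bconsole commands for a list of volumes (the volume names below are examples, not necessarily the poster's actual tapes):

```python
# Sketch: generate "update volume" commands for volumes that are no
# longer in the changer. Volume names here are illustrative examples.
volumes = ["vmware_image_1", "vmware_image_2", "image_2"]

commands = [f"update volume={v} slot=0 inchanger=no" for v in volumes]
for cmd in commands:
    print(cmd)
# The printed lines could then be piped into bconsole.
```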

But shouldn't the update slots command have done that for me in
the first place?

Thanks,

-- 
Tilman Schmidt
Phoenix Software GmbH
Bonn, Germany



[Bacula-users] Automounting tape volumes for restoration

2012-02-22 Thread Conrad Lawes
Hello Bacula Users,

I'm having a little problem. I have an Exabyte LTO3 tape library with an 
autochanger attached. Whenever I create a restore job, the Director 
reports that it is waiting for the storage device to become available.
At this point, I have to manually mount the tape volume that I need to restore 
from. Is there any way to force Bacula to automatically mount the appropriate 
tape from the autoloader during a restore job?

For backup jobs, tapes are mounted automatically as needed. I wish to do the 
same for restore jobs.

Thanks.






Re: [Bacula-users] Automounting tape volumes for restoration

2012-02-22 Thread John Drescher
On Wed, Feb 22, 2012 at 3:14 PM, Conrad Lawes cla...@navtech.aero wrote:
 Hello Bacula Users,

 I having a little problem.  I have  an Exabyte LTO3  Tape Library with 
 AutoChanger attached.   Whenever I create a restoration job  the Director 
 reports that it is waiting for the Storage device to become available.
 At this point, I have to manually mount the tape volume that I need to 
 restore from.   Is there any way to force Bacula to auto mount the 
 appropriate tape from the auto loader during a restore job?

 For backup jobs, tapes are mounted automatically as needed. I wish to do the 
 same for restoration jobs.


All I can tell you is that this is not normal behavior, provided the tapes
that are needed are in the autochanger and you have not used the
umount command in bacula after the last operation.


John



Re: [Bacula-users] Scheduling hourly backups with different levels and pools

2012-02-22 Thread Joe Nyland
On 23 Feb 2012, at 00:50, Jérôme Blion wrote:

 On Wed, 22 Feb 2012 15:13:26 + (GMT), Joe Nyland wrote:
 On 22 Feb, 2012,at 02:11 PM, Jérôme Blion  wrote:
 
 On Wed, 22 Feb 2012 08:29:22 -0500, John Drescher wrote:
  2012/2/22 :
 
  Hi,
 
  I'm in the process of setting up MySQL backups in Bacula, using
  mysqldump
  for full backups and backing up my bin logs for incremental
 backups.
 
  What I would like to do is to perform a full backup to my full
  backup pool
  at 00:00 every night, then perform incremental backups to my
  incremental
  pool every hour thereafter.
 
  Just as a rough config, I have the current schedule at the
 moment,
  whilst
  testing:
 
  Schedule {
  Name = TestServer MySQL Database Schedule
  Run = Level=Full pool=TestServer_MySQL_Full
  Storage=TestServer_MySQL_Full
  daily at 00:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 01:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 02:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 03:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 04:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 05:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 06:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 07:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 08:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 09:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 10:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 11:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 12:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 13:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 14:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 15:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 16:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 17:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 18:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 19:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 20:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 21:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 22:00
  Run = Level=Incremental pool=TestServer_MySQL_Inc
  Storage=TestServer_MySQL_Inc daily at 23:00
  }
 
  I feel that there must be another, cleaner, way to define this
 kind
  of
  backup schedule, but I can't seem to be able to find one from
 the
  manual.
 
 
  You could make the default level Incremental and the default Pool
  TestServer_MySQL_Inc in your Job and cut all overrides but the
 full
  however I would leave this alone. Your schedule is fine.
 
  John
 
 Hello,
 
 As far as I see, you only have 2 pools, one for each type.
 Why don't you use the hourly keyword to schedule incremental backups?
 
 HTH.
 Jérôme Blion.
 
 
 
 Jérôme and John, thank you for your replies.
 
 Jérôme, yes that's right. For this type of job and this schedule, I
 am dealing with two pools. I am aware of the hourly keyword. Whilst
 this keyword appears to be just what I need, my concern with using it
 is that at 00:00, I would get a full backup initiated by my schedule,
 but also an incremental backup at 00:00 too, due to the hourly
 keyword, would I not?
 
 Kind regards,
 
 Joe
 
 Several points which could help you:
 - Incremental hourly at 0:17: i.e. plan the incrementals a few minutes 
 before the full starts.
 - In the script to run, just return 0 if the full backup is already running.
 - The schedule won't start if another instance is running. Perhaps you could 
 create 2 schedules...
 
 HTH.
 Jérôme Blion.


Thanks for your suggestions, but I'm not sure I understand them fully, I'm 
afraid.
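For reference, John's earlier suggestion of moving the defaults into the Job resource would compact the configuration to something like this (a sketch using the resource names from the original post, untested; the hourly run is offset per Jérôme's advice so it does not coincide with the midnight full):

```
Job {
  ...
  Level = Incremental
  Pool = TestServer_MySQL_Inc
  Storage = TestServer_MySQL_Inc
  Schedule = "TestServer MySQL Database Schedule"
}

Schedule {
  Name = "TestServer MySQL Database Schedule"
  Run = Level=Full Pool=TestServer_MySQL_Full Storage=TestServer_MySQL_Full daily at 00:00
  Run = hourly at 0:30
}
```

With the Incremental level and pool as Job defaults, the Schedule needs only the Full override plus a single hourly Run line instead of 23 separate entries.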