Re: [Bacula-users] Strange behaviour with Multiple Run statements in Job Resource

2009-03-26 Thread Andreas Bogacki
Andreas Bogacki wrote:
 Hi,

 I seem to have found a problem with multiple Run statements in a Job
 resource.
 My setup is a bit unusual: one location has too little storage, so I
 set up a chain of migration jobs.

 scheduled backup1 Job writes to filestorage1
 migrate1 Job moves from filestorage1 to filestorage2 based on volumetime
 migrate2 Job moves from filestorage2 to filestorage3 based on volumetime
 migrate3 Job moves from filestorage3 to filestorage4 based on volumetime

 The preferred execution order would be migrate3, migrate2, migrate1, backup1.
 Using priorities to get that order is not practical because there are lots
 of other backup jobs. (Last time I tried, I had a weekend full of jobs
 waiting for one mount request.)

 Run is not recursive, so I tried to use multiple Run statements in the
 backup1 Job.
 This led to the director spawning a massive number of those migrate
 jobs (not 3 as expected, but 100+). I suppose this behaviour is
 wrong.

 Is there any way to define dependencies between jobs that goes deeper than
 "start this single job before this one runs", or will I have to
 create staggered schedules with all jobs having the same priority?

 thanks for your help
 Andreas Bogacki

Forgot to provide some details:
Debian lenny
bacula 2.4.4 (28 December 2008) x86_64-pc-linux-gnu debian lenny/sid

The spawning of those jobs causes a "Too many open files" error at some point:
message.c:589 fopen /var/log/bacula/log failed: ERR=Too many open files
and then the director segfaults.

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Strange behaviour with Multiple Run statements in Job Resource

2009-03-26 Thread Andreas Bogacki
Hi,

I seem to have found a problem with multiple Run statements in a Job
resource.
My setup is a bit unusual: one location has too little storage, so I
set up a chain of migration jobs.

scheduled backup1 Job writes to filestorage1
migrate1 Job moves from filestorage1 to filestorage2 based on volumetime
migrate2 Job moves from filestorage2 to filestorage3 based on volumetime
migrate3 Job moves from filestorage3 to filestorage4 based on volumetime

The preferred execution order would be migrate3, migrate2, migrate1, backup1.
Using priorities to get that order is not practical because there are lots
of other backup jobs. (Last time I tried, I had a weekend full of jobs
waiting for one mount request.)

Run is not recursive, so I tried to use multiple Run statements in the
backup1 Job.
This led to the director spawning a massive number of those migrate
jobs (not 3 as expected, but 20+). I suppose this behaviour is
wrong.

Is there any way to define dependencies between jobs that goes deeper than
"start this single job before this one runs", or will I have to
create staggered schedules with all jobs having the same priority?
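
The staggered-schedule fallback mentioned above could be sketched roughly
like this (a sketch only: the resource names are invented, the time offsets
are arbitrary, and each Job would still need its usual Pool/Client/FileSet/
Messages directives):

```
# All jobs share one Priority, so ordering comes purely from start times.
Schedule {
  Name = "Mig3-Cycle"
  Run = sun-sat at 20:00
}
Schedule {
  Name = "Mig2-Cycle"
  Run = sun-sat at 20:15
}
Schedule {
  Name = "Mig1-Cycle"
  Run = sun-sat at 20:30
}
Schedule {
  Name = "Backup1-Cycle"
  Run = Level=Full sun-sat at 20:45
}

Job {
  Name = "migrate3"
  Type = Migrate
  Schedule = "Mig3-Cycle"
  Priority = 10                # same Priority on every job in the chain
  # Pool, Client, FileSet, Messages etc. omitted in this sketch
}
```

The gaps only need to be wide enough for each job to be queued before the
next one starts; with equal priorities the director should then run them in
submission order.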

thanks for your help
Andreas Bogacki



Re: [Bacula-users] Job Migration on setup with lots of file storages

2009-03-10 Thread Andreas Bogacki
Hi,

I found the problem. I had a mixture of MediaTypes. Old Volumes were
MediaType=File, new ones were MediaType=Storage_1-file.

Everything is fine now. Sorry for the confusion.

Is there a way to change the MediaType of a Volume after it has been
written?
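
As far as I know, bconsole in 2.4 has no supported command to change a
Volume's MediaType, so the usual workaround is to edit the catalog
directly. This is unsupported and purely a sketch (the volume name is
invented); back up the catalog first, and make sure the new MediaType
matches an existing Device definition:

```sql
-- Unsupported workaround: change a Volume's MediaType in the catalog.
-- 'Vol0001' and 'Storage_1-file' are placeholder values.
UPDATE Media
   SET MediaType = 'Storage_1-file'
 WHERE VolumeName = 'Vol0001';
```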

cheers
Andreas

mimmo lariccia wrote:
 Hi, try to download the sample file from this Bacula wiki page:
 -
 http://www.redstar-it.de/hp-storageworks-msl2024-and-hp-lto4-ultrium-1840-drive-with-bacula-2-2-5-2-2-6

 Anyway, I'll try to summarize...

 Director {
   Name = dracula.in.MYDOMAIN-dir
   DIRport = 9101
   QueryFile = /etc/bacula/scripts/query.sql
   WorkingDirectory = /backup/bacula
   PidDirectory = /var/run/bacula
   Maximum Concurrent Jobs = 3
   Password = PASS1
   Messages = Standard # was: Daemon
 }

 I think the Director first of all has to be instructed to run parallel jobs.

 Moreover, you can choose the right retention for your jobs by setting it
 in bacula-dir.conf (as shown above):

 Client {
   Name = ts.in.MYDOMAIN
   Address = IP
   FDPort = 9102
   Catalog = MyCatalog
   Password = CLIENTPASS    # password for FileDaemon
   File Retention = 31 days # one month
   Job Retention = 1 year
   AutoPrune = yes          # Prune expired Jobs/Files
 }

 Finally You can create and manage multiple pools:

 Pool {
   Name = DailyBackups
   Pool Type = Backup
   Recycle = yes              # Bacula can automatically recycle Volumes
   AutoPrune = yes            # Prune expired volumes
   Volume Retention = 31 days # one month
   Recycle Oldest Volume = yes
 #  Maximum Volume Bytes = 8000
   Storage = msl2024
 }

 Pool {
   Name = WeeklyBackups
   Pool Type = Backup
   Recycle = yes              # Bacula can automatically recycle Volumes
   AutoPrune = yes            # Prune expired volumes
   Volume Retention = 31 days # one month
   Recycle Oldest Volume = yes
 #  Maximum Volume Bytes = 8000
   Storage = msl2024
 }

 Pool {
   Name = MonthlyBackups
   Pool Type = Backup
   Recycle = yes               # Bacula can automatically recycle Volumes
   AutoPrune = yes             # Prune expired volumes
   Volume Retention = 365 days # one year
   Recycle Oldest Volume = yes
 #  Maximum Volume Bytes = 8000
   Storage = msl2024
 }

 As I said before: all your needs are in bacula-dir.conf.

 I hope this may help.
 Cheers.

  Date: Wed, 4 Mar 2009 16:55:01 +0100
  From: andr...@bogacki.org
  To: bacula-users@lists.sourceforge.net
  Subject: [Bacula-users] Job Migration on setup with lots of file
 storages
 
  Hi,
 
  I have a setup with one storage device for each client I back up.
  If I got the documentation right, I need to have different Media Types
  for concurrent backups to work.
 
  Device {
  Name = Storage_1-dev
  Media Type = Storage_1-file
  Archive Device = /mount/device_1
  Device Type = File;
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  }
 
  Device {
  Name = Storage_2-dev
  Media Type = Storage_2-file
  Archive Device = /mount/device_2
  #all other options just like first device
  }
 
  Device {
  Name = Migrate_Storage_1-dev
  Media Type = Storage_1-file
  #all other options just like first device
  }
 
  Now when I try to do a migration job from Storage_1-dev to
  Migrate_Storage_1-dev it fails with the following error:
  04-Mar 16:09 backup-sd JobID 219: acquire.c:116 Changing read device.
  Want Media Type=File have=Storage_1-file
  When I set Media Type = File for Storage_1-dev and
 Migrate_Storage_1-dev
  the Migration works.
  So right now, concurrent backups to lots of different disks over a couple
  of network interfaces are bottlenecked by having to set all
  Media Types = File if I want to use Migration. Did I get that right?
  Another question that arises is:
  How can I set up a system with concurrent backups for all clients that
  migrates jobs that need to be archived for a long time (2+ years) on
  one tape-changer device?
 
  Versions used: debian lenny with bacula 2.4.4 (28 December 2008)
  x86_64-pc-linux-gnu debian lenny/sid
 
  cheers
  Andreas Bogacki
 
 


[Bacula-users] Job Migration on setup with lots of file storages

2009-03-04 Thread Andreas Bogacki
Hi,

I have a setup with one storage device for each client I back up.
If I got the documentation right, I need to have different Media Types
for concurrent backups to work.

Device {
  Name = Storage_1-dev
  Media Type = Storage_1-file
  Archive Device = /mount/device_1
  Device Type = File;
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
}

Device {
  Name = Storage_2-dev
  Media Type = Storage_2-file
  Archive Device = /mount/device_2
#all other options just like first device
}

Device {
  Name = Migrate_Storage_1-dev
  Media Type = Storage_1-file
#all other options just like first device
}

Now when I try to do a migration job from Storage_1-dev to 
Migrate_Storage_1-dev it fails with the following error:
04-Mar 16:09 backup-sd JobID 219: acquire.c:116 Changing read device. 
Want Media Type=File have=Storage_1-file
When I set Media Type = File for Storage_1-dev and Migrate_Storage_1-dev 
the Migration works.
So right now, concurrent backups to lots of different disks over a couple
of network interfaces are bottlenecked by having to set all Media Types =
File if I want to use Migration. Did I get that right?
Another question that arises is:
How can I set up a system with concurrent backups for all clients that
migrates jobs that need to be archived for a long time (2+ years) on one
tape-changer device?
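
For the archival part, one possible shape (a sketch only: the resource
names are invented, and directives like Next Pool, Migration Time, and
Selection Type = PoolTime should be checked against the manual for your
version) would be a disk pool that migrates into a tape pool:

```
Pool {
  Name = Disk-Pool
  Pool Type = Backup
  Storage = Storage_1          # hypothetical Storage resource for the disk device
  Next Pool = Archive-Tape     # destination for Migrate jobs
  Migration Time = 7 days      # candidates: volumes older than a week
}

Pool {
  Name = Archive-Tape
  Pool Type = Backup
  Storage = tape-changer       # hypothetical autochanger Storage resource
  Volume Retention = 2 years
}

Job {
  Name = "archive-migrate"
  Type = Migrate
  Pool = Disk-Pool             # source pool; destination comes from Next Pool
  Selection Type = PoolTime    # select volumes by the Pool's Migration Time
  # Client, FileSet, Messages etc. are still required by the Director
}
```

Since a Migrate job reads with one device and writes with another, the
MediaType-matching constraint described above applies to the read side only;
the tape pool can use its own Media Type.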

Versions used: debian lenny with bacula 2.4.4 (28 December 2008) 
x86_64-pc-linux-gnu debian lenny/sid

cheers
Andreas Bogacki
