Re: [Bacula-users] Bacula writing in wrong volumes

2014-09-16 Thread Gean Michel Ceretta
Hi Leo,

On Fri, 2014-09-12 at 15:17 -0300, Leo wrote:
 Gean, are you using multiple catalogs?
No, Leo, I have a single catalog, the default one. See Ana's suggestion
above; it will probably help you.

Best regards,
Gean




Re: [Bacula-users] Bacula writing in wrong volumes

2014-09-15 Thread Gean Michel Ceretta
Hi Ana, 

On Fri, 2014-09-12 at 23:10 -0300, Ana Emília M. Arruda wrote:
 ... You can try to put a RunScript directive on the job definition like
 this:
 RunScript {
 RunsWhen = After
 RunsOnFailure = yes
 Command = /path/myscript.sh
 }
 And you could write some kind of shell script that looks for and deletes the
 volume created but not used by your job. This way Bacula should create a new
 volume for the next job.
 Sorry, I didn't have time to think about the script, but here is my idea
 for solving this situation.

Thanks. Excellent idea, Ana; that way I can write a script that deletes
the volume if its size is small (i.e., an empty one).
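
Something along these lines might do it (just an untested sketch; the
catalog credentials, the storage directory, and the 1 MB threshold for an
"empty" volume below are placeholders for my setup):

#!/bin/sh
# Untested sketch: delete volumes that were labeled but never received
# any job data, so Bacula labels a fresh volume for the next job.
# Placeholder assumptions: MySQL catalog "bacula", File volumes stored
# in STORAGE_DIR, and a label-only volume is well under 1 MB.

POOL="Interna"
STORAGE_DIR="/path/to/storage"          # where the File device writes volumes
MYSQL="mysql -N -u bacula -pXXXXX bacula -e"

$MYSQL "SELECT VolumeName FROM Media M
        JOIN Pool P ON P.PoolId = M.PoolId
        WHERE P.Name = '${POOL}'
          AND M.VolBytes < 1000000;" |
while read VOL; do
    # Remove the catalog record, then the (empty) volume file on disk.
    echo "delete volume=${VOL} yes" | bconsole
    rm -f "${STORAGE_DIR}/${VOL}"
done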

I'll let the configuration I posted before, one volume per day with all
backups from all services, complete a cycle. If that works well, I'll
keep it that way; if not, I'll follow your suggestion.

Best regards,
Gean






Re: [Bacula-users] Bacula writing in wrong volumes

2014-09-12 Thread Ana Emília M. Arruda
Hi Gean,

Just an idea... maybe it will work and do what you want. You can try to
put a RunScript directive on the job definition like this:

RunScript {
RunsWhen = After
RunsOnFailure = yes
Command = /path/myscript.sh
}

And you could write some kind of shell script that looks for and
deletes the volume created but not used by your job. This way Bacula
should create a new volume for the next job.

Sorry, I didn't have time to think about the script, but here is my
idea for solving this situation.

Regards,

Ana



On Thu, Sep 11, 2014 at 9:05 AM, Gean Michel Ceretta 
engenharia_g...@inobram.com.br wrote:

 Dear Kern,

 On Tue, 2014-09-09 at 17:21 +0200, Kern Sibbald wrote:
   ...If you want to clearly separate Volumes by clients, jobs, or by
  some other criteria, you probably should be using different Pools.
  Different Pools are the most reliable way to ensure that only
  particular jobs, clients, ... go to a particular volume.
 In my current configuration, creating a different pool for each service,
 for each kind of backup, would result in 33 pools.

 What I have done now is to create one volume per day, labeled with the
 year, month, and day, with a use duration of 23h, no recycling, and a
 truncate action on purge, so the purged volumes will not use disk space.
 All the backups of the day will be in that volume, roughly 4 GB, and
 only the Fulls will be about 300 GB.

 It's not the ideal situation I expected, but it will simplify things;
 my pool configuration is attached.

 Thanks for your help, best regards.







Re: [Bacula-users] Bacula writing in wrong volumes

2014-09-11 Thread Gean Michel Ceretta
Dear Kern,

On Tue, 2014-09-09 at 17:21 +0200, Kern Sibbald wrote:
  ...If you want to clearly separate Volumes by clients, jobs, or by
 some other criteria, you probably should be using different Pools.
 Different Pools are the most reliable way to ensure that only
 particular jobs, clients, ... go to a particular volume.
In my current configuration, creating a different pool for each service,
for each kind of backup, would result in 33 pools.

What I have done now is to create one volume per day, labeled with the
year, month, and day, with a use duration of 23h, no recycling, and a
truncate action on purge, so the purged volumes will not use disk space.
All the backups of the day will be in that volume, roughly 4 GB, and
only the Fulls will be about 300 GB.

It's not the ideal situation I expected, but it will simplify things;
my pool configuration is attached.

Thanks for your help, best regards.

Pool {
  Name = Interna
  Pool Type = Backup
  Storage = Interno
  Recycle = no
  AutoPrune = yes
  Volume Use Duration = 23h
  Volume Retention = 60 days
  Action On Purge = Truncate
  LabelFormat = ${Year}-${Month}-${Day}
  Next Pool = Externa
}

Pool {
  Name = Externa
  Pool Type = Backup
  Storage = Externo
  Recycle = no
  AutoPrune = yes
  Volume Use Duration = 23h
  Volume Retention = 60 days
  Action On Purge = Truncate
  LabelFormat = ${Year}-${Month}-${Day}
}



[Bacula-users] Bacula writing in wrong volumes

2014-09-09 Thread Gean Michel Ceretta
Dear users,

I'm using file-based backups and configured Bacula [bacula-dir Version:
5.2.13 (19 February 2013)] to write one volume per job, with a unique
label containing the client name, date, and time, as shown below in the
attached config files.

The problems are:

1) Why does Bacula create a volume for a job if the job has no data to be
backed up? That causes the created volume to stay open for writing, which
causes another problem:

2) Instead of creating a new volume for the next client to be backed up,
Bacula writes to the empty volume created in problem 1.

I'm dealing with a situation where a client named Git has no
modifications, but a volume Git-{date}... was created.

Then the next job starts, for the client Samba, and its data ends up
being recorded on the volume Git-{date}..., because the empty volume
created by the previous job, which had no modifications, was not closed.

What have I done wrong? Any suggestions?



Director {  # define myself
  Name = bacula-dir
  DIRport = 9101  # where we listen for UA connections
  QueryFile = /etc/bacula/query.sql
  WorkingDirectory = /var/spool/bacula
  PidDirectory = /var/run
  Maximum Concurrent Jobs = 1 
  Password = X # Console password
  Messages = Daemon
}

# Definition of clients and jobs
@/etc/bacula/bacula-dir-clients-and-jobs.conf

# Definition of filesets
@/etc/bacula/bacula-dir-filesets.conf

# Definition of pools
@/etc/bacula/bacula-dir-pools.conf

# Backup schedules
Schedule {
  Name = WeeklyCycle
  Run = Full 1st sun at 23:05
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sat at 23:05
}

# Catalog backup schedule
Schedule {
  Name = WeeklyCycleAfterBackup
  Run = Full sun-sat at 23:10
}

# Definition of file storage device

Storage {
  Name = Interno
  Address = 192.168.23.3
  SDPort = 9103
  Password = X
  Device = Interno
  Media Type = File
}

Storage {
  Name = Externo
  Address = 192.168.23.3
  SDPort = 9103
  Password = X
  Device = Externo
  Media Type = File
}




# Generic catalog service
Catalog {
  Name = MyCatalog
# Uncomment the following line if you want the dbi driver
# dbdriver = "dbi:postgresql"; dbaddress = 127.0.0.1; dbport =
  dbdriver = "dbi:mysql"; dbname = "bacula"; dbuser = "bacula"; dbpassword =
}

Messages {
  Name = Standard
  mailcommand = "/usr/sbin/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
  operatorcommand = "/usr/sbin/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
  mail = root@localhost = all, !skipped
  operator = root@localhost = mount
  console = all, !skipped, !saved
  append = /var/log/bacula/bacula.log = all, !skipped
  catalog = all
}

Messages {
  Name = Daemon
  mailcommand = "/usr/sbin/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
  mail = root@localhost = all, !skipped
  console = all, !skipped, !saved
  append = /var/log/bacula/bacula.log = all, !skipped
}

#
# Restricted console used by tray-monitor to get the status of the director
#
Console {
  Name = bacula-mon
  Password = X
  CommandACL = status, .status
}
########## Clients ##########

Client {
  Name = bacula-fd
  Address = localhost
  Password = X
  @/etc/bacula/basic-client.conf 
}

Client {
  Name = git-fd
  Address = 192.168.23.21
  Password = X  
  @/etc/bacula/basic-client.conf
}

Client {
  Name = NFS-fd
  Address = 192.168.23.23
  Password = X  
  @/etc/bacula/basic-client.conf
}

Client {
  Name = redmine-fd
  Address = 192.168.23.21
  Password = X
  @/etc/bacula/basic-client.conf
}

Client {
  Name = samba-fd
  Address = 192.168.23.25
  Password = X 
  @/etc/bacula/basic-client.conf
}

#Client {
#  Name = teste-fd
#  Address = 192.168.23.34
#  Password = X  
#  @/etc/bacula/basic-client.conf
#}

Client {
  Name = tin-srv2-fd
  Address = 192.168.23.2
  Password = X
  @/etc/bacula/basic-client.conf
}


### JobDefs ###

JobDefs {
  Name = DefaultJob
  Type = Backup
  Schedule = WeeklyCycle
  Messages = Standard
  Pool = Interna
  Priority = 10
  Write Bootstrap = /var/spool/bacula/%c.bsr
}

########## Jobs ##########
Job {
  Name = Catalog
  JobDefs = DefaultJob
  Client = bacula-fd
  Level = Full
  FileSet=Catalog
  Schedule = WeeklyCycleAfterBackup
  ClientRunBeforeJob = "/usr/libexec/bacula/make_catalog_backup.pl MyCatalog"
  Write Bootstrap = /var/spool/bacula/%n.bsr
  Priority = 11
}

Job {
  Name = Copia-externa
  
  # Copied from the DefaultJob JobDefs:
  Messages = Standard
  Pool = Interna
  Write Bootstrap = /var/spool/bacula/%c.bsr

  Client = bacula-fd
  FileSet = "Full Set"
  Type = Copy
  Selection Type = PoolUncopiedJobs
  Priority = 12
} 

Job {
  Name = Director
  JobDefs = DefaultJob
  Level = Incremental
  

Re: [Bacula-users] Bacula writing in wrong volumes

2014-09-09 Thread Kern Sibbald

  
  
Hello,

The problem is basically that you have one idea about how to do
backups and Bacula has another.  You want Bacula to put data
on the *exact* Volume that you specify.  The whole logic of Bacula
is that Bacula will deal with what is on the volumes and it will
choose the volumes it wants, within certain constraints set up by
the administrator.  This is not to say that Bacula is right and
you are wrong, or vice versa, but that your views of volumes are
different.

At the beginning of a job, if Bacula needs to label a volume, it
will, and then it will proceed with the backup.  Only at the end of
the job does Bacula learn that there was nothing to back up.

In general it is difficult to force Bacula to use certain volumes
and only those volumes.  If you want to clearly separate Volumes
by clients, jobs, or some other criteria, you probably should
be using different Pools.  Different Pools are the most reliable
way to ensure that only particular jobs, clients, ... go to a
particular volume.
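
For example, a dedicated pool per client could look something like this
(the names below are only illustrative, based on your attached
configuration, where git-fd, Interno, and "Full Set" already exist):

Pool {
  Name = Interna-Git              # illustrative: one pool per client
  Pool Type = Backup
  Storage = Interno
  Volume Use Duration = 23h
  Volume Retention = 60 days
  Action On Purge = Truncate
  LabelFormat = Git-${Year}-${Month}-${Day}
}

Job {
  Name = BackupGit
  JobDefs = DefaultJob
  Client = git-fd
  FileSet = "Full Set"
  Pool = Interna-Git              # only this job writes to these volumes
}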
  
Best regards,
Kern
  
  On 09/09/2014 04:31 PM, Gean Michel Ceretta wrote:


  Dear users,

I'm using file-based backups and configured Bacula [bacula-dir Version:
5.2.13 (19 February 2013)] to write one volume per job, with a unique
label containing the client name, date, and time, as shown below in the
attached config files.

The problems are:

1) Why does Bacula create a volume for a job if the job has no data to be
backed up? That causes the created volume to stay open for writing, which
causes another problem:

2) Instead of creating a new volume for the next client to be backed up,
Bacula writes to the empty volume created in problem 1.

I'm dealing with a situation where a client named Git has no
modifications, but a volume Git-{date}... was created.

Then the next job starts, for the client Samba, and its data ends up
being recorded on the volume Git-{date}..., because the empty volume
created by the previous job, which had no modifications, was not closed.

What have I done wrong? Any suggestions?



  
  
  



  

