Re: [Bacula-users] bizarre problem with bacula storages

2015-11-24 Thread stefano scotti
Hi Wanderlei,

I'm grateful for your help.
The volumes are in storage_good, and your solution did the trick.

I wonder how it is possible that every StorageId reference of the good
jobs ended up completely wrong...

Maybe it was some query I ran in the past that accidentally created this
situation.

Thank you again.
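For anyone hitting the same inconsistency: the catalog repair quoted below can be rehearsed against a throwaway SQLite copy of the affected tables before touching the live MySQL catalog. This is only a sketch; the table and column names match the Bacula schema, but the JobId and the sample rows are made up:

```python
import sqlite3

# In-memory stand-in for the three catalog tables involved in the repair.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Storage  (StorageId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Media    (MediaId INTEGER PRIMARY KEY, StorageId INTEGER);
CREATE TABLE JobMedia (JobId INTEGER, MediaId INTEGER);
""")
db.executemany("INSERT INTO Storage VALUES (?, ?)",
               [(1, "storage_bad"), (2, "storage_good")])
# Volumes 91 and 92 wrongly reference storage_bad; volume 93 is fine.
db.executemany("INSERT INTO Media VALUES (?, ?)",
               [(91, 1), (92, 1), (93, 2)])
db.executemany("INSERT INTO JobMedia VALUES (?, ?)",
               [(1988, 91), (1988, 92)])

# The repair: point every volume used by the affected job(s) at the
# storage that actually holds them.
good_id = db.execute(
    "SELECT StorageId FROM Storage WHERE Name = 'storage_good'"
).fetchone()[0]
db.execute("""
    UPDATE Media SET StorageId = ?
    WHERE MediaId IN (SELECT DISTINCT MediaId
                      FROM JobMedia WHERE JobId IN (1988))
""", (good_id,))

print(sorted(db.execute("SELECT MediaId, StorageId FROM Media")))
# -> [(91, 2), (92, 2), (93, 2)]
```

On the real catalog, the result can be checked afterwards with "list volumes" in bconsole.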





2015-11-23 22:49 GMT+01:00 Wanderlei Huttel <wanderleihut...@gmail.com>:

> Hi Stefano
>
> Are the old backup volumes in storage_good or storage_bad? If they are
> in the old place (storage_bad), you need to recreate the old storage in
> Bacula.
>
> If they are in storage_good, you can alter the database and change the
> StorageId in the Media table to point to the storage of the old
> backups. I did a test and it worked.
>
>
> # Query to get the storage id
> select StorageId, Name from Storage;
>
> # Query to get the MediaIds used by specific jobs
> # (X1, X2, X3 are JobId numbers)
> select distinct MediaId from JobMedia where JobId in (X1, X2, X3);
>
> # Query to update StorageId in the Media table
> # (XXX is the StorageId; X1, X2, X3 are JobId numbers)
> update Media set
> StorageId = XXX
> where MediaId in ( select distinct MediaId from JobMedia where JobId in
> (X1, X2, X3) );
>
>
> Best Regards Wanderlei
>
>
> 2015-11-23 17:45 GMT-02:00 stefano scotti <scottistefan...@gmail.com>:
>
>> [...]

[Bacula-users] bizarre problem with bacula storages

2015-11-23 Thread stefano scotti
Hello everyone,

I'm using Bacula 5.2; I hope it isn't too old for you to give me an
answer.

The problem is this:

When I try to restore from a job that refers to a storage named
"storage_good", Bacula returns an error saying that the task cannot be
performed because a storage resource cannot be found... but the storage
resource it names is wrong: it is an old storage, named "storage_bad",
that has nothing to do with the job.

This "storage_bad" was deleted some weeks ago because I didn't need it
anymore.

This is the error:

23-nov 19:59 mydir-dir JobId 1988: Start Restore Job
RestoreFiles.2015-11-23_19.59.23_06
23-nov 19:59 mydir-dir JobId 1988: Using Device "good_device"
23-nov 19:59 mydir-dir JobId 1988: Warning: Could not get storage resource
'bad_storage'.
23-nov 19:59 mydir-sd JobId 1988: Fatal error: No Volume names found for
restore.
23-nov 19:59 mydir-fd JobId 1988: Fatal error: job.c:2390 Bad response to
Read Data command. Wanted 3000 OK data
, got 3000 error

23-nov 19:59 mydir-dir JobId 1988: Fatal error: Could not get storage
resource 'bad_storage'.
23-nov 19:59 mydir-dir JobId 1988: Error: Bacula mydir-dir 5.2.6 (21Feb12):
Build OS:   x86_64-pc-linux-gnu debian 7.0
JobId:  1988
Job:RestoreFiles.2015-11-23_19.59.23_06
Restore Client: mydir-fd
Start time: 23-nov-2015 19:59:25
End time:   23-nov-2015 19:59:25
Files Expected: 7
Files Restored: 0
Bytes Restored: 0
Rate:   0.0 KB/s
FD Errors:  1
FD termination status:
SD termination status:  Error
Termination:*** Restore Error ***

23-nov 19:59 mydir-dir JobId 1988: Error: Bacula mydir-dir 5.2.6 (21Feb12):
Build OS:   x86_64-pc-linux-gnu debian 7.0
JobId:  1988
Job:RestoreFiles.2015-11-23_19.59.23_06
Restore Client: mydir-fd
Start time: 23-nov-2015 19:59:25
End time:   23-nov-2015 19:59:25
Files Expected: 7
Files Restored: 0
Bytes Restored: 0
Rate:   0.0 KB/s
FD Errors:  2
FD termination status:
SD termination status:  Error
Termination:*** Restore Error ***


This is the pool/job/storage configuration:

Pool {
  Name = good_pool_full
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 1 year
  Volume Use Duration = 2 days
  Maximum Volumes = 13
  LabelFormat = "good_volume_full_${NumVols}"
  Storage=good_storage
}

Pool {
  Name = good_pool_diff
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 6 months
  Volume Use Duration = 2 days
  Maximum Volumes = 27
  LabelFormat = "good_volume_diff_${NumVols}"
  Storage=good_storage
}


Pool {
  Name = good_pool_inc
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 1 months
  Volume Use Duration = 10 hours
  Maximum Volumes = 26
  LabelFormat = "good_volume_inc_${NumVols}"
  Storage=good_storage
}

Job {
  Name = good_job
  Type = Backup
  Client = good-fd
  FileSet = "good_files"
  Schedule = "good_schedule"
  Storage = good_storage
  Messages = Standard
  Pool = good_pool_full
  Full Backup Pool = good_pool_full
  Differential Backup Pool = good_pool_diff
  Incremental Backup Pool = good_pool_inc
  Priority = 10
  Max Wait Time = 36000  #10h
  Maximum Concurrent Jobs = 1
}

Storage {
  Name = good_storage
  Address = 192.168.1.1
  SDPort = 9103
  Password = "XXX"
  Device = good_device
  Media Type = good_device
}


This is the storage daemon's device configuration:

Device {
  Name = good_device
  Media Type = good_device
  Archive Device = /backups
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
}


This is the output before confirming a restore job:

Run Restore job
JobName: RestoreFiles
Bootstrap:   /var/lib/bacula/mydir-dir.restore.4.bsr
Where:   /restores
Replace: always
FileSet: good_files
Backup Client:   mydir-fd
Restore Client:  mydir-fd
Storage: good_storage
When:2015-11-23 20:42:54
Catalog: good_catalog
Priority:10
Plugin Options:  *None*
OK to run? (yes/mod/no):


This is an example of a volume:

|  91 | good_volume_diff_23 | Used  |   1 |   475,980,380 |    0 |   15,552,000 |   1 |    0 | 0 | good_device | 2015-06-22 00:08:11 |


I can't understand this behaviour. Have you got any idea?

Thank you in advance, and let me know if you need more information.

Re: [Bacula-users] Bacula Loading Tape at the Wrong Library

2014-08-18 Thread stefano scotti
Hello,

I used to have the same problem, and I solved it with a raw SQL query
like this:

update Media set MediaType=new_type where MediaType=old_type;

I didn't find any other method, not even through the bconsole program.

On Mon, Aug 18, 2014 at 1:24 PM, Vinicius Alexandre Pereira de Souza 
vinicius.apso...@gmail.com wrote:

 Hi Heitor,

 Thanks for the help. I tried "update > volume parameters > all volumes
 from all pools", but it didn't work. I don't have an option like
 "update media type" either.


 On Mon, Aug 18, 2014 at 1:14 PM, Heitor Faria hei...@bacula.com.br
 wrote:




 The problem now is that when my jobs run, Bacula tries to use a medium
 with Media Type = LTO-5-LibHP08, but all the media are still configured
 as LTO-5. The following log was generated:

 When I list the media in bconsole, all of them are still LTO-5. After
 searching for solutions, all I found were people suggesting to update
 the values manually in the database. Is there a better way to correct
 the media type on the volumes?



 Hi Vinicius: did you try "update > volume parameters > all volumes
 from all pools"?




 Thanks.


 On Thu, Aug 7, 2014 at 2:39 PM, Roberts, Ben ben.robe...@gsacapital.com
  wrote:

  Hi Vinicius,



 You need to use a unique Media Type for each autochanger, e.g.
 "LTO5-Library1" and "LTO5-Library2". These are arbitrary string values;
 the exact name doesn't matter. Bacula believes a drive in library1 is
 suitable for loading a tape from library2 because the same Media Type
 is used for each.
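In configuration terms, the fix amounts to something like the following sketch for bacula-sd.conf (abridged from the Device resources quoted later in this message; the LibHP09 drive path is a placeholder, since it is not shown in the original post). The matching Storage resources in bacula-dir.conf must use the same Media Type strings:

```
# A LibHP08 drive gets its own Media Type...
Device {
  Name = LibHP08-drive_1
  Drive Index = 0
  Media Type = LTO-5-LibHP08
  Archive Device = /dev/tape/by-id/scsi-35001438016063c05-nst
  AutoChanger = yes
}

# ...and a LibHP09 drive gets a different one, so the Director never
# tries to load a LibHP09 tape into a LibHP08 drive.
Device {
  Name = LibHP09-drive_1
  Drive Index = 0
  Media Type = LTO-5-LibHP09
  Archive Device = /dev/tape/by-id/placeholder-nst   # placeholder path
  AutoChanger = yes
}
```

Volumes already labeled in each library then need their catalog MediaType updated to match, which is what the raw UPDATE query at the top of this thread does.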



 Regards,

 Ben Roberts



 *From:* Vinicius Alexandre Pereira de Souza [mailto:
 vinicius.apso...@gmail.com]
 *Sent:* 07 August 2014 17:39
 *To:* bacula-users@lists.sourceforge.net
 *Subject:* [Bacula-users] Bacula Loading Tape at the Wrong Library



 Hello everybody,

 I'm new to Bacula, and I'm having a problem using different storages in
 the same pool.

 I have two HP tape libraries installed on the storage server (LibHP08
 and LibHP09). I tried to configure some jobs to access both libraries,
 but I'm having trouble with Bacula accessing the wrong library.

 For example, Bacula tries to access volume G00022L5 in slot 20 of
 LibHP08, but it reaches volume H00011L5 in slot 20 of LibHP09.
 Basically, it tries to get a tape from the correct slot, but in the
 wrong library. It generates the following error:



 LibHP 3307 Issuing autochanger unload slot 20, drive 1 command.

  Warning: Director wanted Volume G00022L5.

 Current Volume H00011L5 not acceptable because:

 1998 Volume H00011L5 catalog status is Append, not in Pool.

 Then Bacula unloads the drive, tries to find the correct tape, but
 loads the wrong one, generates the error again, and so on.

 The job never completes, since it never finds the correct tape.

 Some of my Pools:



 Pool {

   Name = machine-Pool-Weekly

   Pool Type = Backup

   Storage = LibHP08, LibHP09

   Recycle = yes

   AutoPrune = yes

   Volume Retention = 34 days

 }



 Pool {

   Name = machine-Pool-Monthly

   Pool Type = Backup

   Storage = LibHP08, LibHP09

   Recycle = yes

   AutoPrune = yes

   Volume Retention = 1825 days

 }



 Devices/Autochangers Config:



 #

 ## An autochanger device with four drives

 ##  Library HP (LibHP08)

 ##

 Autochanger {

   Name = LibHP08_Changer

   Device = LibHP08-drive_1, LibHP08-drive_2, LibHP08-drive_3,
LibHP08-drive_4

   Changer Command = /usr/lib64/bacula/mtx-changer %c %o %S %a %d

   Changer Device = /dev/tape/by-id/scsi-35001438016063c04

 }



 #

 ## An autochanger device with four drives

 ##  Library HP (LibHP09)

 ##

 Autochanger {

   Name = LibHP09_Changer

   Device = LibHP09-drive_1, LibHP09-drive_2, LibHP09-drive_3,
LibHP09-drive_4

   Changer Command = /usr/lib64/bacula/mtx-changer %c %o %S %a %d

   Changer Device = /dev/tape/by-id/scsi-3500143801606395c

 }



 Device {

   Name = LibHP08-drive_1  #

   Drive Index = 0

   Media Type = LTO-5

   Archive Device = /dev/tape/by-id/scsi-35001438016063c05-nst

   AutomaticMount = yes;   # when device opened, read it

   AlwaysOpen = yes;

   RemovableMedia = yes;

   RandomAccess = no;

   AutoChanger = yes

   Alert Command = sh -c 'smartctl -H -l error %c'

   Maximum Changer Wait = 600

   Maximum Concurrent Jobs = 1

   LabelMedia = yes

 }



 Device {

   Name = LibHP08-drive_2  #

   Drive Index = 1

   Media Type = LTO-5

   Archive Device = /dev/tape/by-id/scsi-35001438016063c08-nst

   AutomaticMount = yes;   # when device opened, read it

   AlwaysOpen = yes;

   RemovableMedia = yes;

   RandomAccess = no;

   AutoChanger = yes

   Alert Command = sh -c 'smartctl -H -l error %c'

   Maximum Changer Wait = 600

   Maximum Concurrent Jobs = 1

   LabelMedia = yes

 }



 Device {

   Name = LibHP08-drive_3  #

   Drive Index = 2

   Media Type = LTO-5

   Archive Device = /dev/tape/by-id/scsi-35001438016063c0b-nst

   AutomaticMount = yes;   # when device opened, 

Re: [Bacula-users] Restores fail because of multiple storages

2013-12-17 Thread stefano scotti
2013/12/17 Uwe Schuerkamp uwe.schuerk...@nionex.net

 On Mon, Dec 16, 2013 at 05:09:18PM +0100, stefano scotti wrote:
 
 
  [...]

 Hi Stefano,

 some soft links will help here so bacula will find the volumes it
 expects in their proper directories.

 We once ran a setup similar to yours, but have since moved all volumes
 (incrementals and fulls) to a single directory.

 Just create a soft link to the volume that bacula expects in the
 directory it's looking for and you should be good; not pretty, but it
 should work ;)

 Uwe


That seems like a working solution, but creating a symbolic link for
every volume required by a restore job introduces a manual step that
would be better avoided, especially when a lot of incremental volumes
are involved.

I suppose that using vchanger, suggested by John in this thread, or the
disk-changer script provided by Bacula, is the only way to automate the
creation of the symbolic links.
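That said, the per-restore linking can be scripted without a full autochanger setup. A rough sketch (the function and paths are hypothetical; in practice the volume names would come from the restore job's bootstrap file or a catalog query):

```python
import os

def link_volumes(volume_names, search_dirs, read_dir):
    """Symlink each required volume into the directory the SD reads from.

    search_dirs: directories that may hold the volume files
    read_dir:    the Archive Device directory used by the restore device
    """
    for name in volume_names:
        target = os.path.join(read_dir, name)
        if os.path.lexists(target):
            continue  # already present where the SD expects it
        for d in search_dirs:
            src = os.path.join(d, name)
            if os.path.isfile(src):
                os.symlink(src, target)
                break
        else:
            print(f"warning: volume {name} not found in {search_dirs}")

# Hypothetical layout mirroring this thread, one directory per level:
# link_volumes(["JobName_diff_5"],
#              ["/path/to/storage/JobName_diff",
#               "/path/to/storage/JobName_inc"],
#              "/path/to/storage/JobName_full")
```

Run something like this before confirming the restore; the links can be removed afterwards.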

But, in my opinion, an autochanger is too tape-specific a concept: it
introduces a lot of complexity that a filesystem-based solution does
not really need. I'd prefer not to deal with redundant configuration
tasks performed exclusively to work around architectural gaps in the
software... but of course I will if I'm obliged to: in my environment
it is important to have different directories for each job level.

Do you confirm, Uwe, that an autochanger is the only way to have
working, automated restore jobs with volumes in different directories?

Thank you again.
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restores fail because of multiple storages

2013-12-16 Thread stefano scotti
2013/12/16 Uwe Schuerkamp uwe.schuerk...@nionex.net

 On Fri, Dec 13, 2013 at 04:43:29PM +0100, stefano scotti wrote:
  [...]
 
 
 

 Hi Stefano,

 in order for bacula to be able to assemble all required volumes from
 different storages during a restore involving multiple volumes, you
 need to make sure all storages are using an identical media type.

 All the best, Uwe

 --
 NIONEX --- Ein Unternehmen der Bertelsmann SE  Co. KGaA



Hi Uwe,

Thanks for your reply; that solution seems a lot easier than the
autochanger one. I made the MediaType the same for every storage and
device, and the situation got better, but it isn't completely solved
yet.


Now Bacula produces this message:


16-dic 14:09 thisdir JobId 609: Start Restore Job
RestoreFiles.2013-12-16_14.09.30_03
16-dic 14:09 thisdir JobId 609: Using Device JobName_full
16-dic 14:09 thisdir-sd JobId 609: Ready to read from volume
JobName_full_1 on device JobName_full (/path/to/storage/JobName_full).
16-dic 14:09 thisdir-sd JobId 609: Forward spacing Volume JobName_full_1
to file:block 0:231.
*messages
16-dic 14:09 thisdir-sd JobId 609: End of Volume at file 0 on device
JobName_full (/path/to/storage/JobName_full), Volume JobName_full_1
16-dic 14:09 thisdir-sd JobId 609: Warning: acquire.c:239 Read open device
JobName_full (/path/to/storage/JobName_full) Volume JobName_diff_5
failed: ERR=dev.c:568 Could not open: /path/to/storage/JobName_full, ERR=No
such file or directory


Followed by this mount request:

16-dic 14:09 thisdir-sd JobId 609: Please mount Volume JobName_diff_5 for:
Job:  RestoreFiles.2013-12-16_14.09.30_03
Storage:  JobName_full (/path/to/storage/JobName_full)
Pool: thisdir_catalog
Media type:   JobName


The problem here is that the different storages refer to different
directories, and Bacula expects to find the new volume in the same
directory as the old one.
Now that the MediaType is the same, no fatal errors are raised, but
Bacula is not really changing the storage: it is merely misled into
thinking that the new volume will be found in the same storage as the
old one, which it won't be.

Can you tell me, Uwe, whether I'm wrong on that point and, if not,
whether there is any solution for it?

Thank you.


Re: [Bacula-users] Restores fail because of multiple storages

2013-12-13 Thread stefano scotti
I took a step forward.

I understood that the problem arises only when I try to restore a
directory whose files are spread across different volumes.
If I try to restore a single file, I don't have any problem, because
Bacula is able to identify the correct volume associated with that
file.

But if the files are in different volumes, and the volumes are stored
in different storages, Bacula fails to restore the files with the error
I posted before.

I can observe this behaviour in Bacula 5.0.2 and in Bacula 5.2.6.

Is it a bug, an architectural error, or a configuration issue?

Thanks again.






2013/12/10 stefano scotti scottistefan...@gmail.com

 [...]






Re: [Bacula-users] Restores fail because of multiple storages

2013-12-13 Thread stefano scotti
2013/12/13 John Drescher dresche...@gmail.com




 On Fri, Dec 13, 2013 at 10:43 AM, stefano scotti 
 scottistefan...@gmail.com wrote:

 [...]



 I'd say a combination of by-design behaviour and a configuration
 issue. To avoid this problem, use a virtual disk autochanger.

 John



Thank you, John, for the clarification.
Do you know how I can set up a virtual disk autochanger, or where I can
find some documentation about it?

Thank you again.


[Bacula-users] Restores fail because of multiple storages

2013-12-10 Thread stefano scotti
Hello,

I have different storages for different backup levels:

Full: JobName_full
Differential: JobName_diff
Incremental: JobName_inc

The problem is that when I try to restore a job that involves different
storages, Bacula fails to change the storage device, issuing this kind
of error:

10-dic 17:44 thisdir-dir JobId 762: Start Restore Job
RestoreFiles.2013-12-10_17.44.30_18
10-dic 17:44 thisdir-dir JobId 762: Using Device JobName_full
10-dic 17:44 thisdir-sd JobId 762: Ready to read from volume
JobName_full_0 on device JobName_full (/path/to/storage/JobName_full).
10-dic 17:44 thisdir-sd JobId 762: Forward spacing Volume JobName_full_0
to file:block 1:711422012.
10-dic 17:46 thisdir-sd JobId 762: End of Volume at file 11 on device
JobName_full (/path/to/storage/JobName_full), Volume JobName_full_0
10-dic 17:46 thisdir-sd JobId 762: acquire.c:121 Changing read device. Want
Media Type=JobName_diff have=JobName_full
  device=JobName_full (/path/to/storage/JobName_full)
10-dic 17:46 thisdir-sd JobId 762: Fatal error: acquire.c:182 No suitable
device found to read Volume JobName_diff_6
10-dic 17:46 thisdir-sd JobId 762: Fatal error: mount.c:865 Cannot open
Dev=JobName_full (/path/to/storage/JobName_full), Vol=JobName_diff_6
10-dic 17:46 thisdir-sd JobId 762: End of all volumes.

Do you know how to solve this problem?

TIA.


[Bacula-users] Sometimes catalog backups fails with MySql Access Denied

2013-09-19 Thread stefano scotti
Hello list,

My configuration includes a different catalog for every client; every
catalog has its own MySQL user and password.

This configuration works well. I only had to modify one small thing in
make_catalog_backup.pl at line 47:

my $dir_conf=`/usr/sbin/dbcheck -B -c /etc/bacula/bacula-dir.conf -C $cat`;

The "-C $cat" part wasn't present originally.

But I can't understand why Bacula sometimes mixes up the catalogs'
usernames, causing the backup to fail.

This is the error:

19-set 08:20 this-director-dir JobId 1700: shell command: run BeforeJob
/etc/bacula/scripts/make_catalog_backup.pl CLIENT_A_catalog
19-set 08:20 this-director-dir JobId 1700: BeforeJob: mysqldump: Got error:
1044: Access denied for user 'CLIENT_B_fd'@'localhost' to database
'CLIENT_A_catalog' when selecting the database
19-set 08:20 this-director-dir JobId 1700: Error: Runscript: BeforeJob
returned non-zero status=2. ERR=Child exited with code 2
19-set 08:20 this-director-dir JobId 1700: Error: Bacula this-director-dir
5.0.2 (28Apr10): 19-set-2013 08:20:00
  Build OS:   x86_64-pc-linux-gnu debian 6.0.6
  JobId:  1700
  Job:CLIENT_A_catalog.2013-09-19_08.20.00_10
  Backup Level:   Full
  Client: this-director-fd 5.0.2 (28Apr10)
  FileSet:CLIENT_A_catalog 2013-02-19 08:20:00
  Pool:   CLIENT_A_catalog (From Job resource)
  Catalog:this-director_catalog (From Client resource)
  Storage:CLIENT_A_catalog (From Job resource)
  Scheduled time: 19-set-2013 08:20:00
  Start time: 19-set-2013 08:20:00
  End time:   19-set-2013 08:20:00
  Elapsed time:   0 secs
  Priority:   11
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s):
  Volume Session Id:  0
  Volume Session Time:0
  Last Volume Bytes:  0 (0 B)
  Non-fatal FD errors:1
  SD Errors:  0
  FD termination status:
  SD termination status:
  Termination:*** Backup Error ***


As you can see, Bacula is trying to access CLIENT_A_catalog using
CLIENT_B_fd's credentials.
That is wrong: it should always use CLIENT_A_fd to access
CLIENT_A_catalog and CLIENT_B_fd to access CLIENT_B_catalog.
That's why the access-denied error occurs.

This situation happens only sometimes, and I can't understand why.
Have you got any idea?

I can observe this behaviour in version 5.0.2 and version 5.2.6.


[Bacula-users] Wrong storage proposed while restoring

2013-06-24 Thread stefano scotti
Hello list,

I have this annoying situation: while restoring, after selecting the
destination client, Bacula proposes the following restore job
parameters:

JobName: RestoreFiles
Bootstrap:   /var/lib/bacula/hagrid-dir.restore.11.bsr
Where:   /home/restores
Replace: always
FileSet: client_3
Backup Client:   client_1-fd
Restore Client:  client_1-fd
Storage:  client_2
When:2013-06-24 12:55:17
Catalog: client_1
Priority:10
Plugin Options:  *None*


But the storage is wrong! It should be the storage of client_1, not
client_2.
This mismatch causes the job to fail.
I have to modify this parameter manually every time I restore a file.

Actually, the Storage parameter seems to be picked at random by Bacula:
sometimes it is client_1, sometimes client_2, sometimes client_3,
etc...

Is this intended behaviour? And if it is, why?

Thank you very much.


Re: [Bacula-users] Wrong storage proposed while restoring

2013-06-24 Thread stefano scotti
2013/6/24 James Harper james.har...@bendigoit.com.au

  [...]

 What is the definition of your RestoreFiles job?

 James


Here it is:

Job {
  Name = RestoreFiles
  Type = Restore
  Client=client_3-fd
  FileSet=client_3-fd
  Storage = client_3
  Pool = client_3
  Messages = Standard
  Where = /home/restores
}


[Bacula-users] verify bacula compression

2013-04-10 Thread stefano scotti
Hello users,

I've just added the directive compression=GZIP to all my FileSets.

Now I suppose that all new backups will be compressed... but how can I
be sure of that?

I noticed that the job messages don't say anything about compression
when the job has finished.

How is it possible to verify that the compression is actually being
performed?

Thanks, everybody.


[Bacula-users] Migrate from 5.0.2 to 5.2.6 didn't upgrade all my catalogs

2013-04-08 Thread stefano scotti
Hello Bacula users,

I'm not able to use my old catalogs anymore because the upgrade process
upgraded only the database named bacula and not the other ones.

I used the dist-upgrade from Debian 6 to Debian 7.

Do you know where i can find the scripts to migrate a mysql catalog from
5.0.2 to 5.2.6 ?

Thank you in advance.


-- 

 Please consider the environment before printing this email


Re: [Bacula-users] Migrate from 5.0.2 to 5.2.6 didn't upgrade all my catalogs

2013-04-08 Thread stefano scotti
2013/4/8 stefano scotti scottistefan...@gmail.com


 Hello Bacula users,

 I'm not able to use my old catalogs anymore because the upgrade process
 upgraded only the database named bacula and not the other ones.

 I used the dist-upgrade from Debian 6 to Debian 7.

 Do you know where i can find the scripts to migrate a mysql catalog from
 5.0.2 to 5.2.6 ?

 Thank you in advance.


 --

  Please consider the environment before printing this email



Ok, I've found the solution.

The script to update catalogs from 5.0.X to 5.2.X is located at:
 /usr/share/bacula-director/update_mysql_tables

It lets you specify which database to update through its first argument.

Thanks anyway for the attention.
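Running it across several catalogs can be scripted; a sketch (the catalog
names are invented, and the function below is a stand-in for the real
/usr/share/bacula-director/update_mysql_tables script so the loop runs
anywhere):

```shell
# Stand-in for /usr/share/bacula-director/update_mysql_tables; the real
# script upgrades the schema of the database named in its first argument.
update_mysql_tables() { echo "upgrading catalog: $1"; }

# Hypothetical per-client catalogs plus the default one.
for db in bacula client1_catalog client2_catalog; do
    update_mysql_tables "$db"
done
```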








-- 

 Please consider the environment before printing this email


[Bacula-users] Error: sql_create.c Volume XXX already exist

2013-04-08 Thread stefano scotti
Hello users,

I'm not able to run a full job because bacula gives me this error:

08-apr 16:37 hagrid-dir JobId 95: Start Backup JobId 95,
Job=angel_highsites.2013-04-08_16.37.42_09
08-apr 16:37 hagrid-dir JobId 95: Error: sql_create.c:424 Volume
angel_highsites_full_1 already exists.
08-apr 16:37 hagrid-dir JobId 95: Using Device angel_highsites_full
08-apr 16:37 hagrid-dir JobId 95: Error: sql_create.c:424 Volume
angel_highsites_full_1 already exists.
08-apr 16:37 hagrid-dir JobId 95: Error: sql_create.c:424 Volume
angel_highsites_full_1 already exists.
08-apr 16:37 yoda-sd JobId 95: Job angel_highsites.2013-04-08_16.37.42_09
is waiting. Cannot find any appendable volumes.
Please use the label command to create a new Volume for:
Storage:  angel_highsites_full
(/home/backs/angel/sites/highsites/highsites_full)
Pool: angel_highsites_full
Media type:   angel_highsites_full

I tried to label a new volume, with no luck.
I've read that this might be a bug in version 5.0.2; that's why I upgraded
to version 5.2.6, but the error remained.

Does anyone have a tip?

Thanks in advance.
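A useful first diagnostic is to check whether the volume row already
exists in the catalog. A sketch, with the mysql client stubbed out so the
example is self-contained (the row shown is invented sample data; in
practice, run the query against the real bacula database):

```shell
# Stub standing in for the mysql client so this sketch is runnable
# anywhere; it prints an invented sample row.
mysql() { printf 'MediaId\tVolumeName\tVolStatus\n42\tangel_highsites_full_1\tFull\n'; }

# If this returns a row, the Director is right that the volume name is
# taken: a new volume needs a different name, or the old row must be
# pruned/purged first.
mysql bacula -e "SELECT MediaId, VolumeName, VolStatus FROM Media
                 WHERE VolumeName = 'angel_highsites_full_1';"
```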


[Bacula-users] strange Access Denied error while backing up catalogs

2013-03-08 Thread stefano scotti
Hi,

I have a catalog for each client.
Sometimes when I back up a catalog I receive a fatal error, for example:

08-mar 08:20 this-dir JobId 73: shell command: run BeforeJob
/etc/bacula/scripts/make_catalog_backup.pl FIRSTCLIENT_catalog
08-mar 08:20 this-dir JobId 73: BeforeJob: mysqldump: Got error: 1044:
Access denied for user 'SECONDCLIENT_fd'@'localhost' to database
'FIRSTCLIENT_catalog' when selecting the database
08-mar 08:20 this-dir JobId 73: Error: Runscript: BeforeJob returned
non-zero status=2. ERR=Child exited with code 2


I don't know why Bacula randomly mixes up database names and user
credentials.
In this case 'FIRSTCLIENT'@'localhost' has permissions on
FIRSTCLIENT_catalog, and 'SECONDCLIENT_fd'@'localhost' has permissions on
SECONDCLIENT_catalog.

If I execute

dbcheck -B -c /etc/bacula/bacula-dir.conf -C FIRSTCLIENT_catalog
or
dbcheck -B -c /etc/bacula/bacula-dir.conf -C SECONDCLIENT_catalog

It gives me the correct credentials.

So why does Bacula sometimes produce this error? Is it a bug?


Thanks.



-- 

 Please consider the environment before printing this email


Re: [Bacula-users] Problem with concurrent mixed priority jobs

2013-03-06 Thread stefano scotti
2013/3/6 Uwe Schuerkamp uwe.schuerk...@nionex.net

 On Tue, Mar 05, 2013 at 04:07:36PM +0100, stefano scotti wrote:
 
  I'm using mixed priorities because i want to handle the case in which all
  job slots are occupied.
 
  For example, if there are only a free slot i'd like to assign it to a
 more
  important job (like mailboxes) instead of a not critical job (like server
  configurations).
 
  Are you suggesting that, because of bacula behavior, i should increment
 the
  number of slots instead of assign priorities to the critical jobs?
 
  I don't like very much this solution... a lot of job will eat my
 bandwidth
  slowing every job scheduled in that time, included the critical ones that
  should be completed as fast as possible!
  That's exactly what i want to avoid.
 
  Thank you again.
 

 Why don't you use different schedules and start the important jobs
 ahead of the slow ones? You an also split slow jobs into separate jobs
 (filesets) to have them finish in less than 24 hours. Another option
 could be to rsync the slow clients to on-disk storage and then backing
 up the local fs on the bacula server (that's how I handled two
 especially slow clients with millions of small files).

 Cheers, Uwe
 --
 NIONEX --- Ein Unternehmen der Bertelsmann SE & Co. KGaA




Thanks Uwe,

I think I will use one of your workarounds, even though they are just
workarounds.

The best solution would be to allow lower priorities to mix in even while a
higher-priority job is running.
Something like an Allow Lower Mixed Priority directive.
But if Bacula's developers didn't think about that, I sadly have to conform.

Thank you again.


-- 

 Please consider the environment before printing this email


Re: [Bacula-users] List all JOBS of every CATALOGS

2013-03-05 Thread stefano scotti
2013/3/5 Dan Langille d...@langille.org


 On Mar 4, 2013, at 5:00 PM, stefano scotti wrote:

  Hi,
 
  is it possible to list jobs for every catalogs?
 
  the command list jobs list only the jobs of the catalog selected by
  the command use.

 Initial guess: no.

 The Director deals with one Catalog/database at a time.

 why not script it?

 for catalog in $catalogs
 do
   magic
 done

 etc…?

 --
 Dan Langille - http://langille.org



What a pity...

I'd like to use the standard bacula console commands instead of doing bash
tricks every time.

Should I ask for this feature on the developers list?

Thanks.
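For reference, the loop sketched above can be fleshed out along these
lines (the catalog names are invented, and bconsole is stubbed so the
sketch runs without a Director; drop the stub to run it for real):

```shell
# Stub that echoes its input, standing in for a real bconsole session.
bconsole() { cat; }

# Hypothetical per-client catalogs; each iteration selects a catalog and
# lists its jobs.
for catalog in client1_catalog client2_catalog; do
    printf 'use catalog=%s\nlist jobs\nquit\n' "$catalog" | bconsole
done
```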


[Bacula-users] Problem with concurrent mixed priority jobs

2013-03-05 Thread stefano scotti
Hi,

My director allows 5 concurrent jobs, and each job has *Allow Mixed
Priority* set to yes.

My aim is that even if a slow job takes a lot of time to finish, there will
be 4 other job slots to use, so that the backup system can keep working
despite a really, really slow job.

So I can have 4 slow jobs and my system will keep scheduling fast jobs
anyway.

Now, the problem is that only jobs with a higher priority will be
scheduled; lower priority jobs still have to wait even if there are 4 free
job slots!

Is it possible to instruct Bacula to schedule lower priority jobs based
only on the number of free job slots, without worrying about the priority
of currently running jobs?

Thanks.


Re: [Bacula-users] Problem with concurrent mixed priority jobs

2013-03-05 Thread stefano scotti
2013/3/5 Uwe Schuerkamp uwe.schuerk...@nionex.net

 On Tue, Mar 05, 2013 at 03:19:29PM +0100, stefano scotti wrote:
  Hi,
 
  My director allows 5 concurrent jobs, and each job has *Allow Mixed
 Priority
  * set to yes.
 
  My aim is that even if a slow job takes a lot of time to finish, there
 will
  be other 4 job slots to use so that the backup system can still work
  despite of a really really slow job.
 
  So i can have 4 slow jobs and my system will keep scheduling fast jobs
  anyway.
 
  Now, the problem is that only jobs with an higher priority will be
  scheduled, lower priority jobs still have to wait even if there are 4
 free
  job slots!
 
  Is it possible to instruct bacula to let lower priority job be scheduled
  not worrying about the priority of current scheduled jobs but only on the
  number of free job slots?
 
  Thanks.

 Hi Stefano,

 I don't understand why you're using mixed priorities to begin with as
 these have nothing to do with how fast or slow a job runs. You can
 simply run all jobs at the same priority and a slow job of a different
 priority won't hog your scheduling slots at all. Depending on your
 hardware it's usually safe to up the number of concurrent jobs. We
 generally use 8-16 concurrent jobs on our bacula directors.

 Cheers, Uwe


 --
 NIONEX --- Ein Unternehmen der Bertelsmann SE & Co. KGaA





Hi Uwe,

I'm using mixed priorities because I want to handle the case in which all
job slots are occupied.

For example, if there is only one free slot, I'd like to assign it to a
more important job (like mailboxes) instead of a non-critical job (like
server configurations).

Are you suggesting that, because of Bacula's behavior, I should increase
the number of slots instead of assigning priorities to the critical jobs?

I don't like this solution very much... a lot of jobs would eat my
bandwidth, slowing every job scheduled at that time, including the critical
ones that should be completed as fast as possible!
That's exactly what I want to avoid.

Thank you again.




-- 

 Please consider the environment before printing this email


[Bacula-users] List all JOBS of every CATALOGS

2013-03-04 Thread stefano scotti
Hi,

Is it possible to list jobs for every catalog?

The command list jobs lists only the jobs of the catalog selected by
the command use.

Thanks.



Re: [Bacula-users] expand Since Time on Client Run Before Job

2013-02-08 Thread stefano scotti
2013/2/7 yashmika lamahewage yashmika.lamahew...@gmail.com



 2013/2/6 stefano scotti scottistefan...@gmail.com


 Hi everybody,

 i'm trying to pass the since time as a parameter to my client side script
 before running the job.
 the user guide says that the related substitution character is %s.

 now, i know that not every variable expansion can be done on client side,
 in fact the user guide specifies that list:

 %% = %
 %b = Job Bytes
 %c = Client's name
 %d = Daemon's name (Such as host-dir or host-fd)

 %D = Director's name (Also valid on file daemon)
 %e = Job Exit Status
 %f = Job FileSet (Only on director side)
 %F = Job Files
 %h = Client address
 %i = JobId
 %j = Unique Job id

 %l = Job Level
 %n = Job name
 %p = Pool name (Only on director side)
 %s = Since time
 %t = Job type (Backup, ...)
 %v = Volume name (Only on director side)
 %w = Storage name (Only on director side)

 %x = Spooling enabled? (yes or no)


 as you see %s is not only on directory side.
 despite of that the expansion occurs correctly only with Run Before Job
 and not with Client Run Before Job.
 with the Client Run it always returns *none*.
 with the server side Run it returns -00-00 00:00:00 while
 scheduling full backups and the correct relative data when scheduling
 incremental and differential ones.

 i'm using bacula 5.0.2-2.2+squeeze1.

 am i missing something or it is just a bug?

 thank in advance for your help.


 --

  Please consider the environment before printing this email




 I had that problem too, solved by using ssh in Run Before Job directive
 which runs on director side.
 I followed this deprecated Tips page on section Executing Scripts on a
 Remote Machine:

 http://www.bacula.org/fr/dev-manual/Tips_and_Suggestions.html

 With that method i'm able to send the since time on a client side script.
 I know that this is not the correct way to achieve that, but at least it
 works.

 I hope that someone can explain why with Client Run Before Job directive
 the since time is ever translated with the *none* string.



Yes, I was thinking about the SSH method too, but it is a workaround, not a
solution, and a deprecated one at that.

Can nobody explain to us why something so simple has to be so hard?
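For the record, the deprecated SSH workaround discussed above amounts to
something like this (host, user, and script path are invented examples;
%s expands here because Run Before Job executes on the Director):

```
Job {
  Name = example_backup
  # ... usual Type/Client/FileSet/Storage/Pool directives ...
  # Runs on the Director, where %s expands correctly, then pushes the
  # since time to the client over ssh.
  Run Before Job = "ssh backup@client.example.com /usr/local/bin/pre_backup.sh '%s'"
}
```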


-- 

 Please consider the environment before printing this email


[Bacula-users] expand Since Time on Client Run Before Job

2013-02-06 Thread stefano scotti
Hi everybody,

I'm trying to pass the since time as a parameter to my client-side script
before running the job.
The user guide says that the related substitution character is %s.

Now, I know that not every variable expansion can be done on the client
side; in fact, the user guide gives this list:

%% = %
%b = Job Bytes
%c = Client's name
%d = Daemon's name (Such as host-dir or host-fd)

%D = Director's name (Also valid on file daemon)
%e = Job Exit Status
%f = Job FileSet (Only on director side)
%F = Job Files
%h = Client address
%i = JobId
%j = Unique Job id

%l = Job Level
%n = Job name
%p = Pool name (Only on director side)
%s = Since time
%t = Job type (Backup, ...)
%v = Volume name (Only on director side)
%w = Storage name (Only on director side)

%x = Spooling enabled? (yes or no)


As you can see, %s is not marked as director-side only.
Despite that, the expansion occurs correctly only with Run Before Job and
not with Client Run Before Job.
With the Client Run it always returns *none*.
With the server-side Run it returns 0000-00-00 00:00:00 when scheduling
full backups, and the correct date when scheduling incremental and
differential ones.

I'm using bacula 5.0.2-2.2+squeeze1.

Am I missing something, or is it just a bug?

Thanks in advance for your help.


-- 

 Please consider the environment before printing this email


[Bacula-users] Using more than a single catalog in bconsole

2013-02-05 Thread stefano scotti
Hello list,

I have a catalog for each client, and sometimes I'd like to list all jobs
stored in every catalog at the same time.
I noticed that the use command doesn't permit that:


*help use
  Command       Description
  =======       ===========
  use           Use catalog xxx

Arguments:


It would be nice if there were something like use all.

Is there something like that?

Thank you!


Re: [Bacula-users] Set more than 3 pools in a backup job

2013-01-29 Thread stefano scotti
2013/1/28 Adrian Reyer bacula-li...@lihas.de

 On Mon, Jan 28, 2013 at 04:21:24PM +0100, stefano scotti wrote:
  There are 4 different rotation rules and schedules... not three.
  How am i supposed to solve this?
  With my own scripts, i would define 4 pools, and a job type for each
 pool:
Level 0 Pool  Type:Full
Level 1 Pool  Type:Differential
Level 2 Pool  Type:Incremental
Level 3 Pool  Type:Incremental

 Be careful here what is based on what.
 Differential has all changes since last Full
 Incremental has all changes since the last run of any of
 Full/Differential/Incremental.
 If you run
 1234567123456712345671234567
 FIIDIIDIIDII and delete the first 2 differential, your
 1st 6 Is and last 6 Is would be usable, the 2nd and 3rd 6 Is are useless
 as they have no base to work on.
 Now if you enhance this for your 3-hourly Incrementals (i) a week looks
 like this
 DiiiIiiiIiiiIiiiIiiiIiiiIiii you imagine,
 but it won't work like this, if you delete any of the i, you won't be
 able to completly rstore to any following I or i point in time. It in
 fact actually
 DIII
 I think there is no easy way out if this is about used backup space. One
 way is to replace the I with D and then actually have the i as I.
 Depending on the changed amount of data, you might convert the weeks D
 into F.
 You can try 2 different jobs instead, but you still will have more Full
 backups than you like, you just need to keep the latest F, D and all I
 since the last D for the second set.
 Set1: FIIDIID___I___
 Set2: FxDIIIDIII

 Set1 '_' is just spaced out as here we need more fields for the time
 where the 3-hourly are run. The 'x' are positions that are already
 deleted in the second set.
 You can alter the Pool of a job with the run directive. Check

 http://www.bacula.org/5.2.x-manuals/en/main/main/Configuring_Director.html#SECTION00145

  In my opinion it is wrong to bind the concept of job type and the concept
  of pool, they are 2 different things.
  I use a pool to define a group of  volumes and their rotation rules, i
  specify a job type to define if those volumes will contain an
  incremental,differential or full backup.

 I think it can be used like that.

 Regards,
 Adrian
 --
 LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
 Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
 Mail: li...@lihas.de - Web: http://lihas.de
 Linux, Netzwerke, Consulting & Support - USt-ID: DE 227 816 626 Stuttgart



:O

Thank you Adrian!
I didn't think about that.
So I have to be really, really careful when recycling incremental backups,
because all the other incrementals may rely on them!

I think I'm going to solve this as Dan suggested: I will remove daily
incrementals and use only 3-hourly incrementals; it shouldn't cause much
overhead/wasted space.

Thank you for the tip about the Pool option in the Run directive; now I
know that I can have more than 3 pools per job.



-- 

 Please consider the environment before printing this email


[Bacula-users] Set more than 3 pools in a backup job

2013-01-28 Thread stefano scotti
Hi everybody,

I've been using Bacula for 1 year; before that I used to write my own
scripts.
I see, and correct me if I am wrong, that Bacula allows only a maximum of 3
backup levels:

   Level 0: Full
   Level 1: Differential
   Level 2: Incremental

This means a maximum of 3 pools per job.

Now, if i have to implement this example of recovery schema:

   Every month for a year
   Every week for 6 months
   Every day for a month
   Every 3 hours for 2 days

There are 4 different rotation rules and schedules... not three.
How am i supposed to solve this?

With my own scripts, i would define 4 pools, and a job type for each pool:

  Level 0 Pool  Type:Full
  Level 1 Pool  Type:Differential
  Level 2 Pool  Type:Incremental
  Level 3 Pool  Type:Incremental

Every pool would have its own rotation rules based on the recovery schema,
and there would be a job type for each pool.

In my opinion it is wrong to bind the concept of job type to the concept
of pool; they are 2 different things.
I use a pool to define a group of volumes and their rotation rules, and I
specify a job type to define whether those volumes will contain an
incremental, differential or full backup.

I hope I'm wrong about this and that I simply wasn't able to find the
correct solution that Bacula proposes; if that is the case, please help me
shed light on it.

Thanks.

-- 

 Please consider the environment before printing this email


Re: [Bacula-users] Set more than 3 pools in a backup job

2013-01-28 Thread stefano scotti
Thanks Dan,

I really appreciate your help, and I think you are right: Pools don't
define Levels, but a collection of similar volumes.
Levels are something related to jobs, not to pools, even though a pool
contains volumes of the same job level.

You solved my problem. I didn't know that it was possible to specify
IncrementalPool= in a Run definition; now I know that it is possible to
have more than 3 pools for a job, so I can have all the Levels, or better,
Retention periods that I want.

I have only one more question for you:
why do you think it is useless to run incrementals every three hours and
then again daily?

Hourly backups have different retention periods than daily ones; that's why
I have to store them in different pools.
Thank you again for your help.



2013/1/28 Dan Langille d...@langille.org


 On Jan 28, 2013, at 10:21 AM, stefano scotti wrote:

 
  Hi everybody,
 
  I've been using Bacula for 1 year, before then i used to write my own
 scripts.
  I see, and correct me if i am wrong, that bacula allow only a maximum of
 3 backup levels:
 
 Level 0: Full
 Level 1: Differential
 Level 2: Incremental
 
  This means a maximum of 3 pools per job.

 I consider Pools to be a collection of Volumes with similar attributes.
  Stop thinking about levels.  Start thinking about retention.


  Now, if i have to implement this example of recovery schema:
 
 Every month for a year
 Every week for 6 months
 Every day for a month
 Every 3 hours for 2 days

 I think the above corresponds to

 Every month for a year  FULL
 Every week for 6 months DIFFERENTIAL
 Every day for a month  INCREMENTAL
 Every 3 hours for 2 days INCREMENTAL

 With different retention times on each

 It sounds like you want four pools

 FULL - retain 1 year, scheduled to run monthly
 DIFF - retain 6 months, scheduled to run weekly
 INCRMONTH - retain 1 month, scheduled to run daily
 INCR2DAYS -  retain 2 days, schedule to run every 3 hours

 Does that help you?

 First, what is your goal in running incrementals every 3 hours?  And then
 again daily?

  There are 4 different rotation rules and schedules... not three.
  How am i supposed to solve this?
 
  With my own scripts, i would define 4 pools, and a job type for each
 pool:
 
Level 0 Pool  Type:Full
Level 1 Pool  Type:Differential
Level 2 Pool  Type:Incremental
Level 3 Pool  Type:Incremental

 We came to the same conclusion, but stop thinking about level.

 You should also look at virtual backups.

  Every pool with his own rotation rules which based on the recovery
 schema, and a job type for each pool.
 
  In my opinion it is wrong to bind the concept of job type and the
 concept of pool, they are 2 different things.

 What do you mean by Job Type here?  Job type is backup, restore,
 copy/migrate.

  I use a pool to define a group of  volumes and their rotation rules, i
 specify a job type to define if those volumes will contain an
 incremental,differential or full backup.
 
  I hope i'm wrong with this and that i wasn't able to find the correct
 solution which Bacula proposes, if this is the case please help me to throw
 light on that. \

 I'm guessing at the schedule, and using mine as a starting point:

 Schedule {
   Name = WeeklyCycle
   Run = Level=Full 1st sun at 5:55
   Run = Level=Differential 2nd-5th sun at 5:55
   Run = Level=Incremental  mon-sat at 5:55
 }

 Alter mine to become:

 Schedule {
   Name = WeeklyCycle
   Run = Level=Full 1st sun at 5:55
   Run = Level=Differential 2nd-5th sun at 5:55
   Run = Level=Incremental IncrementalPool=MONTH mon-sat at 5:55
  Run = Level=Incremental IncrementalPool=INCR2DAYS daily at 0, 3, 6, 9,
 12, 15, 18, 21
 }

 That should give you a starting point.  Sorry it's not more complete.
  Others might have better ideas.
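The four pools described above might look like this as Pool resources
(the retention values mirror the list above; the Recycle/AutoPrune
settings are assumptions):

```
Pool {
  Name = FULL
  Pool Type = Backup
  Volume Retention = 1 year
  Recycle = yes
  AutoPrune = yes
}
Pool {
  Name = DIFF
  Pool Type = Backup
  Volume Retention = 6 months
  Recycle = yes
  AutoPrune = yes
}
Pool {
  Name = INCRMONTH
  Pool Type = Backup
  Volume Retention = 1 month
  Recycle = yes
  AutoPrune = yes
}
Pool {
  Name = INCR2DAYS
  Pool Type = Backup
  Volume Retention = 2 days
  Recycle = yes
  AutoPrune = yes
}
```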

 --
 Dan Langille - http://langille.org




-- 

 Please consider the environment before printing this email


Re: [Bacula-users] Set more than 3 pools in a backup job

2013-01-28 Thread stefano scotti
 I think you concluded I thought they were useless because I asked you:

But you didn't answer my question. :)



Yes, I deduced it from your question... so maybe I should ask you whether
you really think it is useless or not.


Noted.  But I wanted to know your goal there….

  Keep in mind that EACH incremental you run will be relative to the last
 backup.  Thus, the incremental that runs daily won't
 have stuff that changed since yesterday. Only the stuff that changed since
 the last incremental 3 hours ago.



My aim is to have daily backups which persist for 1 month, and 3-hourly
backups which persist for only 2 days.
Is there a different way to achieve that, other than using 2 different
pools?

Thank you again.



Re: [Bacula-users] Set more then 3 pools in a backup job

2013-01-28 Thread stefano scotti
2013/1/28 Dan Langille d...@langille.org


 On Jan 28, 2013, at 11:50 AM, stefano scotti wrote:

 2013/1/28 Dan Langille d...@langille.org


 On Jan 28, 2013, at 11:16 AM, stefano scotti wrote:

 2013/1/28 Dan Langille d...@langille.org


 On Jan 28, 2013, at 10:21 AM, stefano scotti wrote:

 
  Hi everybody,
 
  I've been using Bacula for 1 year; before that I used to write my own
 scripts.
  I see, and correct me if I am wrong, that Bacula allows only a maximum
 of 3 backup levels:
 
 Level 0: Full
 Level 1: Differential
 Level 2: Incremental
 
  This means a maximum of 3 pools per job.

 I consider Pools to be a collection of Volumes with similar attributes.
  Stop thinking about levels.  Start thinking about retention.


  Now, if I have to implement this example of recovery schema:
 
 Every month for a year
 Every week for 6 months
 Every day for a month
 Every 3 hours for 2 days

 I think the above corresponds to

 Every month for a year  FULL
 Every week for 6 months DIFFERENTIAL
 Every day for a month  INCREMENTAL
 Every 3 hours for 2 days INCREMENTAL

 With different retention times on each

 It sounds like you want four pools

 FULL - retain 1 year, scheduled to run monthly
 DIFF - retain 6 months, scheduled to run weekly
 INCRMONTH - retain 1 month, scheduled to run daily
 INCR2DAYS - retain 2 days, scheduled to run every 3 hours
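
 [Editor's note: the four pools above could be sketched in bacula-dir.conf
 along these lines. This is a minimal sketch based only on the retentions
 Dan lists; the Pool Type, AutoPrune, and Recycle settings are my
 assumptions, not something stated in the thread.]

 ```
 # Minimal sketch: each pool differs only in Volume Retention.
 Pool {
   Name = FULL
   Pool Type = Backup
   Volume Retention = 1 year   # catalog records kept 1 year after last write
   AutoPrune = yes             # prune expired volumes automatically
   Recycle = yes               # allow pruned volumes to be reused
 }

 # DIFF (Volume Retention = 6 months) and INCRMONTH (Volume Retention =
 # 1 month) follow the same pattern.

 Pool {
   Name = INCR2DAYS
   Pool Type = Backup
   Volume Retention = 2 days
   AutoPrune = yes
   Recycle = yes
 }
 ```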

 Does that help you?

 First, what is your goal in running incrementals every 3 hours?  And
 then again daily?

  There are 4 different rotation rules and schedules... not three.
  How am I supposed to solve this?
 
  With my own scripts, I would define 4 pools, and a job type for each
 pool:
 
Level 0 Pool  Type:Full
Level 1 Pool  Type:Differential
Level 2 Pool  Type:Incremental
Level 3 Pool  Type:Incremental

 We came to the same conclusion, but stop thinking about levels.

 You should also look at virtual backups.

  Every pool has its own rotation rules, based on the recovery
 schema, and a job type for each pool.
 
  In my opinion it is wrong to bind the concept of job type and the
 concept of pool, they are 2 different things.

 What do you mean by Job Type here?  Job type is backup, restore,
 copy/migrate.

  I use a pool to define a group of volumes and their rotation rules, and I
 specify a job type to define whether those volumes will contain an
 incremental, differential, or full backup.
 
  I hope I'm wrong about this and simply wasn't able to find the correct
 solution that Bacula offers; if that is the case, please help me shed
 light on it.

 I'm guessing at the schedule, and using mine as a starting point:

 Schedule {
   Name = WeeklyCycle
   Run = Level=Full 1st sun at 5:55
   Run = Level=Differential 2nd-5th sun at 5:55
   Run = Level=Incremental  mon-sat at 5:55
 }

 Alter mine to become:

 Schedule {
   Name = WeeklyCycle
   Run = Level=Full 1st sun at 5:55
   Run = Level=Differential 2nd-5th sun at 5:55
   Run = Level=Incremental IncrementalPool=INCRMONTH mon-sat at 5:55
   Run = Level=Incremental IncrementalPool=INCR2DAYS daily at 0, 3, 6, 9,
 12, 15, 18, 21
 }
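
 [Editor's note: Dan prefaced this as a guess. My reading of Bacula's
 Schedule syntax — an assumption on my part — is that each Run directive
 takes a single `at hh:mm` time, so the every-3-hours runs would need to
 be spelled out one per line, roughly:]

 ```
 Schedule {
   Name = WeeklyCycle
   Run = Level=Full FullPool=FULL 1st sun at 5:55
   Run = Level=Differential DifferentialPool=DIFF 2nd-5th sun at 5:55
   Run = Level=Incremental IncrementalPool=INCRMONTH mon-sat at 5:55
   # Every 3 hours: one Run line per time of day
   Run = Level=Incremental IncrementalPool=INCR2DAYS daily at 0:00
   Run = Level=Incremental IncrementalPool=INCR2DAYS daily at 3:00
   Run = Level=Incremental IncrementalPool=INCR2DAYS daily at 6:00
   Run = Level=Incremental IncrementalPool=INCR2DAYS daily at 9:00
   Run = Level=Incremental IncrementalPool=INCR2DAYS daily at 12:00
   Run = Level=Incremental IncrementalPool=INCR2DAYS daily at 15:00
   Run = Level=Incremental IncrementalPool=INCR2DAYS daily at 18:00
   Run = Level=Incremental IncrementalPool=INCR2DAYS daily at 21:00
 }
 ```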

 That should give you a starting point.  Sorry it's not more complete.
  Others might have better ideas.


 Thanks Dan,

 I really appreciate your help and I think you are right: Pools don't
 define Levels; a Pool is a collection of similar volumes.
 Levels are something related to jobs, not to pools, even though a pool
 contains volumes of the same job level.

 You solved my problem. I didn't know that it was possible to specify
 IncrementalPool= in a Run definition; now I know that it is possible to
 have more than 3 pools for a job, so I can have all the Levels, or rather
 all the Retention periods, that I want.
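
 [Editor's note: for context, the per-level pool defaults that those
 Run-line overrides replace live in the Job resource. A sketch follows;
 all job/client/fileset/storage names here are hypothetical, not from the
 thread:]

 ```
 Job {
   Name = BackupClient1          # hypothetical job/client/fileset names
   Type = Backup
   Client = client1-fd
   FileSet = "Full Set"
   Schedule = WeeklyCycle
   Storage = File
   Messages = Standard
   Pool = FULL                   # fallback when no override applies
   FullPool = FULL               # per-level defaults; a Run line in the
   DifferentialPool = DIFF       # Schedule (e.g. IncrementalPool=...)
   IncrementalPool = INCRMONTH   # can override these per run
 }
 ```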

 I have only one more question for you:
 why do you think that it is useless to run incrementals every three hours
 and then again daily?




 I think you concluded I thought they were useless because I asked you:

 "What is your goal in running incrementals every 3 hours?  And then
 again daily?"

 But you didn't answer my question. :)



 Yes, I deduced it from your question... so maybe I should ask you whether
 you really think it is useless or not.


 Noted.  But I wanted to know your goal there….

  Keep in mind that EACH incremental you run will be relative to the last
 backup.  Thus, the incremental that runs daily won't have everything that
 changed since yesterday, only what changed since the last incremental
 3 hours earlier.



 My aim is to have daily backups which are kept for 1 month, and 3-hourly
 backups which are kept for only 2 days.
 Is there a different solution to achieve that instead of using 2 different
 pools?


 If you answer down here, or at least inline, people can read from top to
 bottom and follow the discussion.

 The issue is, what do you want in each of those two different backups?

 I *think* you want in the daily backup: everything which has changed since
 yesterday.

 I *think* you want in the 3 hourly backup: everything which has changed