Re: [Bacula-users] Remote Director

2009-08-02 Thread Radek Hladik
James Chamberlain wrote:
> Hi Bacula Users,
> 
> Does anyone have experience setting up Storage Daemons at different  
> sites from the Director?  I'm thinking that it would be really nice to  
> have a central console where I can see and control the backups of  
> other sites, but I don't have the bandwidth to send all the data to  
> the central location.  If I have a backup device and Storage Daemon  
> local to the clients I want to back up, I think their FDs will send  
> the data straight to their local SD, with the traffic to the Director  
> being minimal by comparison.  As near as I can tell from the diagram  
> at the end of the following page, what I want to do should be possible:
> 
> http://www.bacula.org/en/dev-manual/What_is_Bacula.html
> 
> I'm curious for other people's experiences with this sort of setup -  
> how much trouble was it, how much bandwidth does the SD actually use  
> to talk with the Director, what gotchas did you run into, are there  
> any security or permissions issues to keep in mind, etc.  It would be  
> nice if I could give someone at each site the ability to restore their  
> own site's files, but not files from other sites.
> 
> Thanks,
> 
> James

Hi James,
I do not think the setup you describe is anything unusual. The Director 
only contacts the FD and SD, and those two communicate directly. You 
should read the documentation on firewall configuration: 
http://www.bacula.org/en/rel-manual/Dealing_with_Firewalls.html .
I am not sure about the bandwidth requirements. Although the data itself 
passes straight from the FD to the SD, there must be some communication 
with the Director to store the metadata in the catalog. And of course the 
catalog backups will go from the Director to the SD.
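For reference, the Director reaches a remote SD through an ordinary Storage resource; the only requirement is that the Address is reachable (and resolvable) by the remote clients as well. A hypothetical sketch for bacula-dir.conf, where every name, address, and password is an invented example:

```conf
# Sketch only: a Director-side Storage resource for an SD at a remote site.
# All names, addresses, and passwords here are made-up examples.
Storage {
  Name = remote-site-sd
  Address = sd.remote-site.example.com  # must resolve for the remote FDs too
  SDPort = 9103
  Password = "remote-sd-password"       # must match the SD's Director resource
  Device = FileStorage                  # Device name from the remote bacula-sd.conf
  Media Type = File
}
```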

Radek

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Catalog backup retention

2008-10-13 Thread Radek Hladik
John Drescher wrote:
>> I have no problem with backing up the catalog on another server with less
>> expensive storage. But I do not see any use for old catalog backups at all.
>> I would do a bscan into a new catalog in all cases except the one where I
>> have a catalog backup newer than the last data backed up and need to
>> recover it, i.e. after a database server crash or something similar. In all
>> other cases I would not risk the possible complications...
>>
> A couple of years ago I had a database corruption issue where the
> database had become corrupt and I did not notice for over a week.
> The catalog backup and some other backups were running fine, but others
> were failing. If I did not have more than one week of catalogs, it would
> have been much harder to recover: there were 200 volumes or so, and
> manually scanning them to rebuild the catalog would have taken months.
> One other thing I recommend: always keep a bootstrap file for your
> catalogs. Mine are named so that each day I get a new bootstrap
> file, with the job id in its name.
> 
> 
> Job {
>   Name = "BackupCatalog"
>   Type = Backup
>   Client = dev6-fd
>   FileSet = "Catalog"
>   Schedule = "WeeklyCycleAfterBackup"
>   Storage = File
>   Messages = Standard
>   Pool = BackupCatalogs
>   ClientRunBeforeJob = "/usr/libexec/bacula/make_catalog_backup bacula hbroker hbroker"
>   ClientRunAfterJob = /usr/libexec/bacula/delete_catalog_backup
>   WriteBootstrap = "/mnt/vg/backups/BootStrap/%n_%i.bsr"
> }
> 
> 
> John

I keep the bootstrap file too, and I think it is a good idea.
I am using only disk backups, and all jobs are stored in separate files. 
Therefore I assumed bscan should not take significantly more time than 
reading all the backup files once.
I tried this while testing Bacula: I used one machine's backup file, an 
"empty" machine, and a live CD with the Bacula binaries and nothing else. 
I scanned the file into a new catalog and restored the machine. The 
scanning was reasonably fast.
We have a few hundred GB of backups, and at about 50 MB/s it should scan 
something like 100 GB in half an hour. That's why I think scanning all the 
backups would be more reasonable than using an old catalog...
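For anyone wanting to try the same experiment: the rebuild is done with bscan against the SD configuration. A hedged sketch, where the volume names, config path, and device name are examples:

```shell
# Sketch: scan two disk volumes back into a (new, empty) catalog.
# -s stores the file records, -m updates the media records, -v is verbose.
bscan -v -s -m -c /etc/bacula/bacula-sd.conf \
      -V full-0001\|diff-0002 FileStorage
```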

Radek



Re: [Bacula-users] Catalog backup retention

2008-10-13 Thread Radek Hladik
John Drescher wrote:
> On Mon, Oct 13, 2008 at 7:33 AM, Radek Hladik <[EMAIL PROTECTED]> wrote:
>> Hi,
>> I would like to ask how I should recycle catalog backups.
>> I am using disk-based storage, and the catalog pool is by default set to a
>> half-year or so retention period. I would think a week would be more than
>> sufficient.
>> What is an old catalog backup good for? If I have a catalog backup older
>> than the data backups, I need to run bscan from scratch anyway; otherwise
>> the more recent data would not be recognized by Bacula, and on the other
>> hand there may be data in the catalog which are no longer in the data
>> backups...
>>
> 
> I do not keep more than a month or so. However, the server I keep them
> on is not where my regular backups go, so I have tons of space there.
> 
> John

I have no problem with backing up the catalog on another server with less 
expensive storage. But I do not see any use for old catalog backups at 
all. I would do a bscan into a new catalog in all cases except the one 
where I have a catalog backup newer than the last data backed up and need 
to recover it, i.e. after a database server crash or something similar. In 
all other cases I would not risk the possible complications...
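In the crash scenario above, the bootstrap file John mentioned lets you pull just the catalog dump off a volume without any catalog at all, e.g. with bextract. A sketch, where the bootstrap filename, job id, paths, and device name are all examples:

```shell
# Sketch: extract the catalog dump using a saved bootstrap file
# (example names throughout), then reload the dump into the database.
bextract -b /mnt/vg/backups/BootStrap/BackupCatalog_1234.bsr \
         -c /etc/bacula/bacula-sd.conf FileStorage /tmp/restore
```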

Radek



Re: [Bacula-users] Deleting from disk based pool

2008-10-13 Thread Radek Hladik
John Drescher wrote:
> On Mon, Oct 13, 2008 at 7:26 AM, Radek Hladik <[EMAIL PROTECTED]> wrote:
>> Hi,
>> I'm backing up to disk-based storage; every volume has its maximum job
>> count set to one, so every pool has one file per job. Each pool is
>> configured with a recycle period and volume count so it holds just enough
>> volumes/jobs. For example, for 7 daily full backups I have a pool with a
>> maximum of 7 volumes and a 6-day retention period.
>> However, I sometimes need to delete older backups by hand, because of low
>> disk space, or because we sometimes run a backup manually and it breaks
>> the scheme (the last backup is not old enough to be recycled).
>> I would like to ask whether it is possible to manually recycle a volume
>> from the console, and/or whether there is a way to tell Bacula that I've
>> deleted a particular volume and that it should remove it from the
>> catalog?
>>
> 
> use the delete volume command in a Bacula console, then delete the
> volume file manually on disk.
> 
> John

Thanks for your advice. I tried this command once; it warned me with 
something like "Deleting data is generally a bad idea", and I think there 
were some warnings in the backup reports afterwards. But if this is the 
correct way to do it, I will try it more thoroughly...
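For the record, the warning is expected; the usual sequence is to delete the catalog entry first and then the file. A sketch, where the volume name and storage path are examples:

```shell
# In bconsole (sketch; "full-0007" is an example volume name):
#   * delete volume=full-0007
#   (confirm the prompt; this only removes the catalog entry)
# Then remove the actual volume file from the storage directory:
rm /backup/full-0007
```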

Radek



[Bacula-users] Catalog backup retention

2008-10-13 Thread Radek Hladik
Hi,
I would like to ask how I should recycle catalog backups.
I am using disk-based storage, and the catalog pool is by default set to a 
half-year or so retention period. I would think a week would be more than 
sufficient.
What is an old catalog backup good for? If I have a catalog backup older 
than the data backups, I need to run bscan from scratch anyway; otherwise 
the more recent data would not be recognized by Bacula, and on the other 
hand there may be data in the catalog which are no longer in the data 
backups...
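For what it's worth, a roughly one-week catalog retention can be expressed directly in the catalog pool. A hypothetical sketch; the name and exact numbers are examples, not from the thread:

```conf
# Sketch: a catalog pool keeping roughly one week of daily dumps.
Pool {
  Name = BackupCatalogs
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 8 days     # a day of slack over one week of dumps
  Maximum Volume Jobs = 1       # one catalog dump per volume
  Maximum Volumes = 9
  Label Format = catalog
}
```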

Radek



[Bacula-users] Deleting from disk based pool

2008-10-13 Thread Radek Hladik
Hi,
I'm backing up to disk-based storage; every volume has its maximum job 
count set to one, so every pool has one file per job. Each pool is 
configured with a recycle period and volume count so it holds just enough 
volumes/jobs. For example, for 7 daily full backups I have a pool with a 
maximum of 7 volumes and a 6-day retention period.
However, I sometimes need to delete older backups by hand, because of low 
disk space, or because we sometimes run a backup manually and it breaks 
the scheme (the last backup is not old enough to be recycled).
I would like to ask whether it is possible to manually recycle a volume 
from the console, and/or whether there is a way to tell Bacula that I've 
deleted a particular volume and that it should remove it from the catalog?

Radek



Re: [Bacula-users] Disk backup recycling

2007-11-01 Thread Radek Hladik

Hi,
	option 0 is the best one, however there are financial drawbacks :-) The 
whole situation is like this: I have a Bacula server with two remote 
SAN-connected drives. The SAN does mirroring etc., and the SAN drives are 
considered stable and safe.
I have a backup rotation scheme with 1 weekly full backup and 6 
differential/incremental backups. I need to back up various routers, 
servers, important workstations, etc. There are about 20 clients now. 
Total storage to be backed up is around 200-250 GB. The number of clients 
should increase over time.
Until now we used a simple ssh+tar solution. I would like to use Bacula to 
"tidy up" the whole process and make it more reliable and robust. Disk 
space on the SAN is expensive and "precious", and I would like to use it 
reasonably. So I have no problem with connecting the clients one after 
another, performing a full backup to a new volume and deleting the old 
full backup afterwards; this is how it works now with ssh+tar: the client 
connects, backs up to a temporary file, and when the backup is complete 
the old backup is deleted and the temporary file is renamed. Clients are 
backed up one after another, so I do not need much overhead disk space, 
and I have no problem justifying extra space for one full backup.
I am still considering other options, like spooling to a local SATA drive, 
or backing up to a local drive and synchronizing to the SAN drives, but 
every solution has some disadvantages...
A full catalog backup will be performed to the SAN drives every day and to 
remote servers; I consider it the most valuable data to back up :-)
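The spooling variant mentioned above is supported directly: the SD can spool job data to a local disk and despool to the SAN volume afterwards. A hedged sketch with example paths and sizes; the elided directives are the usual mandatory ones:

```conf
# Sketch: enable data spooling (all values are examples).
# In the Job resource (bacula-dir.conf):
Job {
  Name = "example-job"
  # ... the usual Client/FileSet/Schedule/Storage/Pool directives ...
  Spool Data = yes
}
# In the Device resource (bacula-sd.conf):
Device {
  Name = FileStorage
  # ... the usual Archive Device/Media Type directives ...
  Spool Directory = /var/spool/bacula   # local SATA drive
  Maximum Spool Size = 50gb
}
```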


Radek


Marek Simon wrote:

My opinion on your ideas:
0) Leave the schema as I submitted and buy more disk space for the backups. :-)

1) It is the best variant, I think. Another advantage is that a full 
backup of all clients would take much longer than 1/7th full and the rest 
differential. Now, what to do with the Catalog:

You can back up the catalog to some changeable media (tape, CD/DVD-RW).
You can push the (zipped and maybe encrypted) catalog to some or all of 
your clients.
You can send your (zipped and maybe encrypted) catalog to a friend of 
yours (and back up his catalog in return), but that may be a violation of 
data privacy (even if the catalog contains only names and sizes).
You can forward the Bacula messages (completed backups) to some external 
mail address, and then, if needed, reconstruct the job-volume bindings 
from them.
The complete catalog is too big to send by e-mail, but you can still run 
an SQL selection on the catalog after the backup and send the job-volume 
bindings and other relevant information to the external e-mail address in 
CSV format.
And you can (and I strongly recommend you do) back up the catalog every 
day after the daily bunch of jobs and extract it when needed with the 
other Bacula tools (bextract).


2) I thought you were short of disk space, so you could not afford to keep 
the full backup twice plus many differential backups. I do not see much 
difference between having two full backups on a device for a day or for a 
few hours; I need that space anyway. But I think this variant is better 
combined with your original idea: every full backup volume has its own 
pool, and the Job Schedule is set up to use volume 1 in odd weeks, run an 
immediate (practically zero-sized) differential backup to volume 2 just 
after the full one, and vice versa in even weeks. Priorities could help 
you here as well. Some check that the full backup succeeded would be 
advisable, but I am not sure Bacula can do this kind of conditional job 
run; maybe with some Python hacking or some Run Before/Run After scripts.
You can do the same for the differential backups: two volumes in two 
pools, one used and the other cleared, in turns.
And finally, you can combine it with the previous solution and divide it 
into sevenths or more parts, but then it would be real catalog hell.


3) It is the worst solution. If you want to sleep badly every Monday (or 
whichever day), try it. It is really risky to lose the backup even for a 
while; an accident can strike at any time.


Marek

P.S. I could have written this in Czech, but the other readers may be 
interested too :-)


Radek Hladik wrote:

Hi,
	thanks for your answer. Your idea sounds good. However, if I understand 
it correctly, there will be two full backups for a whole day after each 
full backup. This is what I am trying to avoid, as I will be backing up a 
lot of clients. So as I see it, I have these possibilities:

1) use your scheme and divide the clients into seven groups. One group 
will start its full backup on Monday, the second on Tuesday, etc. So all 
week I will have two full backups for 1/7 of the clients. This really 
seems like I will need to back up the catalog at least a dozen times, 
because no one will be able to deduce which backup is on which volume :-)
2) modify your scheme so that there is another differential backup right 
after the full backup, before the next job starts. It will effectively 
erase the last week's full backup.


Re: [Bacula-users] Disk backup recycling

2007-10-23 Thread Radek Hladik
Hi,
thanks for your answer. Your idea sounds good. However, if I understand 
it correctly, there will be two full backups for a whole day after each 
full backup. This is what I am trying to avoid, as I will be backing up a 
lot of clients. So as I see it, I have these possibilities:

1) use your scheme and divide the clients into seven groups. One group 
will start its full backup on Monday, the second on Tuesday, etc. So all 
week I will have two full backups for 1/7 of the clients. This really 
seems like I will need to back up the catalog at least a dozen times, 
because no one will be able to deduce which backup is on which volume :-)
2) modify your scheme so that there is another differential backup right 
after the full backup, before the next job starts. It will effectively 
erase the last week's full backup.
3) use only 7 volumes with a 6-day retention and live with the fact that 
there is no backup during the backup window.

Now I only need to decide which option is the best one :-)

Radek


Marek Simon wrote:
> Hi,
> I suggest solving it like this:
> One pool only, 8 volumes, retention time 7 days and a few hours, use 
> duration 1 day.
> 
> The use will be like this:
> first Monday: full backup; volume 1 is used and contains the full backup
> Tue to Sun: differential backups using volumes 2 to 7 (automatically 
> selected or created by Bacula)
> second Monday: volume 1 is not free yet, so volume 8 is used for the full 
> backup. Now you have two full backups.
> second Tuesday: volume 1 is available, and Bacula will recycle it for the 
> first differential backup. The old full backup is discarded. Now you have 
> the full backup on volume 8, the first diff on volume 1, and 6 volumes 
> with useless data.
> second Wednesday: volume 2 is available, etc.
> 
> You will not be able to keep the exact contents of the volumes in your 
> head, but Bacula is designed so that you do not need to. You can still 
> read each day's report and get your brain busy with it (and I recommend 
> doing that for a few weeks, to catch any bugs).
> 
> Marek
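Marek's single-pool rotation above maps onto one Pool resource. A hypothetical sketch; the pool name and the exact amount of slack are examples:

```conf
# Sketch: 8 volumes, ~7-day retention, each volume used for one day.
Pool {
  Name = weekly
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Maximum Volumes = 8
  Volume Retention = 7 days 6 hours   # "7 days and a few hours"
  Volume Use Duration = 23 hours      # start a fresh volume each day
  Label Format = weekly
}
```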
> 
> 
> Radek Hladik wrote:
>> Hi,
>> 	I am implementing a simple scheme for backing up our data. Data are 
>> backed up to disk. A full backup is performed every Sunday; on the next 6 
>> days differential backups are performed.
>> I want to keep only one full backup and at most 6 daily differentials. 
>> The moment a new full backup is made, the previous full backup and its 
>> "differential children" can be deleted. According to the documentation 
>> (chapter Automated Disk Backup) I've made two pools like this:
>>
>> Pool
>> {
>>Name = full
>>Pool Type = Backup
>>Recycle = yes
>>AutoPrune = yes
>>Volume Retention = 7 days
>>Label format = full
>>Maximum Volume Jobs = 1
>>Maximum Volumes = 2
>> }
>>
>> Pool
>> {
>>Name = diff
>>Pool Type = Backup
>>Recycle = yes
>>AutoPrune = yes
>>Volume Retention = 7 days
>>Label format = diff
>>Maximum Volume Jobs = 1
>>Maximum Volumes = 8
>> }
>>
>> But as I understand it, there will be two full backups, since backups 
>> have a period of 7 days and the retention is 7 days too, so last week's 
>> backup will not be reused because it is not old enough. But if I lower 
>> the retention to e.g. 6 days, the volume will be deleted before the 
>> backup is performed and there will be a window without any backup.
>> I am backing up a lot of clients, and I do not mind having one extra 
>> backup during the backup process itself, but I would like to avoid 
>> having two backups per client all the time.
>> And my other question: can differential/incremental backups be 
>> automatically deleted as soon as their "parent" is reused/deleted?
>>
>> Radek
>>
>>
>>


Re: [Bacula-users] Problem with backing up over the internet

2007-10-22 Thread Radek Hladik
Mateus Interciso wrote:
> On Mon, 22 Oct 2007 15:44:23 +, Mateus Interciso wrote:
> 
>> On Mon, 22 Oct 2007 15:17:32 +, Mateus Interciso wrote:
>>
>>> On Fri, 19 Oct 2007 15:51:24 +0200, Viktor Radnai wrote:
>>>
 Hi there,

 On 10/18/07, Arno Lehmann <[EMAIL PROTECTED]> wrote:

> You can do this even if the SD is inside your firewall, you'll need
> port forwarding or a proxy on the firewall then. With separate DNS
> zones inside and outside, resolving the SD hostname either as the
> internal or the external IP, this can be seamlessly integrated with
> your internal Bacula setup.
>
> Arno
 I think in this case, /etc/hosts is your friend :-)

 Configure your storage daemon with a hostname, and specify that
 hostname in /etc/hosts to be either the internal or the external
 address, as required. And you don't need to return different results
 from your internal DNS server. Dead simple and works well.

 HTH,

 Cheers,
 Vik
 --
 My other sig is hilarious



>>> Ok, but how do I configure the storage daemon with a hostname?
>>>
>>>
>>>
>> Sorry, I should have said more...
>> when I try to put the Address config on the sd, like this:
>>
>> Storage { # definition of myself
>>   Name = test-sd
>>   Address = Storage-Server
>>   SDPort = 9103  # Director's port WorkingDirectory =
>>   "/var/lib/bacula"
>>   Pid Directory = "/var/run"
>>   Maximum Concurrent Jobs = 20
>> }
>>
>> I get this error when I start bacula-sd
>>
>> 22-Oct 14:11 bacula-sd: ERROR TERMINATION at parse_conf.c:889 Config
>> error: Keyword "Address" not permitted in this resource. Perhaps you
>> left the trailing brace off of the previous resource.
>> : line 15, col 10 of file /etc/bacula/bacula-sd.conf
>>   Address = Storage-Server
>>
>> :(
>>
>> Mateus
>>
>>
>>
> 
> I've just managed to make the backups run across the internet.
> I'm really thankful to all of you guys :D
> Fiddling a little with /etc/hosts and the firewall... it worked 
> perfectly :D
> Now it's just a matter of making it secure... thanks a lot guys :D
> 
I would suggest OpenVPN. It is easy to set up, there are a lot of howtos 
on the net, and it will solve all NAT problems. OpenVPN runs on both 
Windows and Linux, and all it needs is one open port on a public IP.
We use it for backing up remote servers and it works like a charm.

Basically, you need to install an OpenVPN server in your office and 
OpenVPN clients on all your clients. OpenVPN creates new "ethernet" 
devices which are all connected together. You then set up private IPs on 
this virtual network and use them in the Bacula configuration.
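A minimal sketch of the pieces involved; every hostname, subnet, and filename here is an invented example, not from the thread:

```conf
# server.conf (office side) - example values, certificates not shown being generated
port 1194
proto udp
dev tun
server 10.8.0.0 255.255.255.0   # VPN subnet handed out to clients
ca ca.crt
cert server.crt
key server.key
dh dh.pem
keepalive 10 120

# client.conf (on each backed-up machine) would contain, roughly:
#   client
#   remote vpn.example.com 1194
#   dev tun
#   ca ca.crt / cert client1.crt / key client1.key

# bacula-dir.conf then addresses the FD by its tunnel IP, e.g.:
#   Client { Name = client1-fd; Address = 10.8.0.6; ... }
```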

Radek



[Bacula-users] Disk backup recycling

2007-10-21 Thread Radek Hladik
Hi,
I am implementing a simple scheme for backing up our data. Data are 
backed up to disk. A full backup is performed every Sunday; on the next 6 
days differential backups are performed.
I want to keep only one full backup and at most 6 daily differentials. 
The moment a new full backup is made, the previous full backup and its 
"differential children" can be deleted. According to the documentation 
(chapter Automated Disk Backup) I've made two pools like this:

Pool
{
   Name = full
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 7 days
   Label format = full
   Maximum Volume Jobs = 1
   Maximum Volumes = 2
}

Pool
{
   Name = diff
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 7 days
   Label format = diff
   Maximum Volume Jobs = 1
   Maximum Volumes = 8
}

But as I understand it, there will be two full backups, because the 
backups have a period of 7 days and the retention is 7 days too - so 
last week's backup will not be reused, as it is not old enough. But if 
I lower the retention to, say, 6 days, the volume will be deleted 
before the backup is performed and there will be a window without any 
backup.
I am backing up a lot of clients and I do not mind having one 
unnecessary backup during the backup process itself, but I would like 
to avoid having two backups per client the whole time.
And my other question - can differential/incremental backups be 
automatically deleted as soon as their "parent" is reused/deleted?
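
One idea I have not tested yet: the Pool resource also has a Purge 
Oldest Volume directive which, unlike normal recycling, ignores 
retention entirely. The documentation marks it as dangerous, so this is 
only a sketch to experiment with, not something I can vouch for:

```conf
Pool
{
   Name = full
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 7 days
   Label format = full
   Maximum Volume Jobs = 1
   # keep a single full volume and forcibly purge it when the new
   # full backup needs it, even though its retention has not expired
   Maximum Volumes = 1
   Purge Oldest Volume = yes
}
```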

Radek



-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How to get SRPM

2007-08-15 Thread Radek Hladik
Timo Neuvonen wrote:
>> I would like to try Bacula 2.2.0; however, there are no RPMs on
>> SourceForge yet, and I prefer RPMs to a "direct" installation.
>> I do not mind compiling my own packages from a tarball or SRPM, but there
>> is no SRPM anywhere, and the tarball contains only the bacula.spec.in
>> file. How can I convert this file into a proper spec file so I can use:
>> rpmbuild -tb bacula-2.2.0.tar.gz ?
>>
> Someone else might be more competent than I am to answer this, but my guess 
> is that the spec file may still need some modification by the people 
> responsible for maintaining the packages. When they've done this, the 
> packages (sources and binaries) will probably become available on SF quite 
> soon.
> 
> So, just be patient. So am I - I also prefer building from an SRPM instead 
> of a tarball. Unfortunately it's beyond my skills to help with the packaging 
> process.
> 
> --
> TiN 
> 

I could not wait any longer :-) I only needed the packages for testing
for now, so I tried to figure it out.

These are the steps needed to create the RPMs (however, I cannot
guarantee there is no error or mistake :-) ):

1) Unpack the tarball and run configure.
This is the step I have the biggest "ideological" problem with:
configure should be run when building, not when preparing the spec
file. But IMHO this "first" configure processes spec.in and creates the
spec file (expanding macros etc.). Of course it also prepares all the
sources, but when building, "the second" configure should override all
settings made by the first one.
2) Remove all references to developer.pdf from the spec file, as it
could not be created by the build process (maybe only on my machine?).
3) Download the depkgs tarball, the documentation tarball, etc. - all
the required files are specified at the beginning of the spec file -
and put them into the same directory as the final tarball.
4) Remove (rename) the other spec file.
5) Tar the changed files into a tarball again.
6) rpmbuild -tb (plus any needed defines) tarball.tar.gz

This worked for me...
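
In command form, the whole procedure looks roughly like this (the file
names are from memory and the exact spec file locations vary, so
double-check them against your tarball):

```shell
# 1) unpack and run configure once, so bacula.spec.in is expanded
#    into a real spec file
tar xzf bacula-2.2.0.tar.gz
cd bacula-2.2.0
./configure

# 2) edit the generated spec file and delete the developer.pdf lines
# 3) place the depkgs and documentation tarballs (listed at the top
#    of the spec file) next to the final tarball
# 4) remove or rename the unwanted second spec file

# 5) re-tar the modified tree
cd ..
tar czf bacula-2.2.0.tar.gz bacula-2.2.0

# 6) build binary RPMs directly from the tarball
rpmbuild -tb bacula-2.2.0.tar.gz
```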

Radek



[Bacula-users] How to get SRPM

2007-08-13 Thread Radek Hladik
Hi all,
I would like to try Bacula 2.2.0; however, there are no RPMs on 
SourceForge yet, and I prefer RPMs to a "direct" installation.
I do not mind compiling my own packages from a tarball or SRPM, but 
there is no SRPM anywhere, and the tarball contains only the 
bacula.spec.in file. How can I convert this file into a proper spec 
file so I can use:
rpmbuild -tb bacula-2.2.0.tar.gz ?

Or is there any other way to create SRPM or RPM packages?

Sorry if this is a lame question :-)


Radek




Re: [Bacula-users] Long term backup with bacula

2007-08-07 Thread Radek Hladik
Hi,

Arno Lehmann wrote:
> Hi,
> 
> 07.08.2007 11:39, Mike Follwerk - T²BF wrote:
>> Radek Hladik wrote:
>>> So far I have come up with these ideas:
>>> * Back up the catalog and bootstrap files with the data
>>> * Disable job/file/volume autopruning
>>> * Maybe modify some live CD to contain the current version of Bacula,
>>> or at least bscan (I do not know, maybe such a CD already exists)
>>> * Create an SQL query listing which jobs are on which tape, and print
>>> it on paper to keep with the tapes
>>>
>>> Do you think this is enough, or am I overlooking something?
> 
> I didn't actually read your original mail, but going from the subject, 
> this sounds good already. Having the job lists on paper is definitely 
> helpful sometimes, but I'd mainly make sure to know where the current 
> catalog dump is stored, though. The catalog has all the information 
> you will need, readily accessible by Bacula. If you only have your 
> printout, you'll probably need some time for bscanning, but 
> bextracting the catalog dump, loading it and starting Bacula itself 
> might be faster.

I'm planning to have the catalog backup on the last tape or on some
other media (flash disk, CD?) with the backups, so at worst only a
bscan of the last tape would be needed. But what I am a little afraid
of is: I restore the catalog and Bacula says: "Wow, such an old
catalog - you forgot to disable XY-type pruning, I am going to delete
old files from the catalog" :-)

> 
>> since no one has answered this yet, I feel free to voice a blind guess:
>> doesn't bacula have some kind of "archive" flag for volumes for
>> precisely this reason? I seem to remember something like that from the
>> documentation.
> 
> Well, the something includes a "not implemented" remark, unfortunately :-)

I know, and thus I didn't mention it in my previous email. And it still
is not what I'm looking for, as it would delete files from the catalog.

> 
> But you don't really need that, anyway: Just make sure your long term 
> volumes are not automatically pruned and you're already half the way 
> where you want to go... the other half of the way is usually deciding 
> if you need complete file lists for your archives (then you probably 
> want to set up separate job and client entries for these, so you can 
> have different retention times for normal production backups) or if 
> the fact that the data exists and on which volumes it's stored is 
> enough (Then just make sure the jobs are not pruned from the catalog).
> 

> For real long-term storage, you'll have to find ways to move data to 
> new tapes from time to time, probably keeping the catalog up to date, 
> and so on, so that you can restore when the original tapes can't any 
> longer be read. Migration might be helpful for this, but that's a 
> different story...

I am thinking of a year or two, maximum three. And after this period
the backups will be redone completely. There will maybe be a need to
recycle really old backups, but that should be done manually.

> 
> Arno
> 
> 
>> Regards
>> Mike Follwerk
>>
> 
Radek




Re: [Bacula-users] Long term backup with bacula

2007-08-07 Thread Radek Hladik
Mike Follwerk - T²BF wrote:
> Radek Hladik wrote:
>> So far I have come up with these ideas:
>> * Back up the catalog and bootstrap files with the data
>> * Disable job/file/volume autopruning
>> * Maybe modify some live CD to contain the current version of Bacula,
>> or at least bscan (I do not know, maybe such a CD already exists)
>> * Create an SQL query listing which jobs are on which tape, and print
>> it on paper to keep with the tapes
>>
>> Do you think this is enough, or am I overlooking something?
> 
> since no one has answered this yet, I feel free to voice a blind guess:
> doesn't bacula have some kind of "archive" flag for volumes for
> precisely this reason? I seem to remember something like that from the
> documentation.
> 
> 
> Regards
> Mike Follwerk
> 

I've noticed this too and searched the documentation for "archive"; 
the only relevant passage I've found is:

*Archive
 An Archive operation is done after a Save, and it consists of 
removing the Volumes on which data is saved from active use. These 
Volumes are marked as Archived, and may no longer be used to save files. 
All the files contained on an Archived Volume are removed from the 
Catalog. NOT YET IMPLEMENTED.

It is not yet implemented, and it seems to be only half of what I need. 
I do not want these media to be reused unless I manually mark them as 
available, but I would like to keep the file information in the catalog, 
so that in case of a recovery I do not need to run bscan.

Radek



[Bacula-users] Long term backup with bacula

2007-08-06 Thread Radek Hladik
Hello,
I'm considering using Bacula for long-term backups. I mean I would like
to back up now, put the tapes somewhere safe and not touch them
(hopefully) for a year or two.
After a year or two I would repeat the backup - no incremental backup,
just the whole full backup again on another set of tapes.

I would not like to be surprised by some glitch if I ever need to
restore the backups. Considering the worst scenario, I will only have
my backups and a completely new machine with a new tape drive (of
course compatible with the tapes :-) ).

So far I have come up with these ideas:
* Back up the catalog and bootstrap files with the data
* Disable job/file/volume autopruning
* Maybe modify some live CD to contain the current version of Bacula,
or at least bscan (I do not know, maybe such a CD already exists)
* Create an SQL query listing which jobs are on which tape, and print
it on paper to keep with the tapes
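
For the last point, the query could look something like this (Job, 
JobMedia and Media are standard Bacula catalog tables, but verify the 
column names against your catalog version):

```sql
SELECT Media.VolumeName,
       Job.JobId,
       Job.Name,
       Job.StartTime
  FROM Job
  JOIN JobMedia ON JobMedia.JobId = Job.JobId
  JOIN Media    ON Media.MediaId  = JobMedia.MediaId
 ORDER BY Media.VolumeName, Job.StartTime;
```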

Do you think this is enough, or am I overlooking something?

Radek




Re: [Bacula-users] Win32 client not backing up network mounted shares

2007-08-06 Thread Radek Hladik
John Drescher wrote:
> I am not able to back up files on a Win32 client if they are on a
> drive which is mapped via the network (a normal Windows "net use"
> mapping). I've tried disabling VSS and/or enabling portable backup,
> but it didn't help.
> I did not find anything in the documentation about this being a Win32
> client limitation.
> 
> 
> Who mapped the drive? Did you map the drive in the Bacula job using a 
> run-before job? I don't believe that in Windows a drive mapped by one 
> user is visible to another user.
> 
> John

Thanks for pointing me in the right direction. As there is a Bacula
icon in the tray, I did not fully realize that the Bacula client's
permissions and drive mappings are different from mine.
I've created a run-before script to mount the drive and reconfigured
the Bacula service to run as a network user instead of as a system
service, and everything works fine. The only disadvantage is that a
service configured to use a network user cannot access the local
desktop, so the Bacula tray icon is gone... At least it will confuse
me no more :-)
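
For anyone hitting the same problem, the relevant directives look 
roughly like this (the job name, drive letter and share name are 
placeholders; the mapping runs under the service account, which must 
have access to the share):

```conf
# bacula-dir.conf, Job resource for the Win32 client (sketch)
Job {
  Name = "win-share-backup"
  # map the share before the backup and drop it afterwards;
  # backslashes must be doubled inside quoted Bacula strings
  ClientRunBeforeJob = "net use Z: \\\\fileserver\\share /persistent:no"
  ClientRunAfterJob  = "net use Z: /delete"
  ...
}
```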

Radek




[Bacula-users] Win32 client not backing up network mounted shares

2007-08-06 Thread Radek Hladik
Hi,
I am not able to back up files on a Win32 client if they are on a 
drive which is mapped via the network (a normal Windows "net use" 
mapping). I've tried disabling VSS and/or enabling portable backup, 
but it didn't help.
I did not find anything in the documentation about this being a Win32 
client limitation.
Should I report this as a bug, or is this intended behavior?
Unfortunately, I cannot install Bacula directly on the server hosting 
the share.

Bacula Win32 client is:
2.0.3 (06 March 2007)  VSS Linux Cross-compile Win32


Radek



[Bacula-users] Tape permissions

2007-05-14 Thread Radek Hladik

Hi,
	Is it possible to tell Bacula to check the permissions on the tape 
drive before an operation? I've just spent about an hour figuring out 
why Bacula was not working :-) Fortunately I remembered my first Unix 
lesson and checked all the permissions: the Bacula RPM added the user 
bacula to the group disk, but that group had only read permission on 
the tape device (apparently the default on my Fedora Core 4 box).
	What I mean is that the actual behavior was really confusing: the 
label command printed "Sending command label to storage." and the 
console froze - if I opened a second console, it said the tape was in 
state LABELING MEDIA, yet the tape did not even move.

If I tried to run a job, the tape ended up in the state 'initializing 
device'.

Radek

By the way: thanks for such a good program




