Re: [Bacula-users] wilddir not working for exclusion but is for inclusion

2022-10-03 Thread Dave
I actually need that inclusion match; each subdirectory of CUSTOMER_DATA 
(starting with a, starting with b, etc.) gets its own job due to the massive 
size of each.

 

From: Eduardo Antonio Adami  
Sent: Monday, October 3, 2022 2:07 PM
To: Dave 
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] wilddir not working for exclusion but is for 
inclusion

 

Yeah, you need to delete the line /mnt/CUSTOMER_DATA/i*

I did a test on my test machine; please see my example:

 

FileSet {
  Name = "CD-i"
  Include {
    Options {
      signature = MD5
      Exclude = yes
      WildDir = "*/\.cache"
      WildDir = "/home/adami/CUSTOMER_DATA/i*/temp"
      Wildfile = "/home/adami/*.rar"
      Wildfile = "/home/adami/*.zip"
    }
    File = /home/adami/CUSTOMER_DATA
  }
}

 

$ ls -l

-rw-r--r-- 1 adami adami    6 out  3 15:34 arq1.rar
-rw-r--r-- 1 adami adami    6 out  3 15:35 arq1.zip
-rw-r--r-- 1 adami adami    6 out  3 15:34 arq2.rar
-rw-r--r-- 1 adami adami    6 out  3 15:35 arq2.zip
drwxr-xr-x 3 adami adami 4096 out  3 15:44 inside1
-rw-r--r-- 1 adami adami    6 out  3 15:35 test01.txt
-rw-r--r-- 1 adami adami    6 out  3 15:35 test02.txt

 

$ ls -l inside1/

-rw-r--r-- 1 adami adami    6 out  3 15:37 arq1.rar
-rw-r--r-- 1 adami adami    6 out  3 15:37 arq1.zip
-rw-r--r-- 1 adami adami    6 out  3 15:37 arq2.rar
-rw-r--r-- 1 adami adami    6 out  3 15:37 arq2.zip
drwxr-xr-x 2 adami adami 4096 out  3 15:45 temp
-rw-r--r-- 1 adami adami    6 out  3 15:37 test01.txt
-rw-r--r-- 1 adami adami    6 out  3 15:37 test02.txt

 

$ ls -l inside1/temp/

-rw-r--r-- 1 adami adami 6 out  3 15:45 temp1.txt
-rw-r--r-- 1 adami adami 6 out  3 15:45 temp2.txt

 

 

Bacula backup results

 


Attribute    Owner Group  Date                Size        File                                           File Id  Status
-rw-r--r--   1000  1000   03-Oct-22 15:35:34  6.00 bytes  /home/adami/CUSTOMER_DATA/test02.txt           584      OK
-rw-r--r--   1000  1000   03-Oct-22 15:35:31  6.00 bytes  /home/adami/CUSTOMER_DATA/test01.txt           586      OK
-rw-r--r--   1000  1000   03-Oct-22 15:37:26  6.00 bytes  /home/adami/CUSTOMER_DATA/inside1/test02.txt   583      OK
-rw-r--r--   1000  1000   03-Oct-22 15:37:26  6.00 bytes  /home/adami/CUSTOMER_DATA/inside1/test01.txt   585      OK
drwxr-xr-x   1000  1000   03-Oct-22 15:44:25  4.00 KB     /home/adami/CUSTOMER_DATA/inside1/             582      OK
drwxr-xr-x   1000  1000   03-Oct-22 15:45:35  4.00 KB     /home/adami/CUSTOMER_DATA/                     581      OK

 

Note that it didn't copy the temp directory.

 
Eduardo A Adami

 

 

On Mon, Oct 3, 2022 at 15:13, Dave <du...@onetouchemr.com> wrote:

Thanks for the response, but I want to include /mnt/CUSTOMER_DATA/i* and 
exclude /mnt/CUSTOMER_DATA/*/temp. Wouldn't yours exclude both?

 

 

From: Eduardo Antonio Adami <ad...@unicamp.br>
Sent: Monday, October 3, 2022 12:35 PM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] wilddir not working for exclusion but is for 
inclusion

 

Hi Dave, you can try using only one Options block!

 

 

FileSet {
  Name = "CD-i"
  Include {
    Options {
      signature = SHA1
      Compression = GZIP9
      exclude = yes
      wilddir = "/mnt/CUSTOMER_DATA/i*"
      wildfile = "*.rar"
      wildfile = "*.zip"
      wilddir = "/mnt/CUSTOMER_DATA/i*/temp"
    }
    File = /mnt/CUSTOMER_DATA
  }
}

 
 
Best Regards
Eduardo A Adami
 

 

 

On Mon, Oct 3, 2022 at 11:44, Dave <du...@onetouchemr.com> wrote:

I’m running Bacula 9.0.6 and cannot seem to get a wilddir exclusion to work.  
My fileset is:

 

FileSet {
  Name = "CD-i"
  Include {
    Options {
      signature = SHA1
      Compression = GZIP9
      wilddir = "/mnt/CUSTOMER_DATA/i*"
    }
    Options {
      RegexDir = ".*"
      wildfile = "*.rar"
      wildfile = "*.zip"
      wilddir = "/mnt/CUSTOMER_DATA/i*/temp"
      exclude = yes
    }
    File = /mnt/CUSTOMER_DATA
  }
}

 

There are a few hundred gigs of data in a few temp subdirectories and it 
continues to be backed up.  Is there some sort of issue with how I have this 
configured?  I did also try the following with the same results:

 

   wilddir = "/mnt/CUSTOMER_DATA/*/temp"

 

 

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] wilddir not working for exclusion but is for inclusion

2022-10-03 Thread Dave
Awesome, that makes a lot of sense and I'll give it a shot.


-Original Message-
From: Martin Simmons  
Sent: Monday, October 3, 2022 1:52 PM
To: Dave 
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] wilddir not working for exclusion but is for
inclusion

Bacula uses the first Options clause that matches (in the order they are
written) to decide whether to include or exclude something.  If no clauses
match, then the item is backed up using the options (e.g. Compression) from
the final clause.

The problem with your clauses is that directories such as
/mnt/CUSTOMER_DATA/ifoo/temp first match /mnt/CUSTOMER_DATA/i*, so they will be
included.

You need something like this:

FileSet {
  Name = "CD-i"
  Include {
    Options {   # exclude rar/zip files and temp dir
      wildfile = "*.rar"
      wildfile = "*.zip"
      wilddir = "/mnt/CUSTOMER_DATA/i*/temp"
      exclude = yes
    }
    Options {   # include some dirs
      signature = SHA1
      Compression = GZIP9
      wilddir = "/mnt/CUSTOMER_DATA/i*"
    }
    Options {   # exclude everything else at top level, but not the top level itself
      signature = SHA1
      Compression = GZIP9
      Regex = "^/mnt/CUSTOMER_DATA/[^/]+$"
      exclude = yes
    }
    # everything else is included by default using the final options
    File = /mnt/CUSTOMER_DATA
  }
}

__Martin


>>>>> On Mon, 3 Oct 2022 09:26:05 -0500, Dave  said:
> 
> I'm running Bacula 9.0.6 and cannot seem to get a wilddir exclusion to work.
> My fileset is:
> 
> FileSet {
>   Name = "CD-i"
>   Include {
> Options {
> signature = SHA1
> Compression = GZIP9
> wilddir = "/mnt/CUSTOMER_DATA/i*"
>  }
> Options {
>RegexDir = ".*"
>wildfile = "*.rar"
>wildfile = "*.zip"
>wilddir = "/mnt/CUSTOMER_DATA/i*/temp"
>exclude = yes
>   }
> File = /mnt/CUSTOMER_DATA
>   }
> }
> 
> There are a few hundred gigs of data in a few temp subdirectories and 
> it continues to be backed up.  Is there some sort of issue with how I 
> have this configured?  I did also try the following with the same results:
> 
>wilddir = "/mnt/CUSTOMER_DATA/*/temp"



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] wilddir not working for exclusion but is for inclusion

2022-10-03 Thread Dave
Thanks for the response, but I want to include /mnt/CUSTOMER_DATA/i* and 
exclude /mnt/CUSTOMER_DATA/*/temp. Wouldn't yours exclude both?

 

 

From: Eduardo Antonio Adami  
Sent: Monday, October 3, 2022 12:35 PM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] wilddir not working for exclusion but is for 
inclusion

 

Hi Dave, you can try using only one Options block!

 

 

FileSet {
  Name = "CD-i"
  Include {
    Options {
      signature = SHA1
      Compression = GZIP9
      exclude = yes
      wilddir = "/mnt/CUSTOMER_DATA/i*"
      wildfile = "*.rar"
      wildfile = "*.zip"
      wilddir = "/mnt/CUSTOMER_DATA/i*/temp"
    }
    File = /mnt/CUSTOMER_DATA
  }
}

 
 
Best Regards
Eduardo A Adami
 

 

 

On Mon, Oct 3, 2022 at 11:44, Dave <du...@onetouchemr.com> wrote:

I’m running Bacula 9.0.6 and cannot seem to get a wilddir exclusion to work.  
My fileset is:

 

FileSet {
  Name = "CD-i"
  Include {
    Options {
      signature = SHA1
      Compression = GZIP9
      wilddir = "/mnt/CUSTOMER_DATA/i*"
    }
    Options {
      RegexDir = ".*"
      wildfile = "*.rar"
      wildfile = "*.zip"
      wilddir = "/mnt/CUSTOMER_DATA/i*/temp"
      exclude = yes
    }
    File = /mnt/CUSTOMER_DATA
  }
}

 

There are a few hundred gigs of data in a few temp subdirectories and it 
continues to be backed up.  Is there some sort of issue with how I have this 
configured?  I did also try the following with the same results:

 

   wilddir = "/mnt/CUSTOMER_DATA/*/temp"

 

 

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] wilddir not working for exclusion but is for inclusion

2022-10-03 Thread Dave
I'm running Bacula 9.0.6 and cannot seem to get a wilddir exclusion to work.
My fileset is:

 

FileSet {
  Name = "CD-i"
  Include {
    Options {
      signature = SHA1
      Compression = GZIP9
      wilddir = "/mnt/CUSTOMER_DATA/i*"
    }
    Options {
      RegexDir = ".*"
      wildfile = "*.rar"
      wildfile = "*.zip"
      wilddir = "/mnt/CUSTOMER_DATA/i*/temp"
      exclude = yes
    }
    File = /mnt/CUSTOMER_DATA
  }
}

 

There are a few hundred gigs of data in a few temp subdirectories and it
continues to be backed up.  Is there some sort of issue with how I have this
configured?  I did also try the following with the same results:

 

   wilddir = "/mnt/CUSTOMER_DATA/*/temp"

 

 

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] List Volumes without Job ?

2016-08-09 Thread dave
Dear Ana,

thanks for your reply.

I deleted all the "large volume" job entries in the Catalog and tested with 
"list jobs" and "list jobmedia". So far so good! Then I ran "prune expired 
volume yes".

Well, it did mark about 5 volumes as "Purged", but there are 50 others that 
should be marked as purged as well, and it did not. I do not get it.

So I guess there must still be some entries in the Catalog, but it seems I 
cannot find them ...

What am I missing here?

I know ...
Volume Retention = time-period-specification
Note: when all the File records on the Volume have been removed, the Volume 
will be marked Purged (i.e. it has no more valid Files stored on it), and 
the Volume may be recycled even if the Volume Retention period has not expired.
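
For reference, one way to see which Job records still hold a given volume in the catalog is to query it directly (e.g. from bconsole's "sqlquery" mode or the mysql client). This is only a sketch against the standard Bacula catalog schema (Media, JobMedia, Job); the volume name is a placeholder:

-- Jobs that still reference a volume; while any exist, the volume cannot become Purged
SELECT J.JobId, J.Name, J.JobStatus, J.EndTime
  FROM Media M
  JOIN JobMedia JM ON JM.MediaId = M.MediaId
  JOIN Job J ON J.JobId = JM.JobId
 WHERE M.VolumeName = 'Vol-0001';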

thanks
--> David




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] List Volumes without Job ?

2016-08-03 Thread dave
Hi There,

I am switching from bacula tape to disk based backup. I'm as good as finished 
and testing the system now.

In this phase I realized it would be helpful if I could purge and truncate 
volumes with no jobs on them.

So far I have not been able to find a way to list volumes with 0 jobs on them and then 
purge and truncate them with some script. I tried "select VolumeName,VolJobs 
from Media;", but even after I have purged the jobs from a client it still shows 
the same number of jobs. If it showed 0 jobs, that would be easy to script. 
Or am I mixing something up?
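
For what it is worth, VolJobs appears to be a running count of jobs ever written to the volume, so it would not drop back to 0 when jobs are purged; a query that looks for volumes with no remaining JobMedia rows may be closer to what is wanted. A sketch, assuming the standard Media/JobMedia catalog schema:

-- Volumes with nothing currently referencing them in the catalog
SELECT M.VolumeName, M.VolStatus
  FROM Media M
  LEFT JOIN JobMedia JM ON JM.MediaId = M.MediaId
 WHERE JM.MediaId IS NULL;

Each volume returned could then be fed to bconsole (e.g. "purge volume=<name>") from a small script; the exact purge/truncate syntax varies between Bacula versions, so check "help purge" and "help truncate" in bconsole.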

I think it is a bit cumbersome to purge and truncate volumes. Maybe there is an 
easy way out there?

Thanks for helping me out.
--> David




--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula backup speed

2015-12-15 Thread Lewis, Dave
> -Original Message-
> From: Uwe Schuerkamp [mailto:uwe.schuerk...@nionex.net]
> Sent: Tuesday, December 15, 2015 4:47 AM
> To: Lewis, Dave
> Cc: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Bacula backup speed
>
...
> Hi Dave,
>
> how large is your catalog database? How many entries do you have in
> your File table, for instance? Attribute despooling should be much
> faster than what you're seeing even on SATA disks.

Hi Uwe,

I don't know much about databases, but I'm learning.

We have 659,172 entries in the File table.


> I guess your mysql setup could do with some optimization w/r to buffer
> pool size (I hope you're using InnoDB as the backend db engine) and
> lock / write strategies.

The command
SHOW TABLE STATUS;
shows that we're using InnoDB.


> As your DB runs on the director machine, I'd assign at last 50% of the
> available RAM if your catalog has a similar size.
>
> A quick google search came up with the following query to determine
> your catalog db size:
>
> SELECT table_schema "DB Name",
> Round(Sum(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB"
> FROM information_schema.tables GROUP BY table_schema;
>
> All the best, Uwe

The above command gave
+--------------------+---------------+
| DB Name            | DB Size in MB |
+--------------------+---------------+
| bacula             |         216.6 |
| information_schema |           0.0 |
+--------------------+---------------+

To assign 50% of RAM (we have 16 GB total) I suppose I should add the line
innodb_buffer_pool_size = 8G
in /etc/mysql/my.cnf, then I assume restart MySQL. But maybe we don't need it 
that big at this time, since the database is much smaller.

Our my.cnf doesn't currently have a line for innodb_buffer_pool_size; I don't 
know what it uses by default.
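
For what it is worth, when the directive is absent MySQL falls back to a fairly small built-in default (128 MB on MySQL 5.5/5.6, if I remember correctly). A sketch of where the setting would go; the 2G figure is only illustrative, sized to comfortably hold a ~200 MB catalog while leaving RAM for Bacula itself:

# /etc/mysql/my.cnf (or a drop-in file under /etc/mysql/conf.d/)
[mysqld]
innodb_buffer_pool_size = 2G    # keep the catalog's InnoDB working set in memory

MySQL needs a restart after changing this.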

Thanks,
Dave

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula backup speed

2015-12-15 Thread Lewis, Dave
> -Original Message-
> From: Heitor Faria [mailto:hei...@bacula.com.br]
> Sent: Tuesday, December 15, 2015 8:21 AM
> To: Alan Brown
> Cc: Lewis, Dave; bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Bacula backup speed
>
...
>
> Suggestion: http://bacula.us/tuning/

Thanks, I'll check it out.
Dave

>
> Regards,
> ===
> 
> Heitor Medrado de Faria - LPIC-III | ITIL-F | Bacula Systems Certified
> Administrator II Do you need Bacula training? http://bacula.us/video-
> classes/
> +55 61 8268-4220
> Site: http://bacula.us FB: heitor.faria
> ===
> 
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula backup speed

2015-12-15 Thread Lewis, Dave
> -Original Message-
> From: Alan Brown [mailto:a...@mssl.ucl.ac.uk]
> Sent: Tuesday, December 15, 2015 8:08 AM
> To: Lewis, Dave; bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Bacula backup speed
...
> What size is your database? ("select count(*) from File;") What write
> speeds are you actually achieving to the database drive?
> (MB/s and iops - "iostat -kxt 2" will tell you this)

Enter SQL query: SELECT COUNT(*) FROM File;
+----------+
| COUNT(*) |
+----------+
|  659,172 |
+----------+

I'll run iostat next time I test Bacula. (It's not installed on this server -- 
thanks for the suggestion and the command.)


> How much ram does the server have?
> How much swap?
> how much Swap activity is there?
> How much swap is in use?

It's got 16 GB RAM and 16 GB swap space. I didn't check if it was using a lot 
of swap space during the test backup. However the database size is around 200 
MB, so I wouldn't think that swapping would be a problem. I'll check next time 
I test Bacula to be sure I'm not missing something.


> Have you tuned your MySQL?
> is MySQL writing lots of temp files?
> Are you using MyISAM or InnoDB?

We're using InnoDB. I haven't tuned MySQL.

I don't know how to check if MySQL is writing lots of temp files.
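
A rough way to check, for what it is worth (these are standard MySQL status counters, nothing Bacula-specific):

mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%';

If Created_tmp_disk_tables grows quickly while a backup is running, temporary tables are spilling to disk.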


> If the MySQL is larger than 8-10 million entries, why aren't you using
> Postgresql?
>
> What is the network environment? (speeds? switches? hubs? subnets?
> Routers?)
>
> 700kB/sec is an abysmal rate for backup to disk unless you're running
> on 10Mb/s thinnet.
> Why is it so awful?
>
> Solve that first, THEN look at the attribute despooling issue.

I'm currently testing just local backups to disk, in order to eliminate issues 
of the network and the tape drive. The Bacula / MySQL server has gigabit 
Ethernet and is connected to a 10 Gbps switch, as are most of the other servers 
that we will be backing up.

Thanks,
Dave


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula backup speed

2015-12-15 Thread Lewis, Dave
> From: Josh Fisher [mailto:jfis...@pvct.com]
> Sent: Tuesday, December 15, 2015 6:57 AM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Bacula backup speed
>
...
>
> What else is writing to this same disk when the metadata is being
> written?

Probably not much. The database is on the same disk as the OS. The server is 
also a DICOM server and receives a few GB of medical images each day. It stores 
them on a separate disk. However, it also writes information to a log file on 
the OS disk, but the total time is probably less than 30 minutes. But I should 
check next time I test Bacula.

Thanks,
Dave


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula backup speed

2015-12-14 Thread Lewis, Dave
We are running MySQL, and the database is on the same server as the director. 
The disk that the database is on is a 7200 RPM, 3 Gb/s SATA disk.

Thanks,
Dave


-Original Message-
From: Bryn Hughes [mailto:li...@nashira.ca]
Sent: Monday, December 14, 2015 4:37 PM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula backup speed

Sounds pretty clear that you have some performance issues on your database.

What are you running (MySQL/Postgresql/Sqlite/etc) for your DB? Is it running 
on the same server as your director?  What kind of disk storage do you have 
under your database?

Bryn

On 2015-12-14 01:12 PM, Lewis, Dave wrote:
> Hi,
>
> Thanks. I ran it again with attribute spooling. That sped up the backup of 
> data to the disk pool - instead of 6 hours it took less than 2 - but writing 
> the file metadata afterwards took nearly 6 hours.
>
> 12-Dec 18:24 jubjub-sd JobId 583: Job write elapsed time = 01:51:55,
> Transfer rate = 703.0 K Bytes/second 12-Dec 18:24 jubjub-sd JobId 583: 
> Sending spooled attrs to the Director.  Despooling 120,266,153 bytes ...
> 13-Dec 00:11 jubjub-dir JobId 583: Bacula jubjub-dir 5.2.6 (21Feb12):
>Elapsed time:   7 hours 39 mins 13 secs
>FD Files Written:   391,552
>SD Files Written:   391,552
>FD Bytes Written:   4,486,007,552 (4.486 GB)
>SD Bytes Written:   4,720,742,979 (4.720 GB)
>Rate:   162.8 KB/s
>Software Compression:   None
>Encryption: yes
>Accurate:   no
>
> So the transfer rate increased from about 200 KB/s to about 700 KB/s, but the 
> total elapsed time increased.
>
> Thanks,
> Dave
>
>
> -Original Message-
> From: Christian Manal [mailto:moen...@informatik.uni-bremen.de]
> Sent: Thursday, December 10, 2015 6:14 AM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Bacula backup speed
>
> On 10.12.2015 01:06, Lewis, Dave wrote:
>> Does anyone know what's causing the OS backups to be so slow and what
>> I can do to speed them up?
> Hi,
>
> the problem might be number of files, as in, writing all the file metadata to 
> the catalog could very well be your bottle neck.
>
> Try enabling attribute spooling, so all the metadata is collected and 
> commited to the DB in one go instead of file by file.
>
>
> Regards,
> Christian Manal
>


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula backup speed

2015-12-14 Thread Lewis, Dave
Hi,

Thanks. I ran it again with attribute spooling. That sped up the backup of data 
to the disk pool - instead of 6 hours it took less than 2 - but writing the 
file metadata afterwards took nearly 6 hours.

12-Dec 18:24 jubjub-sd JobId 583: Job write elapsed time = 01:51:55, Transfer 
rate = 703.0 K Bytes/second
12-Dec 18:24 jubjub-sd JobId 583: Sending spooled attrs to the Director.  
Despooling 120,266,153 bytes ...
13-Dec 00:11 jubjub-dir JobId 583: Bacula jubjub-dir 5.2.6 (21Feb12):
  Elapsed time:   7 hours 39 mins 13 secs
  FD Files Written:   391,552
  SD Files Written:   391,552
  FD Bytes Written:   4,486,007,552 (4.486 GB)
  SD Bytes Written:   4,720,742,979 (4.720 GB)
  Rate:   162.8 KB/s
  Software Compression:   None
  Encryption: yes
  Accurate:   no

So the transfer rate increased from about 200 KB/s to about 700 KB/s, but the 
total elapsed time increased.

Thanks,
Dave


-Original Message-
From: Christian Manal [mailto:moen...@informatik.uni-bremen.de]
Sent: Thursday, December 10, 2015 6:14 AM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula backup speed

On 10.12.2015 01:06, Lewis, Dave wrote:
> Does anyone know what's causing the OS backups to be so slow and what
> I can do to speed them up?

Hi,

the problem might be the number of files; that is, writing all the file metadata to 
the catalog could very well be your bottleneck.

Try enabling attribute spooling, so all the metadata is collected and committed 
to the DB in one go instead of file by file.
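
A minimal sketch of where that goes, for reference: attribute spooling is enabled per Job (or JobDefs) in bacula-dir.conf via the SpoolAttributes directive; the job and JobDefs names below are illustrative only.

Job {
  Name = "BackupClient1"
  JobDefs = "DefaultJob"
  SpoolAttributes = yes   # gather file metadata in a spool file and insert it into the catalog in one batch
}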


Regards,
Christian Manal

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula backup speed

2015-12-10 Thread Lewis, Dave
Hi Uwe,

We are using MySQL. It hasn't been optimized for bacula usage. I installed both 
bacula and mysql via apt-get. We aren't using spooling. These are just local 
backups. The director / sd machine was bought in 2009 and has 2 Intel Xeon 
E5410, 2.33 GHz CPUs and 16 GB memory. Disks are SATA. I'm out of town today 
and tomorrow and will try to find other specs tonight or this weekend.

I ran another test: I backed up the imaging data tarfiles to the same disk pool 
as I used for the operating system test backup. The rate was 23 MB/s, same as 
backup of the imaging data tarfiles to tape.

Thanks!
Dave


From: Uwe Schuerkamp [uwe.schuerk...@nionex.net]
Sent: Thursday, December 10, 2015 6:30 AM
To: Lewis, Dave
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula backup speed

On Thu, Dec 10, 2015 at 12:06:42AM +, Lewis, Dave wrote:
> Hi,
>
> I'm configuring Bacula backups and sometimes it is very slow to back up to 
> disk or tape, around 1 MB/s and sometimes slower. I'm wondering why it is 
> sometimes so slow and if there is something I can do differently that will 
> speed up the backups. I want to do backups of about 1 TB (or more) of user 
> data, and 1 MB/s is far too slow. I also want to back up operating systems of 
> various Linux servers and imaging data (currently stored locally).
>
> As a test, I ran a Bacula backup of several operating system directories of 
> the backup computer, and it took about 6 hours. Here are details:
> The directories were /bin, /boot, /etc, /lib, /lib64, /opt, /root, /sbin, 
> /srv, /usr
> Level = Full
> Disk pool
> Computed SHA1 signature
> From the log file:
> 02-Dec 20:16 jubjub-sd JobId 547: Job write elapsed time = 06:03:29, Transfer 
> rate = 216.4 K Bytes/second
> Elapsed time:   6 hours 13 mins 54 secs
> FD Files Written:   391,549
> SD Files Written:   391,549
> FD Bytes Written:   4,486,000,544 (4.486 GB)
> SD Bytes Written:   4,720,733,845 (4.720 GB)
> Rate:   200.0 KB/s
> Software Compression:   None
> Encryption: yes
> Accurate:   no
>

Hello Dave,

well, it depends. ;-) On a lot of things, actually. What DB backend
are you using? Is the db optimized for bacula usage? How fast are your
disks? Are you using attribute / job spooling? What's the hardware
spec of the director / sd machine? What's your connection / max
throughput to the clients?

All the best, Uwe


--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula backup speed

2015-12-09 Thread Lewis, Dave
Hi,

I'm configuring Bacula backups and sometimes it is very slow to back up to disk 
or tape, around 1 MB/s and sometimes slower. I'm wondering why it is sometimes 
so slow and if there is something I can do differently that will speed up the 
backups. I want to do backups of about 1 TB (or more) of user data, and 1 MB/s 
is far too slow. I also want to back up operating systems of various Linux 
servers and imaging data (currently stored locally).

As a test, I ran a Bacula backup of several operating system directories of the 
backup computer, and it took about 6 hours. Here are details:
The directories were /bin, /boot, /etc, /lib, /lib64, /opt, /root, /sbin, /srv, 
/usr
Level = Full
Disk pool
Computed SHA1 signature
From the log file:
02-Dec 20:16 jubjub-sd JobId 547: Job write elapsed time = 06:03:29, Transfer 
rate = 216.4 K Bytes/second
Elapsed time:   6 hours 13 mins 54 secs
FD Files Written:   391,549
SD Files Written:   391,549
FD Bytes Written:   4,486,000,544 (4.486 GB)
SD Bytes Written:   4,720,733,845 (4.720 GB)
Rate:   200.0 KB/s
Software Compression:   None
Encryption: yes
Accurate:   no

I repeated the backup with no encryption and then repeated it with no SHA1 
signature computing, but in each case the rate was still about 200 KB/s.

Previously I set up Bacula backups of large tarfiles of imaging data. It runs 
every night, and it takes a few minutes to back up 4 GB. Here are details of 
last night's backups, which are typical:
Level = Incremental
Tape pool (LTO6)
Computed SHA1 signature
From the log file:
09-Dec 01:08 jubjub-sd JobId 570: Job write elapsed time = 00:01:29, Transfer 
rate = 45.95 M Bytes/second
Elapsed time:   3 mins 23 secs
FD Files Written:   71
SD Files Written:   71
FD Bytes Written:   4,089,746,704 (4.089 GB)
SD Bytes Written:   4,089,800,810 (4.089 GB)
Rate:   20146.5 KB/s
Software Compression:   None
Encryption: yes
Accurate:   no

These are all local backups, and the same MySQL database was used for all of 
these backups. For some reason the backups of operating system directories were 
100x slower than backups of imaging data.

Does anyone know what's causing the OS backups to be so slow and what I can do 
to speed them up?

OS: Ubuntu 14.04 LTS
Bacula 5.2.6
16 GB memory, dual quad core, 2.33 GHz

Thanks,
Dave

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula SD Broken Pipe after 16 minutes after ...

2014-08-21 Thread dave
Since there are no solutions coming back from the community, I guess I am the 
only one not overcoming this problem.

But least I have found a workaround.

Since we have increased our spooling cache to a size where a complete full 
backup of the servers fits - and therefore no despooling is needed in the middle of the 
backup - the backup runs fine.
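
For context, the spool limit in question lives in the tape drive's Device resource in bacula-sd.conf. A sketch with illustrative names and sizes; only the two spool directives matter here:

Device {
  Name = tapelib-drive0               # illustrative
  ...                                 # usual tape/autochanger settings unchanged
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 1800 GB        # large enough that a full backup never despools mid-job
}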

For me it is a clear bug somewhere in Bacula, in all versions since 5.2.x.

Reading the error message, it makes some sense now, since the bacula-fd is 
giving the error. Once despooling starts during the backup, the fd continues to 
send data but does not get an answer from the storage daemon, hence giving 
up. There seems to be some kind of communication bug between the client and the 
storage daemon when the despooling process starts.

Still, I am not really happy with my workaround, and I am still hoping that 
someone will chime in with a real solution.

Best Regards
-- David




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula SD Broken Pipe after 16 minutes after ...

2014-07-07 Thread dave
Hi !

I'm back from vacation. :-)

Thanks for your tips. Unfortunately the Heartbeat won't help. I have meanwhile 
upgraded to 7.0.4. I just came into the office to see that the weekend backup failed 
again, with Heartbeat set to 300 on all daemons on the client and server, as 
suggested.
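
For reference, this is roughly where a 300-second heartbeat is configured; a sketch only, with the surrounding resources abbreviated:

# bacula-fd.conf
FileDaemon {
  ...
  Heartbeat Interval = 300
}

# bacula-sd.conf
Storage {
  ...
  Heartbeat Interval = 300
}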

- It just happens when the MaxSpoolCache size gets hit.
- It despools for exactly 16:12 min and then breaks. (What kind of timeout 
would that be?)

Also did:
- Switched the network card on the bacula server
- Removed one LTO drive (running single now)
- Switched SAS port (on the library)

05-Jul 15:42 srv-bacula-dir JobId 9856: Start Backup JobId 9856, 
Job=cli-bacula-data.2014-07-05_15.42.00_34
05-Jul 15:42 srv-bacula-dir JobId 9856: Using Device tapelib-drive0 to write.
05-Jul 15:42 srv-bacula-sd JobId 9856: Spooling data ...
05-Jul 23:10 srv-bacula-sd JobId 9856: User specified Device spool size 
reached: DevSpoolSize=800,000,016,969 MaxDevSpoolSize=800,000,000,000
05-Jul 23:10 srv-bacula-sd JobId 9856: Writing spooled data to Volume. 
Despooling 800,000,016,969 bytes ...
05-Jul 23:26 srv-client-fd JobId 9856: Error: bsock.c:428 Write error sending 
65540 bytes to Storage daemon:srv-bacula:9103: ERR=Broken pipe
05-Jul 23:26 srv-client-fd JobId 9856: Fatal error: backup.c:1200 Network send 
error to SD. ERR=Broken pipe
05-Jul 23:26 srv-bacula-sd JobId 9856: Despooling elapsed time = 00:16:12, 
Transfer rate = 823.0 M Bytes/second
05-Jul 23:26 srv-bacula-dir JobId 9856: Error: Director's connection to SD for 
this Job was lost.
05-Jul 23:26 srv-bacula-dir JobId 9856: Error: Bacula srv-bacula-dir 7.0.4 
(04Jun14):


Again, I am desperate. No clue what else to do to get it running.
- Why is this happening when it starts despooling at the MaxSpoolCache size?
- What does the client have to do with despooling (05-Jul 23:26)? The data is 
already in the cache on the server.
- Why after 16:12 min?

After restarting the job - in 95% of the retries the backup completes.

Many thanks
-- David




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula SD Broken Pipe after 16 minutes after ...

2014-07-06 Thread dave
Hi !

I'm back from vacation. :-)

Thanks for your tips. Unfortunately the Heartbeat won't help. I have meanwhile 
upgraded to 7.0.4. I just came into the office to see that the weekend backup failed 
again, with Heartbeat set to 300 on all daemons on the client and server, as 
suggested.

- It just happens when the MaxSpoolCache size gets hit.
- It despools for exactly 16:12 min and then breaks. (What kind of timeout 
would that be?)

Also did:
- Switched the network card on the bacula server
- Removed one LTO drive (running single now)
- Switched SAS port (on the library)

05-Jul 15:42 srv-vispa-dir JobId 9856: Start Backup JobId 9856, 
Job=cli-bacula-data.2014-07-05_15.42.00_34
05-Jul 15:42 srv-vispa-dir JobId 9856: Using Device tapelib-drive0 to write.
05-Jul 15:42 srv-vispa-sd JobId 9856: Spooling data ...
05-Jul 23:10 srv-vispa-sd JobId 9856: User specified Device spool size reached: 
DevSpoolSize=800,000,016,969 MaxDevSpoolSize=800,000,000,000
05-Jul 23:10 srv-vispa-sd JobId 9856: Writing spooled data to Volume. 
Despooling 800,000,016,969 bytes ...
05-Jul 23:26 srv-emme-fd JobId 9856: Error: bsock.c:428 Write error sending 
65540 bytes to Storage daemon:srv-bacula:9103: ERR=Broken pipe
05-Jul 23:26 srv-emme-fd JobId 9856: Fatal error: backup.c:1200 Network send 
error to SD. ERR=Broken pipe
05-Jul 23:26 srv-vispa-sd JobId 9856: Despooling elapsed time = 00:16:12, 
Transfer rate = 823.0 M Bytes/second
05-Jul 23:26 srv-vispa-dir JobId 9856: Error: Director's connection to SD for 
this Job was lost.
05-Jul 23:26 srv-vispa-dir JobId 9856: Error: Bacula srv-bacula-dir 7.0.4 
(04Jun14):


Again, I am desperate. No clue what else to do to get it running.
- Why just when it starts despooling at the MaxSpoolCache size?
- Why after 16:12 min?

After restarting the job - in 95% of the retries the backup completes.

Many thanks
-- David




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula SD Broken Pipe after 16 minutes after ...

2014-06-01 Thread dave
Hi There,

This problem is driving me nuts. The Bacula server is running Ubuntu 12.04, with 
several clients, mostly Ubuntu.

The problem occurs with 2 clients on full backups: the 800 GB spool cache is reached, 
despooling starts, and after 16:10 mm:ss (+/- 10 sec) a broken pipe happens.
The backup server has a tape library with 2 LTO-4 drives. Local spool cache is 1.8 TB, 
spool cache max 800 GB.

Error: bsock.c:428 Write error sending 65540 bytes to Storage 
daemon:srv-01:9103: ERR=Connection timed out
26-May 22:38 srv-client-fd JobId 9291: Fatal error: backup.c:1200 Network send 
error to SD. ERR=Connection timed out
26-May 22:38 srv-01-sd JobId 9291: Despooling elapsed time = 00:16:10, Transfer 
rate = 82.47 M Bytes/second
26-May 22:38 srv-01-dir JobId 9291: Error: Director's connection to SD for this 
Job was lost.
26-May 22:38 srv-01-dir JobId 9291: Error: Bacula srv-vispa-dir 7.0.3 (12May14):

- reduced spool size to 80GB - then it spools/despools a couple of times and 
fails, BUT always 16 min after despooling starts
- upgraded client and server to bacula 7.0.3 starting from 5.2.10.
- it seems to happen only on clients where the spool cache max size is hit.
- maybe every 2nd time the backup works !
- SD and DIR are on the same machine.
- no firewall in between 
- no router in between (2nd backup network)

Any ideas would be highly appreciated.

thanks
-- dave




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] linux Mint 13 bacula-fd system lockup

2013-12-26 Thread Dave Augustus
My workstation runs Linux Mint 13 (Ubuntu 12.04 LTS) 64-bit and is 
using the Bacula FD 5.2.5-0ubuntu6.2. The rest of my Bacula installation 
is running 5.0.2-1 x86_64 on CentOS 5 or 6.

My director is running CentOS 5 and is also the storage device.

The problem is that all my backups appear to work fine - incremental, 
differential, and full - on all the rest of my machines (CentOS 5 and 6). 
The only failure I am having is full backups of my workstation.

I can restore from an incremental or a differential backup for this host 
without error. The only thing I can't get is a completed full backup! 
Running a full backup manually from my director eventually crashes my 
workstation, with the only solution being a hard reboot.

I appear to have the latest bacula packages for this version of Ubuntu.

Any ideas?

Thanks,
Dave

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Can't find your catalog (MyCatalog)

2010-06-22 Thread Dave Buchanan
Hello ... I am having an issue with the catalog backups failing.

We are running 5.0.3 on Ubuntu Server 10.04 LTS 32-bit, loaded from the repository.
The failure message is:

22-Jun 05:48 server JobId 11: shell command: run BeforeJob
/etc/bacula/scripts/make_catalog_backup.pl MyCatalog
22-Jun 05:48 server JobId 11: BeforeJob: Can't find your catalog (MyCatalog)
in director configuration
22-Jun 05:48 server 11: Error: Runscript: BeforeJob returned non-zero
status=1. ERR=Child exited with code 1

Other jobs are running

My bacula-dir.conf does have the following



# Generic catalog service
Catalog {
  Name = MyCatalog
  dbname = bacula; dbuser = bacula; dbpassword = password; DB Address = 127.0.0.1
}



Netstat -tulpn shows

Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 127.0.0.1:3306     0.0.0.0:*          LISTEN   3501/mysqld
tcp        0      0 10.0.2.16:9101     0.0.0.0:*          LISTEN   1782/bacula-dir
tcp        0      0 10.0.2.16:9102     0.0.0.0:*          LISTEN   13030/bacula-fd
tcp        0      0 0.0.0.0:110        0.0.0.0:*          LISTEN   1282/dovecot
tcp        0      0 10.0.2.16:9103     0.0.0.0:*          LISTEN   13006/bacula-sd
tcp        0      0 0.0.0.0:143        0.0.0.0:*          LISTEN   1282/dovecot
tcp        0      0 0.0.0.0:33839      0.0.0.0:*          LISTEN   -
tcp        0      0 0.0.0.0:111        0.0.0.0:*          LISTEN   702/portmap
tcp        0      0 0.0.0.0:1          0.0.0.0:*          LISTEN   1573/perl
tcp        0      0 0.0.0.0:80         0.0.0.0:*          LISTEN   1482/apache2
[REMAINDER TRUNCATED]

I am able to log in to mysql using the command line and pull the schema

mysql -u root -p bacula -e "show tables;"

+------------------+
| Tables_in_bacula |
+------------------+
| BaseFiles        |
| CDImages         |
| Client           |
| Counters         |
| Device           |
| File             |
| FileSet          |
| Filename         |
| Job              |
| JobHisto         |
| JobMedia         |
| Location         |
| LocationLog      |
| Log              |
| Media            |
| MediaType        |
| Path             |
| PathHierarchy    |
| PathVisibility   |
| Pool             |
| Status           |
| Storage          |
| UnsavedFiles     |
| Version          |
+------------------+

If the command is run manually it does indeed work (permission issue maybe)
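
One way to test the permission theory, as a sketch (the bacula user name and the script path are the Ubuntu package defaults already quoted above; adjust if yours differ):

sudo -u bacula /etc/bacula/scripts/make_catalog_backup.pl MyCatalog

If that fails while the same command succeeds as root, it points at file or database permissions rather than at the director configuration itself.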

I have no good ideas on this one

Thanks

Dave
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Can't find your catalog (MyCatalog)

2010-06-22 Thread Dave Buchanan
On Tue, Jun 22, 2010 at 11:54 AM, John Drescher <dresche...@gmail.com> wrote:

 On Tue, Jun 22, 2010 at 12:47 PM, Dave Buchanan
 alegriatech...@gmail.com wrote:
  Hi John -
 
  yes it does bring up the schema using the bacula account
 

 Can you post your bacula-dir.conf. It appears the problem is not
 database connectivity but something wrong with your configuration
 file.

 John


Here ya go !

#
# Default Bacula Director Configuration file
#
#  The only thing that MUST be changed is to add one or more
#   file or directory names in the Include directive of the
#   FileSet resource.
#
#  For Bacula release 5.0.1 (24 February 2010) -- ubuntu 10.04
#
#  You might also want to change the default email address
#   from root to your address.  See the mail and operator
#   directives in the Messages resource.
#

Director {# define myself
  Name = dfw_backup-dir
  DIRport = 9101# where we listen for UA connections
  QueryFile = /etc/bacula/scripts/query.sql
  WorkingDirectory = /var/lib/bacula
  PidDirectory = /var/run/bacula
  Maximum Concurrent Jobs = 15
  Password = password_for_Dir # Console password
  Messages = Daemon
  DirAddress = 10.0.2.16
}

JobDefs {
  Name = DefaultJob
  Type = Backup
  Level = Incremental
  Client = dfw_backup-fd
  FileSet = "Full Set"
  Schedule = WeeklyCycle
  Storage = File
  Messages = Standard
  Pool = File
  Priority = 10
  Write Bootstrap = /var/lib/bacula/%c.bsr
  Prefer Mounted Volumes = no
}


#
# Define the main nightly save backup job

Job {
  Name = BackupClient1
  JobDefs = DefaultJob
}

#Job {
#  Name = BackupClient2
#  Client = dfw_backup2-fd
#  JobDefs = DefaultJob
#}

# Backup the catalog database (after the nightly save)
Job {
  Name = BackupCatalog
  JobDefs = DefaultJob
  Level = Full
  FileSet=Catalog
  Schedule = WeeklyCycleAfterBackup
  # This creates an ASCII copy of the catalog
  # Arguments to make_catalog_backup.pl are:
  #  make_catalog_backup.pl catalog-name
  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
  # This deletes the copy of the catalog
  RunAfterJob  = /etc/bacula/scripts/delete_catalog_backup
  Write Bootstrap = /var/lib/bacula/%n.bsr
  Priority = 11   # run after main backup
}

#
# Standard Restore template, to be changed by Console program
#  Only one such job is needed for all Jobs/Clients/Storage ...
#
Job {
  Name = RestoreFiles
  Type = Restore
  Client=dfw_backup-fd
  FileSet="Full Set"
  Storage = File
  Pool = Default
  Messages = Standard
  Where = /backup/bacula-restores
}


# List of files to be backed up
FileSet {
  Name = "Full Set"
  Include {
Options {
  signature = MD5
}
#
#  Put your list of files here, preceded by 'File =', one per line
#or include an external list with:
#
#File = file-name
#
#  Note: / backs up everything on the root partition.
#if you have other partitions such as /usr or /home
#you will probably want to add them too.
#
#  By default this is defined to point to the Bacula binary
#directory to give a reasonable FileSet to backup to
#disk storage during initial testing.
#
File = /usr/sbin
  }

#
# If you backup the root directory, the following two excluded
#   files can be useful
#
  Exclude {
File = /var/lib/bacula
File = /backup
File = /proc
File = /tmp
File = /.journal
File = /.fsck
  }
}

#
# When to do the backups, full backup on first sunday of the month,
#  differential (i.e. incremental since full) every other sunday,
#  and incremental backups other days
Schedule {
  Name = WeeklyCycle
  Run = Full 1st sun at 23:05
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sat at 23:05
}

# This schedule does the catalog. It starts after the WeeklyCycle
Schedule {
  Name = WeeklyCycleAfterBackup
  Run = Full sun-sat at 23:10
}

# This is the backup of the catalog
FileSet {
  Name = Catalog
  Include {
Options {
  signature = MD5
}
File = /var/lib/bacula/bacula.sql
  }
}

# Client (File Services) to backup
Client {
  Name = dfw_backup-fd
  Address = 10.0.2.16
  FDPort = 9102
  Catalog = MyCatalog
  Password = password_for_Dir  # password for FileDaemon
  File Retention = 30 days# 30 days
  Job Retention = 6 months# six months
  AutoPrune = yes # Prune expired Jobs/Files
  Maximum Concurrent Jobs = 10

}


# Definition of file storage device
Storage {
  Name = File
# Do not use localhost here
  Address = dfw_backup.dfw.sirific.com    # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = password_for_Dir
  Device = FileStorage
  Media Type = File
}

Storage {
  Name = Autochanger
  Password = password_for_Dir
  Address = dfw_backup.domain
  SDPort = 9103
  Device = Autochanger
  Media Type = LTO-3
  Maximum concurrent jobs = 10
  }

 Storage {
  Name = Drive-0
  Password = password_for_Dir
  Address = dfw_backup.domain
  SDPort = 9103
  Device = Drive-0

Re: [Bacula-users] New To Bacula

2010-05-14 Thread Frandin, Dave
John/Carlo..

Thanks for the reply!!
I found the problem: I'd checked the iptables status on the client node and 
found it was off (INPUT default ACCEPT), but I *didn't* check the Bacula 
server machine. Stopped iptables and now it works fine...
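
For anyone finding this later, a sketch of the CentOS-style commands involved, plus the alternative of leaving the firewall up and opening only the Bacula ports (9101-9103); adapt to your distribution:

# check and disable the firewall entirely (what is described above)
service iptables status
service iptables stop
chkconfig iptables off

# or keep iptables running and just allow the Bacula daemons through
iptables -I INPUT -p tcp --dport 9101:9103 -j ACCEPT
service iptables save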

Thanks again.. I'm betting I'm gonna have a LOT more questions soon!!

Dave

-Original Message-
From: John Drescher [mailto:dresche...@gmail.com] 
Sent: Thursday, May 13, 2010 5:29 PM
To: Frandin, Dave
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] New To Bacula

On Thu, May 13, 2010 at 7:33 PM, Frandin, Dave <dave.fran...@nv.doe.gov> wrote:
 Hello List!

 I'm new to Bacula, we have set up a system to serve as an
 evaluation/test Bacula server (director, storage, file daemons, Bacula
 5.0.1, on CentOS5.4, with a spare SCSI DLT drive). I built the server
 from source using the Traditional Redhat Linux install specs shown
in
 the user manual. The director talks to the local file/storage daemons
 fine, btape was happy with my tape drive, several test jobs were run
 using the FD on the test machine ok. Now my boss wants me to test on
one
 of our production machines with just the FD installed there. I've done
 this, and the director connects to the FD on the production machine
fine
 when I do a status from bconsole, I can do an estimate of the
small
 test job I've set up, and bconsole's estimate shows the amount of
data
 I would expect to see, however... when I try to run the job that I
just
 did the estimate on, the job hangs for 30 min and 1 sec, showing 0 FD
 files written, 0 SD files written, no SD errors, but shows an FD
 termination status of Error and SD termination status Waiting on
FD,
 and Termination *** Backup Error ***. We are evaluating Bacula to
 replace the current Symantec BackupExec/RALUS that we currently use..


 Any assistance would be greatly appreciated..


If you have localhost or 127.0.0.1 in any address for any of the
bacula configuration files remove that. Also is there a firewall that
prevents the FD from initiating a connection to the SD?

John



--

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] New To Bacula

2010-05-13 Thread Frandin, Dave
Hello List!

I'm new to Bacula, we have set up a system to serve as an
evaluation/test Bacula server (director, storage, file daemons, Bacula
5.0.1, on CentOS5.4, with a spare SCSI DLT drive). I built the server
from source using the Traditional Redhat Linux install specs shown in
the user manual. The director talks to the local file/storage daemons
fine, btape was happy with my tape drive, several test jobs were run
using the FD on the test machine ok. Now my boss wants me to test on one
of our production machines with just the FD installed there. I've done
this, and the director connects to the FD on the production machine fine
when I do a status from bconsole, I can do an estimate of the small
test job I've set up, and bconsole's estimate shows the amount of data
I would expect to see, however... when I try to run the job that I just
did the estimate on, the job hangs for 30 min and 1 sec, showing 0 FD
files written, 0 SD files written, no SD errors, but shows an FD
termination status of Error and SD termination status Waiting on FD,
and Termination *** Backup Error ***. We are evaluating Bacula to
replace the current Symantec BackupExec/RALUS that we currently use..


Any assistance would be greatly appreciated..

Dave

Compile Configuration

CFLAGS=-g -Wall ./configure \
--sbindir=/usr/sbin \
--sysconfdir=/etc/bacula \
--with-scriptdir=/etc/bacula \
--enable-smartalloc \
--enable-bat \
--with-mysql \
--with-working-dir=/var/bacula \
--with-pid-dir=/var/run \
--enable-conio


--

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] volume in error after cancelling a job

2010-04-22 Thread Dave Cramer
I get this error after cancelling a job

22-Apr 06:10  JobId 55: Volume TestVolume1 previously written,
moving to end of data.
22-Apr 06:10 vd JobId 55: Error: Bacula cannot write on disk Volume
TestVolume1 because: The sizes do not match! Volume=1749446731
Catalog=1246704716
22-Apr 06:10d JobId 55: Marking Volume TestVolume1 in Error in Catalog


Is there a way to repair the volume ?

Is this a bug ?

Dave

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Trouble creating RPM on Centos 5.4

2010-03-01 Thread Dave Augustus
Hello Listers,

I have been a Bacula user for about 6 months - currently running on version 3.

I am using the SRPM from the website to build BAT and running into a
challenge I can't get past.

The SRPM is from the website: bacula-bat-5.0.1-1.src.rpm

The error is with Qt, and the output is listed below :( . I also have Qt
installed at /usr/local/Trolltech/Qt-4.4.3, but it appears that this SRPM comes with
a newer version. I am also not seeing any conflicts between the SRPM and
my Qt version.

Thanks in advance,
Dave

make[1]: Leaving directory
`/home/rpmbuild/rpm/BUILD/bacula-5.0.1/src/filed'
==Entering
directory /home/rpmbuild/rpm/BUILD/bacula-5.0.1/src/qt-console
make[1]: Entering directory
`/home/rpmbuild/rpm/BUILD/bacula-5.0.1/src/qt-console'
make[1]: *** No rule to make target `all'.  Stop.
make[1]: Leaving directory
`/home/rpmbuild/rpm/BUILD/bacula-5.0.1/src/qt-console'

  == Error in /home/rpmbuild/rpm/BUILD/bacula-5.0.1/src/qt-console
==

AND

+ mkdir -p /home/rpmbuild/rpm/tmp/bacula-bat-root/usr/bin
+ cd src/qt-console
+ make DESTDIR=/home/rpmbuild/rpm/tmp/bacula-bat-root install
make: *** No rule to make target `install'.  Stop.
error: Bad exit status from /home/rpmbuild/rpm/tmp/rpm-tmp.31269 (%
install)

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula wants to use volume that is already in use

2010-02-24 Thread Dave Garbus
Hello List,

I am using Bacula 3.0.3 in conjunction with a 4 drive autochanger and 
Prefer Mounted Volumes set to No. Unfortunately, I have had a number 
of problems with Bacula trying to back up to a volume that is already 
being used for another job. As a result, the job will continue to wait 
for the tape in the other drive to become free, essentially eliminating 
any benefit that the job concurrency might have had.

This problem is occurring on a nightly basis, with some nights being 
worse than others. If I tell Bacula to prefer mounted volumes instead, 
the backups complete just fine, but then I am not taking advantage of 
the large tape library that we have at our disposal.

Here is an example of what I see in the logs. This will repeat until the 
tape is finally freed from the other drive (which can sometimes be hours).

backup.domain.tld-dir JobId 4783: Start Backup JobId 4783, 
Job=jobname.2010-02-24_02.00.00_11
backup.domain.tld-dir JobId 4783: Using Device Drive-5
backup.domain.tld-sd JobId 4783: 3301 Issuing autochanger loaded? drive 
5 command.
backup.domain.tld-sd JobId 4783: 3302 Autochanger loaded? drive 5, 
result: nothing loaded.
backup.domain.tld-sd JobId 4783: Warning: Volume 023071 wanted on 
Drive-5 (/dev/nst3) is in use by device Drive-4 (/dev/nst2)
backup.domain.tld-sd JobId 4783: 3301 Issuing autochanger loaded? drive 
5 command.
backup.domain.tld-sd JobId 4783: 3302 Autochanger loaded? drive 5, 
result: nothing loaded.
backup.domain.tld-sd JobId 4783: Warning: Volume 023071 wanted on 
Drive-5 (/dev/nst3) is in use by device Drive-4 (/dev/nst2)
backup.domain.tld-sd JobId 4783: 3301 Issuing autochanger loaded? drive 
5 command.
backup.domain.tld-sd JobId 4783: 3302 Autochanger loaded? drive 5, 
result: nothing loaded.
backup.domain.tld-sd JobId 4783: 3301 Issuing autochanger loaded? drive 
5 command.
backup.domain.tld-sd JobId 4783: 3302 Autochanger loaded? drive 5, 
result: nothing loaded.
backup.domain.tld-sd JobId 4783: Warning: mount.c:227 Open device 
Drive-5 (/dev/nst3) Volume 023071 failed: ERR=dev.c:491 Unable to 
open device Drive-5 (/dev/nst3): ERR=No medium found

Any help would be greatly appreciated.

-- 
Dave
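
One mitigation that comes up for this on the list (a sketch only, not a
confirmed fix for this setup) is to stop concurrent jobs from ever wanting
the same volume by giving each nightly job stream its own pool; resource
names below are illustrative:

Pool {
  Name = Nightly-A              # one pool per concurrent job stream
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 2 weeks
}

Job {
  Name = client-a-backup        # hypothetical job
  Type = Backup
  Client = client-a-fd
  FileSet = client-a
  Schedule = Nightly
  Storage = Autochanger
  Pool = Nightly-A              # jobs in other streams point at other pools
  Messages = Standard
}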




Re: [Bacula-users] Bacula - ESXi - vmdk hot backup

2010-01-21 Thread DAve
Carlo Filippetto wrote:
 Hi all,
 I would like to back up my virtual machine VMDK on ESXi 3.5.
 Is there a way to do it with the machine on so that I get a consistent backup?
 What kind of change do I have to make?
 
 Otherwise, can I make a hot clone of my VM and back that up?
 

We tried several times to find a way to use Bacula, unsuccessfully. I
finally ended up using a script called ghettoVCB.sh and using scp to
copy the files over to where bacula could get to them.
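
Roughly, the copy step looks like the sketch below; host names and paths are
made up for illustration, and the staging directory is whatever the FileSet
already includes:

  # ghettoVCB.sh has already dumped the VM to a backup datastore on the host
  scp -r root@esxi-host:/vmfs/volumes/backup_ds/myvm /srv/bacula-staging/myvm
  # Bacula then backs up /srv/bacula-staging like any other directory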

If you find a better way I would love to know about it.

DAve


-- 
Posterity, you will know how much it cost the present generation to
preserve your freedom.  I hope you will make good use of it.  If you
do not, I shall repent in heaven that ever I took half the pains to
preserve it. John Adams

http://appleseedinfo.org




Re: [Bacula-users] Error with DR backups

2009-11-24 Thread DAve
DAve wrote:
 Martin Simmons wrote:
 On Mon, 16 Nov 2009 09:29:07 -0500, DAve  said:
 16-Nov 08:25 director-dir: Allied-ex3.2009-11-16_01.00.02 Warning: Error
 updating job record. sql_update.c:194 Update problem: affected_rows=0
 16-Nov 08:25 director-dir: Allied-ex3.2009-11-16_01.00.02 Warning: Error
 getting job record for stats: sql_get.c:293 No Job found for JobId 20947

 I am at a loss to understand why. The volumes can be pruned almost
 immediately as the backup is only for DR purposes and each volume will
 be recycled each night. The only problem I see is that the client is
 paying for 60GB and the backups have begun using more than that amount,
 so volumes are being reused within the current backup.
 That seems a very likely reason, especially if you have set Purge Oldest
 Volume = yes.  When Bacula purges a volume, it removes whole jobs, not just
 the info for that volume.

 __Martin
 
 Added ten more volumes to the pool. Two days complete, no errors...
 
 DAve
 

A week, no errors. Looks like that was the issue. Thank you to everyone
who responded on/off list with ideas.

DAve

-- 
Posterity, you will know how much it cost the present generation to
preserve your freedom.  I hope you will make good use of it.  If you
do not, I shall repent in heaven that ever I took half the pains to
preserve it. John Adams

http://appleseedinfo.org




Re: [Bacula-users] Error with DR backups

2009-11-18 Thread DAve
Martin Simmons wrote:
 On Mon, 16 Nov 2009 09:29:07 -0500, DAve  said:
 16-Nov 08:25 director-dir: Allied-ex3.2009-11-16_01.00.02 Warning: Error
 updating job record. sql_update.c:194 Update problem: affected_rows=0
 16-Nov 08:25 director-dir: Allied-ex3.2009-11-16_01.00.02 Warning: Error
 getting job record for stats: sql_get.c:293 No Job found for JobId 20947

 I am at a loss to understand why. The volumes can be pruned almost
 immediately as the backup is only for DR purposes and each volume will
 be recycled each night. The only problem I see is that the client is
 paying for 60GB and the backups have begun using more than that amount,
 so volumes are being reused within the current backup.
 
 That seems a very likely reason, especially if you have set Purge Oldest
 Volume = yes.  When Bacula purges a volume, it removes whole jobs, not just
 the info for that volume.
 
 __Martin

Added ten more volumes to the pool. Two days complete, no errors...
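
That lines up with Martin's point: at Maximum Volume Bytes = 1G and Maximum
Volumes = 60, anything over about 60 GB forces a recycle mid-run, and purging
a volume purges the whole (still running) job with it. A sketch of the
headroom change, keeping the rest of the Pool resource as posted earlier in
the thread:

Pool {
   Name = ex3-allied-Pool
   Pool Type = Backup
   Maximum Volume Bytes = 1G
   Maximum Volumes = 70          # was 60; leave slack above the client's size
}

# after "reload" in bconsole, run "update" and choose "Pool from resource"
# so the catalog copy of the pool picks up the new limit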

DAve

-- 
Posterity, you will know how much it cost the present generation to
preserve your freedom.  I hope you will make good use of it.  If you
do not, I shall repent in heaven that ever I took half the pains to
preserve it. John Quincy Adams

http://appleseedinfo.org




Re: [Bacula-users] Error with DR backups

2009-11-16 Thread DAve
I have tried several things now to correct this error without success.

- Checked and repaired MySQL db with MySQL tools (mysqlcheck).

- Cleaned MySQL db with bacula tools (dbcheck).

- The offending pool has been removed and a new pool created and volumes
added. The backup ran without error over two weeks and then the error
returned.

- The temp directory has been changed to a directory with more than
enough space to contain the entire backup set.

- The backup job has been moved in the schedule to a period that was not
concurrent with any other job.

Still, the one job that has 200GB+ of files succeeds, and the job with
only 60GB fails, same error.

16-Nov 08:25 director-dir: Allied-ex3.2009-11-16_01.00.02 Warning: Error
updating job record. sql_update.c:194 Update problem: affected_rows=0
16-Nov 08:25 director-dir: Allied-ex3.2009-11-16_01.00.02 Warning: Error
getting job record for stats: sql_get.c:293 No Job found for JobId 20947

I am at a loss to understand why. The volumes can be pruned almost
immediately as the backup is only for DR purposes and each volume will
be recycled each night. The only problem I see is that the client is
paying for 60GB and the backups have begun using more than that amount,
so volumes are being reused within the current backup.

I will next try increasing the number of volumes to see if that helps.

Still open to suggestions, more than happy to repost all files.

DAve

-- 
Posterity, you will know how much it cost the present generation to
preserve your freedom.  I hope you will make good use of it.  If you
do not, I shall repent in heaven that ever I took half the pains to
preserve it. John Quincy Adams

http://appleseedinfo.org




Re: [Bacula-users] Error with DR backups

2009-11-03 Thread DAve
DAve wrote:
 DAve wrote:
 Cedric Tefft wrote:
 DAve wrote:
 No change, same error.

 30-Oct 07:09 director-dir: Allied-ex3.2009-10-30_01.00.02 Warning: Error 
 updating job record. sql_update.c:194 Update problem: affected_rows=0
 30-Oct 07:09 director-dir: Allied-ex3.2009-10-30_01.00.02 Warning: Error 
 getting job record for stats: sql_get.c:293 No Job found for JobId 20604
 30-Oct 07:09 director-dir: Allied-ex3.2009-10-30_01.00.02 Error: Bacula 
 2.0.3 (06Mar07):

 Baffling.
   
 You might be running out of disk space on /tmp or /var or wherever the 
 DB engine writes its temp tables.   The important thing is that you will 
 probably only see this condition WHILE bacula is actually trying to 
 update the database.  If you check it now, you will probably find you 
 have plenty of space -- don't let that fool you.
 Interesting idea. When that client is running I have another very 
 large job that starts before, and ends after.

 I will check that out, though I have never seen this with MySQL.

 DAve


 
 My /tmp dir had but 300mb left. The client that is backing up, when the 
 failing backup starts, has 192GB of data and tens of thousands of files.
 
 I changed the mysql tmp dir to another spindle which has 90gb left. It 
 is also a different spindle than the logs and the database.
 
 We will see at 1am.
 
 DAve
 

No change, same error. So I do not believe it to be a SQL error or an 
error with the bacula DB.

DAve

-- 
Posterity, you will know how much it cost the present generation to
preserve your freedom.  I hope you will make good use of it.  If you
do not, I shall repent in heaven that ever I took half the pains to
preserve it. John Quincy Adams

http://appleseedinfo.org




Re: [Bacula-users] Error with DR backups

2009-10-30 Thread DAve
DAve wrote:

 
 Ran dbcheck, found several paths to correct and also orphaned files.
 dbcheck results, second run, everything looks good.
 
 Select function number: 16
 Checking for Filenames with a trailing slash
 Found 0 bad Filename records.
 Checking for Paths without a trailing slash
 Found 0 bad Path records.
 Checking for duplicate Filename entries.
 Found 0 duplicate Filename records.
 Checking for duplicate Path entries.
 Found 0 duplicate Path records.
 Checking for orphaned JobMedia entries.
 Checking for orphaned File entries. This may take some time!
 Checking for orphaned Path entries. This may take some time!
 Terminated
 
 Reran mysqlcheck as #mysqlcheck -q -r -uroot -p bacula
 Enter password:
 bacula.BaseFiles   OK
 bacula.CDImagesOK
 bacula.Client  OK
 bacula.CountersOK
 bacula.Device  OK
 bacula.FileOK
 bacula.FileSet OK
 bacula.FilenameOK
 bacula.Job OK
 bacula.JobMediaOK
 bacula.LocationOK
 bacula.LocationLog OK
 bacula.Log OK
 bacula.Media   OK
 bacula.MediaType   OK
 bacula.PathOK
 bacula.PoolOK
 bacula.Status  OK
 bacula.Storage OK
 bacula.UnsavedFilesOK
 bacula.Version OK
 
 I will see what happens tonight.
 
 DAve
 
 

No change, same error.

30-Oct 07:09 director-dir: Allied-ex3.2009-10-30_01.00.02 Warning: Error 
updating job record. sql_update.c:194 Update problem: affected_rows=0
30-Oct 07:09 director-dir: Allied-ex3.2009-10-30_01.00.02 Warning: Error 
getting job record for stats: sql_get.c:293 No Job found for JobId 20604
30-Oct 07:09 director-dir: Allied-ex3.2009-10-30_01.00.02 Error: Bacula 
2.0.3 (06Mar07):

Baffling.

DAve

-- 
Posterity, you will know how much it cost the present generation to
preserve your freedom.  I hope you will make good use of it.  If you
do not, I shall repent in heaven that ever I took half the pains to
preserve it. John Quincy Adams

http://appleseedinfo.org




Re: [Bacula-users] Error with DR backups

2009-10-30 Thread DAve
Cedric Tefft wrote:
 DAve wrote:
 No change, same error.

 30-Oct 07:09 director-dir: Allied-ex3.2009-10-30_01.00.02 Warning: Error 
 updating job record. sql_update.c:194 Update problem: affected_rows=0
 30-Oct 07:09 director-dir: Allied-ex3.2009-10-30_01.00.02 Warning: Error 
 getting job record for stats: sql_get.c:293 No Job found for JobId 20604
 30-Oct 07:09 director-dir: Allied-ex3.2009-10-30_01.00.02 Error: Bacula 
 2.0.3 (06Mar07):

 Baffling.
   
 You might be running out of disk space on /tmp or /var or wherever the 
 DB engine writes its temp tables.   The important thing is that you will 
 probably only see this condition WHILE bacula is actually trying to 
 update the database.  If you check it now, you will probably find you 
 have plenty of space -- don't let that fool you.

Interesting idea. When that client is running I have another very 
large job that starts before, and ends after.

I will check that out, though I have never seen this with MySQL.

DAve


-- 
Posterity, you will know how much it cost the present generation to
preserve your freedom.  I hope you will make good use of it.  If you
do not, I shall repent in heaven that ever I took half the pains to
preserve it. John Quincy Adams

http://appleseedinfo.org




Re: [Bacula-users] Error with DR backups

2009-10-30 Thread DAve
DAve wrote:
 Cedric Tefft wrote:
 DAve wrote:
 No change, same error.

 30-Oct 07:09 director-dir: Allied-ex3.2009-10-30_01.00.02 Warning: Error 
 updating job record. sql_update.c:194 Update problem: affected_rows=0
 30-Oct 07:09 director-dir: Allied-ex3.2009-10-30_01.00.02 Warning: Error 
 getting job record for stats: sql_get.c:293 No Job found for JobId 20604
 30-Oct 07:09 director-dir: Allied-ex3.2009-10-30_01.00.02 Error: Bacula 
 2.0.3 (06Mar07):

 Baffling.
   
 You might be running out of disk space on /tmp or /var or wherever the 
 DB engine writes its temp tables.   The important thing is that you will 
 probably only see this condition WHILE bacula is actually trying to 
 update the database.  If you check it now, you will probably find you 
 have plenty of space -- don't let that fool you.
 
 Interesting idea. When that client is running I have another very 
 large job that starts before, and ends after.
 
 I will check that out, though I have never seen this with MySQL.
 
 DAve
 
 

My /tmp dir had only 300 MB left. The client being backed up when the 
failing backup starts has 192 GB of data and tens of thousands of files.

I changed the MySQL tmp dir to another spindle, which has 90 GB free. It 
is also a different spindle from the logs and the database.
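
For reference, the change amounts to something like this (the path is
illustrative; restart mysqld after editing):

  # /etc/my.cnf
  [mysqld]
  tmpdir = /data/mysql-tmp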

We will see at 1am.

DAve

-- 
Posterity, you will know how much it cost the present generation to
preserve your freedom.  I hope you will make good use of it.  If you
do not, I shall repent in heaven that ever I took half the pains to
preserve it. John Quincy Adams

http://appleseedinfo.org




Re: [Bacula-users] Error with DR backups

2009-10-29 Thread DAve
DAve wrote:
 Ryan Novosielski wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 DAve wrote:
 DAve wrote:
 DAve wrote:
 DAve wrote:
 Good afternoon.

 I am having a recurring issue with a backup that is configured for DR 
 purposes. The client purchased a fixed amount of space and wants to 
 overwrite the volumes each night. They have a local backup system in 
 place and we are using Bacula to get those backups offsite for the 
 evening only. I setup Bacula to use a number of volumes of fixed size, 
 and the volumes are written over each night.

 Everything worked fine for a period and then began producing an error. 
 There have been days when the error does not occur and I can see nothing 
 different.

 I am putting the client's config below and the larger backup output and 
 media list online at these URLs.

 Job Output
 http://pixelhammer.com/Backup-allied-ex3-fd%20Full.txt

 bconsole media list
 http://pixelhammer.com/allied-media.txt

 The error I am seeing,
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: Error 
 updating job record. sql_update.c:194 Update problem: affected_rows=0
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: Error 
 getting job record for stats: sql_get.c:293 No Job found for JobId 20126
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Error: Bacula 
 2.0.3 (06Mar07): 05-Oct-2009 08:38:53

 The client config,
 Job {
Name = Allied-ex3
FileSet = Allied-ex3
Write Bootstrap = /data/backups/Allied-ex3.bsr
Type = Backup
Level = Full
Client = allied-ex3-fd
Schedule = Allied-ex3
Storage = storage2-allied-ex3
Messages = Allied
Pool = ex3-allied-Pool
Priority = 10
#Enabled = No
}

 FileSet {
Name = Allied-ex3
Enable VSS = no
Include {
Options {
  #compression = gzip
  IgnoreCase = yes
 }
File = D:/archivesink/
}

Exclude {
}
 }

 Schedule {
Name = Allied-ex3
Run = Level=Full FullPool=ex3-allied-Pool mon-sun at 01:00
}

 Client {
Name = allied-ex3-fd
Address = xxx.xxx.105.12
FDPort = 49202
Catalog = DataVault
Password = xx
File Retention = 1 week
Job Retention = 1 week
AutoPrune = yes
}

 Storage {
Name = storage2-allied-ex3
Address = xxx.tls.net
SDPort = 49022
Password = xx
Device = FileStorage-allied-ex3
Media Type = File
}

 Pool {
Name = ex3-allied-Pool
Pool Type = Backup
LabelFormat = ex3-allied-
Recycle = yes
Recycle Oldest Volume = yes
Purge Oldest Volume = yes
Volume Retention = 12 hours
Maximum Volumes = 60
Maximum Volume Jobs = 0
Maximum Volume Bytes = 1G
}

 I am reasonably certain the problem is PEBKAC and my understanding of 
 pruning and retention. I cannot see where I have gone wrong.

 Thanks,

 DAve
 Hmmm, I have a second client configured in the same manner. The only 
 difference is that the second client has 240 1gb volumes instead of 60 
 1gb volumes. The configs are identical and the larger client has no 
 issues. Both backup jobs start and finish within 10 minutes of each 
 other, yet the smaller backup has its job purged and the larger backup 
 does not.

 Still digging.

 DAve

 Changed the pool resource to not autoprune and the error was the same 
 last night.

 Pool {
Name = ex3-allied-Pool
Pool Type = Backup
LabelFormat = ex3-allied-
Recycle = yes
Recycle Oldest Volume = yes
Purge Oldest Volume = yes
AutoPrune = no
Volume Retention = 12 hours
Maximum Volumes = 60
Maximum Volume Jobs = 0
Maximum Volume Bytes = 1G
}

 The larger client mentioned above, again, no problems. If I have Job 
 Retention = 1 week then why is my current job not found in the catalog?

  From the manual,

 Job Retention = time-period-specification The Job Retention directive
 defines the length of time that Bacula will keep Job records in
 the Catalog database after the Job End time. When this time period
 expires, and if AutoPrune is set to yes Bacula will prune (remove)
 Job records that are older than the specified File Retention period.
 As with the other retention periods, this affects only records in the
 catalog and not data in your archive backup.

 And the error clearly states No Job found for JobId 20126, when the 
 job is still running.

 the only mention I ever seem to find of this error is a recent post by 
 Joshua J. Kugler, with no solution other than his issue went away and he 
 will keep an eye on it until it returns.

 DAve

 I find nothing different between the two configs that would explain why 
 one works and the other does not. So I created a new media pool called 
 exch3-allied-Pool, changed the name of the pool resource to the new name 
 in the director config, added volumes same as before, and now I have 
 been running without error since the 7th.

 I am at a loss to understand why. I need to go into SQL

Re: [Bacula-users] Error with DR backups

2009-10-29 Thread DAve
Ryan Novosielski wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 DAve wrote:
 DAve wrote:
 Ryan Novosielski wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 DAve wrote:
 DAve wrote:
 DAve wrote:
 DAve wrote:
 Good afternoon.

 I am having a recurring issue with a backup that is configured for DR 
 purposes. The client purchased a fixed amount of space and wants to 
 overwrite the volumes each night. They have a local backup system in 
 place and we are using Bacula to get those backups offsite for the 
 evening only. I setup Bacula to use a number of volumes of fixed size, 
 and the volumes are written over each night.

 Everything worked fine for a period and then began producing an error. 
 There have been days when the error does not occur and I can see 
 nothing 
 different.

 I am putting the client's config below and the larger backup output 
 and 
 media list online at these URLs.

 Job Output
 http://pixelhammer.com/Backup-allied-ex3-fd%20Full.txt

 bconsole media list
 http://pixelhammer.com/allied-media.txt

 The error I am seeing,
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: 
 Error 
 updating job record. sql_update.c:194 Update problem: affected_rows=0
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: 
 Error 
 getting job record for stats: sql_get.c:293 No Job found for JobId 
 20126
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Error: 
 Bacula 
 2.0.3 (06Mar07): 05-Oct-2009 08:38:53

 The client config,
 Job {
Name = Allied-ex3
FileSet = Allied-ex3
Write Bootstrap = /data/backups/Allied-ex3.bsr
Type = Backup
Level = Full
Client = allied-ex3-fd
Schedule = Allied-ex3
Storage = storage2-allied-ex3
Messages = Allied
Pool = ex3-allied-Pool
Priority = 10
#Enabled = No
}

 FileSet {
Name = Allied-ex3
Enable VSS = no
Include {
Options {
  #compression = gzip
  IgnoreCase = yes
 }
File = D:/archivesink/
}

Exclude {
}
 }

 Schedule {
Name = Allied-ex3
Run = Level=Full FullPool=ex3-allied-Pool mon-sun at 01:00
}

 Client {
Name = allied-ex3-fd
Address = xxx.xxx.105.12
FDPort = 49202
Catalog = DataVault
Password = xx
File Retention = 1 week
Job Retention = 1 week
AutoPrune = yes
}

 Storage {
Name = storage2-allied-ex3
Address = xxx.tls.net
SDPort = 49022
Password = xx
Device = FileStorage-allied-ex3
Media Type = File
}

 Pool {
Name = ex3-allied-Pool
Pool Type = Backup
LabelFormat = ex3-allied-
Recycle = yes
Recycle Oldest Volume = yes
Purge Oldest Volume = yes
Volume Retention = 12 hours
Maximum Volumes = 60
Maximum Volume Jobs = 0
Maximum Volume Bytes = 1G
}

 I am reasonably certain the problem is PEBKAC and my understanding of 
 pruning and retention. I cannot see where I have gone wrong.

 Thanks,

 DAve
 Hmmm, I have a second client configured in the same manner. The only 
 difference is that the second client has 240 1gb volumes instead of 60 
 1gb volumes. The configs are identical and the larger client has no 
 issues. Both backup jobs start and finish within 10 minutes of each 
 other, yet the smaller backup has its job purged and the larger backup 
 does not.

 Still digging.

 DAve

 Changed the pool resource to not autoprune and the error was the same 
 last night.

 Pool {
Name = ex3-allied-Pool
Pool Type = Backup
LabelFormat = ex3-allied-
Recycle = yes
Recycle Oldest Volume = yes
Purge Oldest Volume = yes
AutoPrune = no
Volume Retention = 12 hours
Maximum Volumes = 60
Maximum Volume Jobs = 0
Maximum Volume Bytes = 1G
}

 The larger client mentioned above, again, no problems. If I have Job 
 Retention = 1 week then why is my current job not found in the catalog?

  From the manual,

 Job Retention = time-period-specification The Job Retention directive
 defines the length of time that Bacula will keep Job records in
 the Catalog database after the Job End time. When this time period
 expires, and if AutoPrune is set to yes Bacula will prune (remove)
 Job records that are older than the specified File Retention period.
 As with the other retention periods, this affects only records in the
 catalog and not data in your archive backup.

 And the error clearly states No Job found for JobId 20126, when the 
 job is still running.

 the only mention I ever seem to find of this error is a recent post by 
 Joshua J. Kugler, with no solution other than his issue went away and he 
 will keep an eye on it until it returns.

 DAve

 I find nothing different between the two configs that would explain why 
 one works and the other does not. So I created a new media pool called 
 exch3-allied-Pool, changed the name of the pool resource to the new name 
 in the director config, added volumes same as before, and now I have

Re: [Bacula-users] Error with DR backups

2009-10-29 Thread DAve
DAve wrote:
 Ryan Novosielski wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 DAve wrote:
 DAve wrote:
 Ryan Novosielski wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 DAve wrote:
 DAve wrote:
 DAve wrote:
 DAve wrote:
 Good afternoon.

 I am having a recurring issue with a backup that is configured for DR 
 purposes. The client purchased a fixed amount of space and wants to 
 overwrite the volumes each night. They have a local backup system in 
 place and we are using Bacula to get those backups offsite for the 
 evening only. I setup Bacula to use a number of volumes of fixed 
 size, 
 and the volumes are written over each night.

 Everything worked fine for a period and then began producing an 
 error. 
 There have been days when the error does not occur and I can see 
 nothing 
 different.

 I am putting the client's config below and the larger backup output 
 and 
 media list online at these URLs.

 Job Output
 http://pixelhammer.com/Backup-allied-ex3-fd%20Full.txt

 bconsole media list
 http://pixelhammer.com/allied-media.txt

 The error I am seeing,
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: 
 Error 
 updating job record. sql_update.c:194 Update problem: affected_rows=0
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: 
 Error 
 getting job record for stats: sql_get.c:293 No Job found for JobId 
 20126
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Error: 
 Bacula 
 2.0.3 (06Mar07): 05-Oct-2009 08:38:53

 The client config,
 Job {
Name = Allied-ex3
FileSet = Allied-ex3
Write Bootstrap = /data/backups/Allied-ex3.bsr
Type = Backup
Level = Full
Client = allied-ex3-fd
Schedule = Allied-ex3
Storage = storage2-allied-ex3
Messages = Allied
Pool = ex3-allied-Pool
Priority = 10
#Enabled = No
}

 FileSet {
Name = Allied-ex3
Enable VSS = no
Include {
Options {
  #compression = gzip
  IgnoreCase = yes
 }
File = D:/archivesink/
}

Exclude {
}
 }

 Schedule {
Name = Allied-ex3
Run = Level=Full FullPool=ex3-allied-Pool mon-sun at 01:00
}

 Client {
Name = allied-ex3-fd
Address = xxx.xxx.105.12
FDPort = 49202
Catalog = DataVault
Password = xx
File Retention = 1 week
Job Retention = 1 week
AutoPrune = yes
}

 Storage {
Name = storage2-allied-ex3
Address = xxx.tls.net
SDPort = 49022
Password = xx
Device = FileStorage-allied-ex3
Media Type = File
}

 Pool {
Name = ex3-allied-Pool
Pool Type = Backup
LabelFormat = ex3-allied-
Recycle = yes
Recycle Oldest Volume = yes
Purge Oldest Volume = yes
Volume Retention = 12 hours
Maximum Volumes = 60
Maximum Volume Jobs = 0
Maximum Volume Bytes = 1G
}

 I am reasonably certain the problem is PEBKAC and my understanding of 
 pruning and retention. I cannot see where I have gone wrong.

 Thanks,

 DAve
 Hmmm, I have a second client configured in the same manner. The only 
 difference is that the second client has 240 1gb volumes instead of 60 
 1gb volumes. The configs are identical and the larger client has no 
 issues. Both backup jobs start and finish within 10 minutes of each 
 other, yet the smaller backup has its job purged and the larger 
 backup 
 does not.

 Still digging.

 DAve

 Changed the pool resource to not autoprune and the error was the same 
 last night.

 Pool {
Name = ex3-allied-Pool
Pool Type = Backup
LabelFormat = ex3-allied-
Recycle = yes
Recycle Oldest Volume = yes
Purge Oldest Volume = yes
AutoPrune = no
Volume Retention = 12 hours
Maximum Volumes = 60
Maximum Volume Jobs = 0
Maximum Volume Bytes = 1G
}

 The larger client mentioned above, again, no problems. If I have Job 
 Retention = 1 week then why is my current job not found in the catalog?

  From the manual,

 Job Retention = time-period-specification The Job Retention directive
 defines the length of time that Bacula will keep Job records in
 the Catalog database after the Job End time. When this time period
 expires, and if AutoPrune is set to yes Bacula will prune (remove)
 Job records that are older than the specified File Retention period.
 As with the other retention periods, this affects only records in the
 catalog and not data in your archive backup.

 And the error clearly states No Job found for JobId 20126, when the 
 job is still running.

 the only mention I ever seem to find of this error is a recent post by 
 Joshua J. Kugler, with no solution other than his issue went away and 
 he 
 will keep an eye on it until it returns.

 DAve

 I find nothing different between the two configs that would explain why 
 one works and the other does not. So I created a new media pool called 
 exch3-allied-Pool, changed the name of the pool resource to the new name 
 in the director config, added volumes same as before

Re: [Bacula-users] Error with DR backups

2009-10-12 Thread DAve
DAve wrote:
 DAve wrote:
 DAve wrote:
 Good afternoon.

 I am having a recurring issue with a backup that is configured for DR 
 purposes. The client purchased a fixed amount of space and wants to 
 overwrite the volumes each night. They have a local backup system in 
 place and we are using Bacula to get those backups offsite for the 
 evening only. I setup Bacula to use a number of volumes of fixed size, 
 and the volumes are written over each night.

 Everything worked fine for a period and then began producing an error. 
 There have been days when the error does not occur and I can see nothing 
 different.

 I am putting the client's config below and the larger backup output and 
 media list online at these URLs.

 Job Output
 http://pixelhammer.com/Backup-allied-ex3-fd%20Full.txt

 bconsole media list
 http://pixelhammer.com/allied-media.txt

 The error I am seeing,
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: Error 
 updating job record. sql_update.c:194 Update problem: affected_rows=0
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: Error 
 getting job record for stats: sql_get.c:293 No Job found for JobId 20126
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Error: Bacula 
 2.0.3 (06Mar07): 05-Oct-2009 08:38:53

 The client config,
 Job {
Name = Allied-ex3
FileSet = Allied-ex3
Write Bootstrap = /data/backups/Allied-ex3.bsr
Type = Backup
Level = Full
Client = allied-ex3-fd
Schedule = Allied-ex3
Storage = storage2-allied-ex3
Messages = Allied
Pool = ex3-allied-Pool
Priority = 10
#Enabled = No
}

 FileSet {
Name = Allied-ex3
Enable VSS = no
Include {
Options {
  #compression = gzip
  IgnoreCase = yes
 }
File = D:/archivesink/
}

Exclude {
}
 }

 Schedule {
Name = Allied-ex3
Run = Level=Full FullPool=ex3-allied-Pool mon-sun at 01:00
}

 Client {
Name = allied-ex3-fd
Address = xxx.xxx.105.12
FDPort = 49202
Catalog = DataVault
Password = xx
File Retention = 1 week
Job Retention = 1 week
AutoPrune = yes
}

 Storage {
Name = storage2-allied-ex3
Address = xxx.tls.net
SDPort = 49022
Password = xx
Device = FileStorage-allied-ex3
Media Type = File
}

 Pool {
Name = ex3-allied-Pool
Pool Type = Backup
LabelFormat = ex3-allied-
Recycle = yes
Recycle Oldest Volume = yes
Purge Oldest Volume = yes
Volume Retention = 12 hours
Maximum Volumes = 60
Maximum Volume Jobs = 0
Maximum Volume Bytes = 1G
}

 I am reasonably certain the problem is PEBKAC and my understanding of 
 pruning and retention. I cannot see where I have gone wrong.

 Thanks,

 DAve
 Hmmm, I have a second client configured in the same manner. The only 
 difference is that the second client has 240 1gb volumes instead of 60 
 1gb volumes. The configs are identical and the larger client has no 
 issues. Both backup jobs start and finish within 10 minutes of each 
 other, yet the smaller backup has its job purged and the larger backup 
 does not.

 Still digging.

 DAve

 
 Changed the pool resource to not autoprune and the error was the same 
 last night.
 
 Pool {
Name = ex3-allied-Pool
Pool Type = Backup
LabelFormat = ex3-allied-
Recycle = yes
Recycle Oldest Volume = yes
Purge Oldest Volume = yes
AutoPrune = no
Volume Retention = 12 hours
Maximum Volumes = 60
Maximum Volume Jobs = 0
Maximum Volume Bytes = 1G
}
 
 The larger client mentioned above, again, no problems. If I have Job 
 Retention = 1 week then why is my current job not found in the catalog?
 
  From the manual,
 
 Job Retention = time-period-specification The Job Retention directive
 defines the length of time that Bacula will keep Job records in
 the Catalog database after the Job End time. When this time period
 expires, and if AutoPrune is set to yes Bacula will prune (remove)
 Job records that are older than the specified File Retention period.
 As with the other retention periods, this affects only records in the
 catalog and not data in your archive backup.
 
 And the error clearly states No Job found for JobId 20126, when the 
 job is still running.
 
 the only mention I ever seem to find of this error is a recent post by 
 Joshua J. Kugler, with no solution other than his issue went away and he 
 will keep an eye on it until it returns.
 
 DAve
 

I find nothing different between the two configs that would explain why 
one works and the other does not. So I created a new media pool called 
exch3-allied-Pool, changed the name of the pool resource to the new name 
in the director config, added volumes same as before, and now I have 
been running without error since the 7th.

I am at a loss to understand why. I need to go into SQL and see what is 
different as doing list media pool=exch3-allied-Pool shows no 
difference

Re: [Bacula-users] Error with DR backups

2009-10-07 Thread DAve
DAve wrote:
 DAve wrote:
 Good afternoon.

 I am having a recurring issue with a backup that is configured for DR 
 purposes. The client purchased a fixed amount of space and wants to 
 overwrite the volumes each night. They have a local backup system in 
 place and we are using Bacula to get those backups offsite for the 
 evening only. I setup Bacula to use a number of volumes of fixed size, 
 and the volumes are written over each night.

 Everything worked fine for a period and then began producing an error. 
 There have been days when the error does not occur and I can see nothing 
 different.

 I am putting the client's config below and the larger backup output and 
 media list online at these URLs.

 Job Output
 http://pixelhammer.com/Backup-allied-ex3-fd%20Full.txt

 bconsole media list
 http://pixelhammer.com/allied-media.txt

 The error I am seeing,
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: Error 
 updating job record. sql_update.c:194 Update problem: affected_rows=0
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: Error 
 getting job record for stats: sql_get.c:293 No Job found for JobId 20126
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Error: Bacula 
 2.0.3 (06Mar07): 05-Oct-2009 08:38:53

 The client config,
 Job {
Name = Allied-ex3
FileSet = Allied-ex3
Write Bootstrap = /data/backups/Allied-ex3.bsr
Type = Backup
Level = Full
Client = allied-ex3-fd
Schedule = Allied-ex3
Storage = storage2-allied-ex3
Messages = Allied
Pool = ex3-allied-Pool
Priority = 10
#Enabled = No
}

 FileSet {
Name = Allied-ex3
Enable VSS = no
Include {
Options {
  #compression = gzip
  IgnoreCase = yes
 }
File = D:/archivesink/
}

Exclude {
}
 }

 Schedule {
Name = Allied-ex3
Run = Level=Full FullPool=ex3-allied-Pool mon-sun at 01:00
}

 Client {
Name = allied-ex3-fd
Address = xxx.xxx.105.12
FDPort = 49202
Catalog = DataVault
Password = xx
File Retention = 1 week
Job Retention = 1 week
AutoPrune = yes
}

 Storage {
Name = storage2-allied-ex3
Address = xxx.tls.net
SDPort = 49022
Password = xx
Device = FileStorage-allied-ex3
Media Type = File
}

 Pool {
Name = ex3-allied-Pool
Pool Type = Backup
LabelFormat = ex3-allied-
Recycle = yes
Recycle Oldest Volume = yes
Purge Oldest Volume = yes
Volume Retention = 12 hours
Maximum Volumes = 60
Maximum Volume Jobs = 0
Maximum Volume Bytes = 1G
}

 I am reasonably certain the problem is PEBKAC and my understanding of 
 pruning and retention. I cannot see where I have gone wrong.

 Thanks,

 DAve
 
 Hmmm, I have a second client configured in the same manner. The only 
 difference is that the second client has 240 1gb volumes instead of 60 
 1gb volumes. The configs are identical and the larger client has no 
 issues. Both backup jobs start and finish within 10 minutes of each 
 other, yet the smaller backup has its job purged and the larger backup 
 does not.
 
 Still digging.
 
 DAve
 

Changed the pool resource to not autoprune and the error was the same 
last night.

Pool {
   Name = ex3-allied-Pool
   Pool Type = Backup
   LabelFormat = ex3-allied-
   Recycle = yes
   Recycle Oldest Volume = yes
   Purge Oldest Volume = yes
   AutoPrune = no
   Volume Retention = 12 hours
   Maximum Volumes = 60
   Maximum Volume Jobs = 0
   Maximum Volume Bytes = 1G
   }

The larger client mentioned above, again, no problems. If I have Job 
Retention = 1 week then why is my current job not found in the catalog?

 From the manual,

Job Retention = time-period-specification The Job Retention directive
defines the length of time that Bacula will keep Job records in
the Catalog database after the Job End time. When this time period
expires, and if AutoPrune is set to yes Bacula will prune (remove)
Job records that are older than the specified File Retention period.
As with the other retention periods, this affects only records in the
catalog and not data in your archive backup.

And the error clearly states No Job found for JobId 20126, when the 
job is still running.

the only mention I ever seem to find of this error is a recent post by 
Joshua J. Kugler, with no solution other than his issue went away and he 
will keep an eye on it until it returns.

DAve

-- 
Posterity, you will know how much it cost the present generation to
preserve your freedom.  I hope you will make good use of it.  If you
do not, I shall repent in heaven that ever I took half the pains to
preserve it. John Quincy Adams

http://appleseedinfo.org



Re: [Bacula-users] Error with DR backups

2009-10-06 Thread DAve
DAve wrote:
 Good afternoon.
 
 I am having a recurring issue with a backup that is configured for DR 
 purposes. The client purchased a fixed amount of space and wants to 
 overwrite the volumes each night. They have a local backup system in 
 place and we are using Bacula to get those backups offsite for the 
 evening only. I setup Bacula to use a number of volumes of fixed size, 
 and the volumes are written over each night.
 
 Everything worked fine for a period and then began producing an error. 
 There have been days when the error does not occur and I can see nothing 
 different.
 
 I am putting the client's config below and the larger backup output and 
 media list online at these URLs.
 
 Job Output
 http://pixelhammer.com/Backup-allied-ex3-fd%20Full.txt
 
 bconsole media list
 http://pixelhammer.com/allied-media.txt
 
 The error I am seeing,
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: Error 
 updating job record. sql_update.c:194 Update problem: affected_rows=0
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: Error 
 getting job record for stats: sql_get.c:293 No Job found for JobId 20126
 05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Error: Bacula 
 2.0.3 (06Mar07): 05-Oct-2009 08:38:53
 
 The client config,
 Job {
Name = Allied-ex3
FileSet = Allied-ex3
Write Bootstrap = /data/backups/Allied-ex3.bsr
Type = Backup
Level = Full
Client = allied-ex3-fd
Schedule = Allied-ex3
Storage = storage2-allied-ex3
Messages = Allied
Pool = ex3-allied-Pool
Priority = 10
#Enabled = No
}
 
 FileSet {
Name = Allied-ex3
Enable VSS = no
Include {
Options {
  #compression = gzip
  IgnoreCase = yes
 }
File = D:/archivesink/
}
 
Exclude {
}
 }
 
 Schedule {
Name = Allied-ex3
Run = Level=Full FullPool=ex3-allied-Pool mon-sun at 01:00
}
 
 Client {
Name = allied-ex3-fd
Address = xxx.xxx.105.12
FDPort = 49202
Catalog = DataVault
Password = xx
File Retention = 1 week
Job Retention = 1 week
AutoPrune = yes
}
 
 Storage {
Name = storage2-allied-ex3
Address = xxx.tls.net
SDPort = 49022
Password = xx
Device = FileStorage-allied-ex3
Media Type = File
}
 
 Pool {
Name = ex3-allied-Pool
Pool Type = Backup
LabelFormat = ex3-allied-
Recycle = yes
Recycle Oldest Volume = yes
Purge Oldest Volume = yes
Volume Retention = 12 hours
Maximum Volumes = 60
Maximum Volume Jobs = 0
Maximum Volume Bytes = 1G
}
 
 I am reasonably certain the problem is PEBKAC and my understanding of 
 pruning and retention. I cannot see where I have gone wrong.
 
 Thanks,
 
 DAve

Hmmm, I have a second client configured in the same manner. The only 
difference is that the second client has 240 1gb volumes instead of 60 
1gb volumes. The configs are identical and the larger client has no 
issues. Both backup jobs start and finish within 10 minutes of each 
other, yet the smaller backup has its job purged and the larger backup 
does not.

Still digging.

DAve

-- 
Posterity, you will know how much it cost the present generation to
preserve your freedom.  I hope you will make good use of it.  If you
do not, I shall repent in heaven that ever I took half the pains to
preserve it. John Quincy Adams

http://appleseedinfo.org




[Bacula-users] Error with DR backups

2009-10-05 Thread DAve
Good afternoon.

I am having a recurring issue with a backup that is configured for DR 
purposes. The client purchased a fixed amount of space and wants to 
overwrite the volumes each night. They have a local backup system in 
place and we are using Bacula to get those backups offsite for the 
evening only. I setup Bacula to use a number of volumes of fixed size, 
and the volumes are written over each night.

Everything worked fine for a period and then began producing an error. 
There have been days when the error does not occur and I can see nothing 
different.

I am putting the client's config below and the larger backup output and 
media list online at these URLs.

Job Output
http://pixelhammer.com/Backup-allied-ex3-fd%20Full.txt

bconsole media list
http://pixelhammer.com/allied-media.txt

The error I am seeing,
05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: Error 
updating job record. sql_update.c:194 Update problem: affected_rows=0
05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Warning: Error 
getting job record for stats: sql_get.c:293 No Job found for JobId 20126
05-Oct 08:38 director-dir: Allied-ex3.2009-10-05_01.00.02 Error: Bacula 
2.0.3 (06Mar07): 05-Oct-2009 08:38:53

The client config,
Job {
   Name = Allied-ex3
   FileSet = Allied-ex3
   Write Bootstrap = /data/backups/Allied-ex3.bsr
   Type = Backup
   Level = Full
   Client = allied-ex3-fd
   Schedule = Allied-ex3
   Storage = storage2-allied-ex3
   Messages = Allied
   Pool = ex3-allied-Pool
   Priority = 10
   #Enabled = No
   }

FileSet {
   Name = Allied-ex3
   Enable VSS = no
   Include {
   Options {
 #compression = gzip
 IgnoreCase = yes
}
   File = D:/archivesink/
   }

   Exclude {
   }
}

Schedule {
   Name = Allied-ex3
   Run = Level=Full FullPool=ex3-allied-Pool mon-sun at 01:00
   }

Client {
   Name = allied-ex3-fd
   Address = xxx.xxx.105.12
   FDPort = 49202
   Catalog = DataVault
   Password = xx
   File Retention = 1 week
   Job Retention = 1 week
   AutoPrune = yes
   }

Storage {
   Name = storage2-allied-ex3
   Address = xxx.tls.net
   SDPort = 49022
   Password = xx
   Device = FileStorage-allied-ex3
   Media Type = File
   }

Pool {
   Name = ex3-allied-Pool
   Pool Type = Backup
   LabelFormat = ex3-allied-
   Recycle = yes
   Recycle Oldest Volume = yes
   Purge Oldest Volume = yes
   Volume Retention = 12 hours
   Maximum Volumes = 60
   Maximum Volume Jobs = 0
   Maximum Volume Bytes = 1G
   }

I am reasonably certain the problem is PEBKAC and my understanding of 
pruning and retention. I cannot see where I have gone wrong.

Thanks,

DAve
-- 
Posterity, you will know how much it cost the present generation to
preserve your freedom.  I hope you will make good use of it.  If you
do not, I shall repent in heaven that ever I took half the pains to
preserve it. John Quincy Adams

http://appleseedinfo.org




[Bacula-users] bacula log rotation on FreeBSD?

2009-07-05 Thread Dave
Hello,
I'm wondering what others use for rotating their Bacula logs on
FreeBSD? I'm referring to the /var/db/bacula/log file.
Thanks.
Dave.
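
One approach on FreeBSD is a newsyslog entry; the line below is a sketch
(weekly rotation, 8 compressed copies kept). Whether bacula-dir keeps writing
to the new file afterwards depends on how the Messages resource appends to
it, so verify before relying on it:

  # /etc/newsyslog.conf
  /var/db/bacula/log      644  8     *    $W0D0 J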




[Bacula-users] bacula user web interface

2009-07-02 Thread Dave
Hello,
I had a quick question. I was wondering if there was a bacula web
interface for users that would allow them to recover deleted files from only
their own backup sets and only on the computers the sets were taken on?
Thanks.
Dave.




[Bacula-users] windows backup and recovery

2009-07-02 Thread Dave
Hello,
I've got Bacula 3.0 installed on a FreeBSD machine with MySQL set
up. I want to use it to back up an XP machine completely. I've got VSS going,
but am wondering about the system state. My file definition is C:/, and I'm
assuming that with VSS it'll get everything? My purpose is to take a complete
backup of this machine, remove the hard disk, install a larger one,
and restore the backup.
I was also wondering about a recovery CD, if one is still out
there?
Thanks.
Dave.
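
For what it's worth, a sketch of the FileSet side of that (directive names
are real, values illustrative); VSS snapshots open files, but whether C:/
plus VSS is enough to rebuild a bootable system is exactly the open question
here:

FileSet {
  Name = "xp-full"
  Enable VSS = yes
  Include {
    Options {
      signature = MD5
      compression = GZIP
    }
    File = "C:/"
  }
}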




[Bacula-users] xp recovery to larger hard disk

2009-06-01 Thread Dave
Hello,
I'm back after a year's absence. I have Bacula 2.2.8 installed on a
FreeBSD machine. I want first to upgrade, then to image an XP SP2 machine. I
want to back the system up completely, using VSS, ensuring I get the
system state; then drop in a new, larger hard disk, boot from
some rescue CD (probably one I'll have to make), load the client,
and recover the backup to the new disk. A reboot should bring Windows up on
the new disk. Has anyone done this or something similar?
Thanks.
Dave.


David Mehler (MCSA 2003, MCP, Network+, A+)
dave.meh...@gmail.com




[Bacula-users] Backup errors, No Job Found

2009-01-05 Thread DAve
Good morning. I have begun getting an error I cannot seem to track down. 
It seems that the job runs fine, the volumes fill, but when the job 
finishes Bacula is unable to write the completion because the job no 
longer exists. It looks to me like Bacula is purging the jobs too soon.

The error I get in the backup report is as follows,

05-Jan 05:57 storage2-sd: Job write elapsed time = 04:56:31, Transfer 
rate = 3.761 M bytes/second
05-Jan 05:57 director-dir: Allied-ex3.2009-01-05_01.00.01 Warning: Error 
updating job record. sql_update.c:194 Update problem: affected_rows=0
05-Jan 05:57 director-dir: Allied-ex3.2009-01-05_01.00.01 Warning: Error 
getting job record for stats: sql_get.c:293 No Job found for JobId 15303

This client gets a full backup once a night for DR. Retention is set for 
12 hours. Oddly I have two jobs running for this client and one job 
completes fine, the other fails.

Looking through the archives I found a few bits of information 
concerning MySQL and a few bits concerning retention times and auto 
pruning. I suspect I have an issues with how I setup auto pruning but 
the solution eludes me. That might have something to do with 2000mg of 
antibiotics a day and screaming sinuses 8^0

As to MySQL, I checked the table status and all looks well.
mysql> SHOW TABLE STATUS FROM bacula LIKE 'File' \G
*** 1. row ***
Name: File
  Engine: MyISAM
 Version: 10
  Row_format: Dynamic
Rows: 2749308
  Avg_row_length: 84
 Data_length: 316518320
Max_data_length: 281474976710655
Index_length: 160896000
   Data_free: 82983144
  Auto_increment: 108368578
 Create_time: 2008-03-04 16:01:52
 Update_time: 2009-01-05 11:41:24
  Check_time: 2009-01-01 13:35:43
   Collation: latin1_swedish_ci
Checksum: NULL
  Create_options:
 Comment:
1 row in set (0.00 sec)

Checking my director config looks good also, both client's have their 
backups configured identically, but only the smaller backup fails.
### SCHEDULES
Schedule {
   Name = Allied-vm1
   Run = Level=Full FullPool=vm1-allied-Pool mon-sun at 01:00
   }

Schedule {
   Name = Allied-ex3
   Run = Level=Full FullPool=ex3-allied-Pool mon-sun at 01:00
   }


### CLIENTS
Client {
   Name = allied-vm1-fd
   Address = REDACTED
   FDPort = 49201
   Catalog = DataVault
   Password = REDACTED  # password for FileDaemon
   File Retention = 1 week  # 30 days
   Job Retention = 1 week# six months
   AutoPrune = yes # Prune expired Jobs/Files
   }

Client {
   Name = allied-ex3-fd
   Address = REDACTED
   FDPort = 49202
   Catalog = DataVault
   Password = REDACTED  # password for FileDaemon
   File Retention = 1 week  # 30 days
   Job Retention = 1 week# six months
   AutoPrune = yes # Prune expired Jobs/Files
   }

### POOLS
Pool {
   Name = vm1-allied-Pool
   Pool Type = Backup
   LabelFormat = vm1-allied-
   Recycle = yes
   Recycle Oldest Volume = yes
   Purge Oldest Volume = yes
   AutoPrune = yes
   Volume Retention = 12 hours
   Maximum Volumes = 240
   Maximum Volume Jobs = 0
   Maximum Volume Bytes = 1G
   }

Pool {
   Name = ex3-allied-Pool
   Pool Type = Backup
   LabelFormat = ex3-allied-
   Recycle = yes
   Recycle Oldest Volume = yes
   Purge Oldest Volume = yes
   AutoPrune = yes
   Volume Retention = 12 hours
   Maximum Volumes = 60
   Maximum Volume Jobs = 0
   Maximum Volume Bytes = 1G
   }

Any ideas what I may have done wrong? The backup for ex3 ran fine until 
a few days ago. Any assistance is appreciated.

DAve
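
A hedged way to watch it happen from bconsole while the job runs (JobId and
pool name taken from this report): if the llist comes back empty while
"status dir" still shows the job running, the record really was pruned
mid-job:

  *status dir
  *llist jobid=15303
  *list volumes pool=ex3-allied-Pool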



-- 
The whole internet thing is sucking the life out of me,
there ain't no pony in there.



Re: [Bacula-users] waiting on max storage jobs

2008-10-14 Thread Dave Poirier
On Tue, Oct 14, 2008 at 12:10 AM, Sebastian Lehmann [EMAIL PROTECTED]
 wrote:

 Hello,

 Look at the storage-section in the bacula-dir.conf. There you have also
 to define Maximum Concurrent Jobs, e.g:

 Storage {
  Name = SL500-1
  Address = dss-bacula
  SD Port = 9103
  Password = passwd
  Device = SL500-1
  Media Type = LTO-3
  Autochanger = yes
  Maximum Concurrent Jobs = 10
 }


 Greetings
 Sebastian



This suggestion was close. I already had it in the bacula-sd.conf but it
turns out that I actually need to add Maximum Concurrent Jobs = 4 to the
Job definition in bacula-dir.conf. A reload seems to help too. ;)
Thanks everyone!
Dave
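
Pulling the thread together: the directive is checked at several layers, so
for four parallel jobs it has to be at least 4 in each of them. A sketch
(names illustrative, the Job-level line being the piece that was missing
here):

# bacula-dir.conf
Storage {
  Name = changer-ait3
  Address = backuphost
  SDPort = 9103
  Password = "xxx"
  Device = changer-ait3
  Media Type = AIT-3
  Autochanger = yes
  Maximum Concurrent Jobs = 4
}

JobDefs {
  Name = "DefaultJob"
  Maximum Concurrent Jobs = 4   # inherited by each Job that uses this JobDefs
}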


[Bacula-users] waiting on max storage jobs

2008-10-13 Thread Dave Poirier
Brand spankin new to Bacula... I like what I see so far.
I googled and looked through the archives and couldn't find anything
helpful. Anyway, I have an autochanger with 4 AIT-3 drives but it seems if I
start multiple jobs it will only run one job on the first drive and any job
after will sit with a status of waiting on max storage jobs.  I guess I
expected the behavior to start any subsequent jobs on the other three
drives. Am I wrong in my thinking? I suspected that perhaps Maximum
Concurrent Jobs would have something to do with it, but that is set to
four... So I'm stumped and looking for some help... Any suggestions?

Bacula version 2.0.3 on Centos 5.2, the RPM is from the EPEL Repository.

bacula-dir.conf

Director {# define myself
  Name = bacula-dir
  DIRport = 9101# where we listen for UA connections
  QueryFile = /etc/bacula/query.sql
  WorkingDirectory = /var/spool/bacula
  PidDirectory = /var/run
  Maximum Concurrent Jobs = 4
  Password =  # Console password
  Messages = Daemon
}

bacula-sd.conf

Storage { # definition of myself
  Name = bacula-sd
  SDPort = 9103  # Director's port
  WorkingDirectory = /var/spool/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
  Heartbeat Interval = 30 sec;
}

Thanks for any help,
Dave


[Bacula-users] bacula centos rpms

2008-01-02 Thread Dave
Hello,
What are the latest Bacula RPMs available for CentOS 5.x? Right now I'm 
running the client only on a few machines, at 2.2.4-1. I'm wondering, first, 
whether there's something later and, second, if so, whether there are any 
new client-side features that would justify an upgrade.
Thanks.
Dave.




[Bacula-users] bacula-dir freebsd port and tls

2007-12-20 Thread Dave
Hello,
I posted this a while back, but i have additional information on it.
I've been having an issue with bacula since upgrading from 2.03 to
2.2.x. I'm running bacula with tls communications for all daemons. This is
on a FreeBSD 6.x machine, all three daemons. I'm able to start the file and
storage daemons; they read the configuration files and keys fine, but the
director did not. I found out via a bug report i attempted to file that the
most likely cause was that the director was not being started as root. I went to
the box and manually started the director with:
bacula-dir -c /usr/local/etc/bacula-dir.conf
and it fired right up. This told me the most likely cause of the issue was
in the bacula-dir startup rc.d file and that the suggestion about starting as
root was correct. I checked /usr/local/etc/rc.d/bacula-dir and noted the
bacula_flags line:
-u bacula -g bacula -v -c /usr/local/etc/bacula-dir.conf
When i start the director using this line i get an error that the
private key file can not be read. This is definitely my problem; manually
starting with only the -c option works fine. Is there a
way to correct this in the port, perhaps with a flag at install time if one
is using tls, or is there a better way?
Thanks.
Dave.
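Archive note: assuming the port runs the director as user and group bacula (per the -u/-g flags above), the usual fix is to make the key readable by that group instead of only by root; something along these lines, with a hypothetical path standing in for whatever the TLS Key directive in bacula-dir.conf points at:

chown root:bacula /usr/local/etc/ssl/bacula.example.key
chmod 640 /usr/local/etc/ssl/bacula.example.key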




Re: [Bacula-users] bacula-dir freebsd port and tls

2007-12-20 Thread Dave
Hello,
Thanks for your replies. I've not changed permissions on these files 
since the 2.03 update, here's what they are:

total 16
-rw-r--r--  1 root  wheel  4357 May 25  2007 bacula.example.crt
-rw-r-----  1 root  wheel   887 May 25  2007 bacula.example.key
-rw-r--r--  1 root  wheel  1659 May 25  2007 ca-cert.pem
-rw-r--r--  1 root  wheel  1720 May 27  2007 master.crt
-rw-r--r--  1 root  wheel  3415 May 27  2007 zeus-fd.pem

I tried starting via the command line using the bacula_flags and the -d option,
and got the same msg.
I checked the private key, which seems to be the cause of this; i remember
setting its permissions because i read that private keys should not
be world readable. The file and storage daemons are using the same flags
and do not have issues accessing it.
If the permissions and ownership are wrong or insecure, let me know
what they should be, but as i said, in 2.03 the same configs gave me no
issue.
Thanks.
Dave. 




Re: [Bacula-users] All the problems with Bacula

2007-11-21 Thread DAve
Janco van der Merwe wrote:
 Shon,
 
 If it took you four months to figure out how Bacula works maybe you
 shouldn't be allowed near any computer!

Everyone is entitled to their opinion, including me. However, continued
belittling of Bacula, Bacula developers, Bacula users, the Bacula
disenchanted, or anyone else serves no purpose except to give this mail
list a bad reputation. (eeww)

If you feel the need to vent, sign up for the PostFix or qmail lists,
there is always someone willing to exchange insults there.

DAve

 
 There are ways and means to do things in Bacula and let me tell you this:
 if you couldn't do it or find a way you are really incompetent and, again,
 shouldn't be allowed to operate a PC.
 
 I have Bacula running at several sites without a glitch and backing up
 over 1 TB of data on one site with a LTO3 Autoloader, and BTW this was
 done when I was still a novice in the Linux World, so if I could do it
 anyone can... well it looks like you're the exception not the
 rule.
 
 On the Windows side of things there are no problems, I've backuped and
 restored several machine using Bacula and the Bart PE and with great
 success.
 
 Basically I can't see that its useable for anything more than backing up
 a single system, and even then better be careful... WHAT, ARE YOU
 OUT OF YOUR FREAKING MIND, YOU NUMB SKULL, here is a tip and I'll try to do
 this in a language that you might understand:
 LEARN... HOW... TO... USE... A... COMPUTER
 
 You know what Bacula Users... good riddance to bad trash and lastly
 he'll 10 to 1 make a [EMAIL PROTECTED] up of the other software as well!

 
 
 
 
 
 On Wed, 2007-11-21 at 08:07 -0500, Shon Stephens wrote:
 Ok. This is a rant and you can remove it from the list if you want to
 later. I just have to vent.

 Bacula is incredibly complex to setup. Its taken 4 months and its
 still not working correctly.

 Things that should be easy that Bacula makes overly complex:

 Labeling tapes
 Assigning tapes to pools
 Reassigning tapes to pools
 Managing disk media

 Things Bacula can't seem to get right:

 Detecting a tape is in the drive and using it
 Even though the correctly labeled tape is in the drive, and has the
 right Volume label, and is marked Append, and is from the correct
 Pool
 Bacula is still waiting for a mount request. Every external program
 recognizes that the tape is in the drive and mounted. Not Bacula

 Catalog entries. I've not had a single backup job where the right
 entries made it into the Catalog

 Windows hosts. Good luck figuring out the esoteric path syntax because
 its different in different chapters of the manual and also different
 depending on which part of the config you are editing

 Basically I can't see that its useable for anything more than backing
 up a single system, and even then better be careful.

 I'm going with Arkeia Network Backup. Might cost money, but at least
 it will work as advertised which is more than can be said for Crapula




-- 
I've been asking Google for a Veteran's Day logo since 2000,
maybe 1999. I was told they finally did a Veteran's Day logo,
but none of the links I was given return anything but a
normal Google logo.

Sad, very sad. Maybe the Chinese Government didn't like it?




Re: [Bacula-users] Managing backup volume space

2007-11-13 Thread DAve
Chris Sarginson wrote:
 Hi Dave,
 
 We have a similar idea - the way we utilise bacula however is just to 
 create a single pool for a client, and then set their quota as the 
 Maximum Volume Bytes * Maximum Volumes.
 
 This works for us simply because the client then has a definitive quota 
 set which we can modify by increasing the Maximum Volumes settings, and 
 having a File Retention of whatever time is required (1 week/month) 
 whatever.  You just need to make sure you have enough space :)
 

Close, I think. I believe the answer is setting XX volumes of 1GB each in a single
pool for each client, then setting the following in the client's Pool
resource.

Maximum Volume Jobs = 0
Maximum Volume Bytes = 1,073,741,824
Volume Retention = 7 days
AutoPrune = yes
Recycle = yes
Recycle Oldest Volume = yes
Purge Oldest Volume = yes

As I understand the manual, when backup time arrives Bacula will use the
oldest volumes first and purge whatever data is necessary to recycle the
oldest volume it finds. This would hold true regardless of retention
periods.

So if the client had 60GB of storage allocated, they could retain up to
60GB of data for up to seven days. But if they needed to back up 60GB in
one night, Bacula would recycle every volume to do so.

Am I correct?

DAve
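Archive note: for illustration only, a Pool resource along the lines DAve describes, sized for the 60GB example (all values illustrative, the name is a placeholder):

Pool {
  Name = SomeClient-Pool              # placeholder
  Pool Type = Backup
  Maximum Volumes = 60                # 60 x 1GB volumes = 60GB quota
  Maximum Volume Bytes = 1073741824   # 1GB per volume
  Maximum Volume Jobs = 0
  Volume Retention = 7 days
  AutoPrune = yes
  Recycle = yes
  Recycle Oldest Volume = yes
  Purge Oldest Volume = yes
}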

 Kind regards
 
 Chris Sarginson
 Technical Support
 UKFast.Net Ltd
 
 (t) 0870 111 8866
 (f) 0870 458 4545
 
 The UK's Best Hosting Provider ISPA Awards 2007, 2006 and 2005
 
 Dedicated Servers - Managed Hosting - Domain Names- http://www.ukfast.net
 
 UKFast.Net Ltd, City Tower, Piccadilly Plaza, Manchester, M1 4BT
 Registered in England. Number 384 5616
 
 DAve wrote:
 Good afternoon,

 We have been using Bacula for a while now and having good success within
 our NOCs. We have a problem still, though not with Bacula itself.

 Currently we configure three pools per client: a Full pool which is used
 on the 1st Sun, a Diff pool which is used on the 2nd, 3rd, 4th, and 5th
 Sun, and an Inc pool which is used on Mon-Sat. We followed the example at
 http://bacula.org/rel-manual/Automated_Disk_Backup.html

 Unfortunately this allows some clients to greatly exceed their allotted
 space. I think it is our fault for not configuring Bacula properly. I
 would like to know if we can create 1GB volumes up to the client's
 allotted space and then configure Bacula to use them to best advantage,
 i.e. keep as much data as possible, but complete the scheduled backup even
 if pruning all previous data is required to fit the current backup
 within the volumes.

 I am still reading through the manual again but I am not seeing a
 possible solution, though I believe it is there.

 Can anyone point me at a solution, a chapter/section to read, or an example?

 Thank you.

 DAve


 


-- 
I've been asking Google for a Veteran's Day logo since 2000,
maybe 1999. I was told they finally did a Veteran's Day logo,
but none of the links I was given return anything but a
normal Google logo.

Sad, very sad. Maybe the Chinese Government didn't like it?




Re: [Bacula-users] bacula and tls

2007-10-18 Thread Dave
Hello,
Thanks for your reply. I did recheck those permissions; they are 644, so it
shouldn't have a problem reading them. These are also the same certs the
storage and file daemons load, so i am confused. If i can provide any
additional information let me know.
Thanks.
Dave.

- Original Message - 
From: Landon Fuller [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED]
Cc: bacula-users@lists.sourceforge.net
Sent: Thursday, October 18, 2007 5:23 PM
Subject: Re: [Bacula-users] bacula and tls





[Bacula-users] DVD Volume Catalog Sizes mismatch

2007-10-12 Thread Dave Green
I'm using Bacula 2.2.3 and have a schedule that writes a full backup to 
DVD on monday night followed by incrementals for the rest of the week.

The full backup runs OK and the next four incrementals run OK, but the 
fifth incremental fails with the following error:

12-Oct 23:10 server-sd: Upfront.2007-10-12_23.10.00 Error: Bacula cannot 
write on DVD Volume Daily1 because: The sizes do not match! 
Volume=100364766 Catalog=98272134
12-Oct 23:10 server-sd: Marking Volume Daily1 in Error in Catalog.
12-Oct 23:10 server-sd: Job Upfront.2007-10-12_23.10.00 waiting. Cannot 
find any
 appendable volumes.

I've been through several iterations of blanking media, deleting and 
recreating the database. When I blank the media and restart a backup 
sequence I get the same result after the fourth incremental.

I've commented out the no-tray option in dvd-handler and am using the 
latest dvd+rw-tools (7.0-7).

dvd-handler /dev/scd0 free returns a freespace value and a no errors 
reported message.

I've tried deleting the media entry in the catalog and recreating it 
using bscan using a dvd that has the full + 4 incremental backups and I 
get the mismatch error on the next incremental.

Frustration is setting in, so I'd appreciate any pointers to where I 
might look for a cause/resolution to the problem.

TIA,

Dave

-- 
Dave Green
Snowline Computer Services
PO Box 103
Ohakune 4660

ph/fax: 06 385 4859
mob: 027 444 3405

Computer Sales, Service  Support for the Waimarino District




[Bacula-users] repost, bacula and tls, (v2.2.50), and pool error msg from fd

2007-10-12 Thread Dave
Hello,
Is anyone using tls with the latest bacula? I upgraded from 2.03 to 
2.2.4 and the director now does not start. Both the file and storage daemons 
do start, so i do not believe it's a tls issue. I've installed the latest
server on both a FreeBSD box via ports and a CentOS 5 box, and i'm getting the
same tls error, unable to load certificate information, on both systems.
When i test the configuration files with -t i am seeing the below error
msgs from the fd, though the daemon starts and otherwise runs normally.
Thanks.
Dave.

#bacula-fd -t -c bacula-fd.conf
Pool   Maxsize  Maxused  Inuse
NoPool  2560  0
NAME1300  0
FNAME   2561  0
MSG 5121  0
EMSG   10241  0

and in /var/db/bacula/log:
12-Oct 19:28 bacula-dir JobId 0: Error: openssl.c:86 Error loading private 
key: ERR=error:0200100D:system library:fopen:Permission denied
12-Oct 19:28 bacula-dir JobId 0: Error: openssl.c:86 Error loading private 
key: ERR=error:20074002:BIO routines:FILE_CTRL:system lib
12-Oct 19:28 bacula-dir JobId 0: Error: openssl.c:86 Error loading private 
key: ERR=error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib
12-Oct 19:28 bacula-dir JobId 0: Fatal error: Failed to initialize TLS 
context for Storage Quantum DLT4000 in /usr/local/etc/bacula-dir.conf.
12-Oct 19:28 bacula-dir ERROR TERMINATION
Please correct configuration file: /usr/local/etc/bacula-dir.conf




Re: [Bacula-users] bacula 2.2.4 openssl context error on FreeBSD

2007-09-27 Thread Dave
Hello,
I just upgraded the installed port from 2.2.4 to 2.2.4_1 and still i am 
getting the same error msg. Any other suggestions?
Thanks.
Dave.

- Original Message - 
From: Landon Fuller [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED]
Cc: bacula-users@lists.sourceforge.net
Sent: Wednesday, September 26, 2007 1:08 AM
Subject: Re: [Bacula-users] bacula 2.2.4 openssl context error on FreeBSD





Re: [Bacula-users] bacula 2.2.4 openssl context error on FreeBSD

2007-09-26 Thread Dave
Hello,
Here is the complete error msg. As i said the config hasn't changed 
between the working 2.03 and this nonworking 2.2.4.
Thanks.
Dave.

25-Sep 20:23 bacula-dir:  Error: openssl.c:86 Error loading certificate 
verification stores: ERR=error:0200100D:system library:fopen:Permission 
denied
25-Sep 20:23 bacula-dir:  Error: openssl.c:86 Error loading certificate 
verification stores: ERR=error:2006D002:BIO routines:BIO_new_file:system lib
25-Sep 20:23 bacula-dir:  Error: openssl.c:86 Error loading certificate 
verification stores: ERR=error:0B084002:x509 certificate 
routines:X509_load_cert_crl_file:system lib
25-Sep 20:23 bacula-dir:  Fatal error: Failed to initialize TLS context for 
Storage CatalogStorage in /usr/local/etc/bacula-dir.conf.
25-Sep 20:23 bacula-dir ERROR TERMINATION
Please correct configuration file: /usr/local/etc/bacula-dir.conf

- Original Message - 
From: Landon Fuller [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED]
Cc: bacula-users@lists.sourceforge.net
Sent: Wednesday, September 26, 2007 1:08 AM
Subject: Re: [Bacula-users] bacula 2.2.4 openssl context error on FreeBSD





Re: [Bacula-users] bacula 2.2.4 openssl context error on FreeBSD

2007-09-26 Thread Dave
Hello,
Thanks for your reply. The public key, the .crt file, has owner root,
group wheel, and permissions of 644. The private key has the same owner and
group and permissions of 640. The ca public key originally had permissions of
640 with the same owner and group; i changed them to 644 and tried a director
restart, and got the same error. I've tested my director configuration
with -t and it comes out with no errors.
Any other suggestions appreciated.
Thanks.
Dave.
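Archive note: the error names a Storage resource (CatalogStorage), so the files being opened are the ones that resource's TLS directives in bacula-dir.conf point at, and the director has to be able to read every one of them with the uid/gid it actually runs under. A sketch of the directives involved, with hypothetical paths:

Storage {
  Name = CatalogStorage
  # ... Address, Password, Device, Media Type as before ...
  TLS Enable = yes
  TLS Require = yes
  TLS CA Certificate File = /path/to/ca-cert.pem
  TLS Certificate = /path/to/bacula.example.crt
  TLS Key = /path/to/bacula.example.key
}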

- Original Message - 
From: Dan Langille [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED]
Cc: bacula-users@lists.sourceforge.net
Sent: Wednesday, September 26, 2007 10:39 AM
Subject: Re: [Bacula-users] bacula 2.2.4 openssl context error on FreeBSD


 On 26 Sep 2007 at 10:35, Dave wrote:

 Hello,
 Here is the complete error msg. As i said the config hasn't changed
 between the working 2.03 and this nonworking 2.2.4.
 Thanks.
 Dave.

 25-Sep 20:23 bacula-dir:  Error: openssl.c:86 Error loading certificate
 verification stores: ERR=error:0200100D:system library:fopen:Permission
 denied

 Perhaps file permissions?  Be sure the certificate files (etc) are
 all readable by the GID/UID that bacula is running as.


 25-Sep 20:23 bacula-dir:  Error: openssl.c:86 Error loading certificate
 verification stores: ERR=error:2006D002:BIO routines:BIO_new_file:system 
 lib
 25-Sep 20:23 bacula-dir:  Error: openssl.c:86 Error loading certificate
 verification stores: ERR=error:0B084002:x509 certificate
 routines:X509_load_cert_crl_file:system lib
 25-Sep 20:23 bacula-dir:  Fatal error: Failed to initialize TLS context 
 for
 Storage CatalogStorage in /usr/local/etc/bacula-dir.conf.
 25-Sep 20:23 bacula-dir ERROR TERMINATION
 Please correct configuration file: /usr/local/etc/bacula-dir.conf

 - Original Message - 
 From: Landon Fuller [EMAIL PROTECTED]
 To: Dave [EMAIL PROTECTED]
 Cc: bacula-users@lists.sourceforge.net
 Sent: Wednesday, September 26, 2007 1:08 AM
 Subject: Re: [Bacula-users] bacula 2.2.4 openssl context error on FreeBSD







 -- 
 Dan Langille - http://www.langille.org/
 Available for hire: http://www.freebsddiary.org/dan_langille.php
 




[Bacula-users] bacula 2.2.4 openssl context error on FreeBSD

2007-09-25 Thread Dave
Hello,
I upgraded my bacula from 2.03 to 2.2.4 and now i am getting an error 
msg: can not initialize tls context for Storage device catalog in my 
bacula-dir.conf file. Other than the upgrade i haven't changed any options 
in the configs. I've used ldd on the bacula* daemons and they all have the 
various tls libs in them. Any ideas?
Thanks.
Dave.




Re: [Bacula-users] bacula rpm for rhel5?

2007-09-12 Thread Dave
Hi,
Not sure if this went out, i never saw it on-list.
Thanks for your reply. I'd like to go with 2.2.3 if possible. And just
for practice purposes i'd like to build from the src.rpm; i'm going to need
to install on multiple machines, some as storage daemons, others just file
daemons.
Thanks.
Dave.

- Original Message - 
From: Felix Schwarz [EMAIL PROTECTED]
To: bacula-users@lists.sourceforge.net
Sent: Monday, September 10, 2007 4:35 PM
Subject: Re: [Bacula-users] bacula rpm for rhel5?


 Dave schrieb:
 I've downloaded the bacula src.rpm and installed it on a Centos 5
 machine. I want to build binaries but am only seeing build options for
 rhel4, i know there's a way around this i just can't remember what it is.
 any help appreciated.

 The fastest method getting a CentOS 5 rpm is using Fedora EPEL (if you 
 don't
 mind using 2.0.x). Just enable Fedora EPEL and do yum install 
 bacula-client.

 fs






Re: [Bacula-users] bacula rpm for rhel5?

2007-09-11 Thread Dave
Hi,
Thanks for your reply. I'd rather use 2.2.3 Bacula.
Thanks.
Dave.

- Original Message - 
From: Felix Schwarz [EMAIL PROTECTED]
To: bacula-users@lists.sourceforge.net
Sent: Monday, September 10, 2007 4:35 PM
Subject: Re: [Bacula-users] bacula rpm for rhel5?


 Dave schrieb:
 I've downloaded the bacula src.rpm and installed it on a Centos 5
 machine. I want to build binaries but am only seeing build options for
 rhel4, i know there's a way around this i just can't remember what it is.
 any help appreciated.

 The fastest method getting a CentOS 5 rpm is using Fedora EPEL (if you 
 don't
 mind using 2.0.x). Just enable Fedora EPEL and do yum install 
 bacula-client.

 fs






[Bacula-users] bacula rpm for rhel5?

2007-09-10 Thread Dave
Hello,
I've downloaded the bacula src.rpm and installed it on a Centos 5
machine. I want to build binaries but am only seeing build options for
rhel4; i know there's a way around this, i just can't remember what it is.
Any help appreciated.
Thanks.
Dave.




Re: [Bacula-users] Technical aspects of the restore bug

2007-09-10 Thread DAve
Kern Sibbald wrote:
This document contains the technical details of Bug #935.
 
 Bacula bug #935 reports that during a restore, a large number of files are 
 missing and thus not restored.  This is really quite surprising because we 
 have a fairly extensive regression test suite that explicitly tests for this 
 kind of problem many times.
 
 Despite our testing, there is indeed a bug in Bacula that has the following 
 characteristics:
 
 1. It happens only when multiple simultaneous Jobs are run (regardless of 
 whether or not data spooling is enabled), and happens only when the 
 Storage daemon is changing from one Volume to another -- i.e. the
 backups span multiple volumes.
 
 2. It has only been observed on disk based backup, but not on tape. 
 
 3. Under the right circumstances (timing), it could and probably does happen 
 on tape backups.
 
 4. It seems to be timing dependent, and requires multiple clients to 
 reproduce, although under the right circumstances, it should be reproducible
 with a single client doing multiple simultaneous backups.
 
 5. Analysis indicates that it happens most often when the clients are slow 
 (e.g. doing Incremental backups).
 
 6. It has been verified to exist in versions 2.0.x and 2.2.x.
 
 7. It should also be in version 1.38, but could not be reproduced in testing, 
 perhaps due to timing considerations or the fact that the test FD daemons 
 were version 2.2.2.
 
 8. The data is correctly stored on the Volume, but incorrect index (JobMedia) 
 records are stored in the database.  (the JobMedia record generated during 
 the Volume change contains the index of the new Volume rather than the 
 previous Volume).  This will be described in more detail below.
 
 9. You can prevent the problem from occurring by either turning off multiple 
 simultaneous Jobs or by ensuring that while running multiple simultaneous 
 Jobs that those Jobs do not span Volumes.  E.g. you could manually mark 
 Volumes as full when they are sufficiently large.
 
 10. If you are not running multiple simultaneous Jobs, you will not be 
 affected by this bug.
 
 11. If you are running multiple simultaneous Jobs to tapes, I believe there is
 a reasonable probability that this problem could show up when Jobs are split
 across tapes.
 
 12. If you are running multiple simultaneous Jobs to disks, I believe there is
 a high probability that this problem will show up when Jobs are split across
 disk Volumes.
 
 ===
 
 The problem comes from the fact that when the end of a Volume is reached,
 the SD must generate a JobMedia (index) record for each of the Jobs that is
 currently running. Since each job is in a separate thread, the thread that
 does the Volume switch marks all the other threads (Jobs) with a flag
 that tell them to update the catalog index (JobMedia).  Sometime later,
 when that thread attempts to do another write to the volume, it will
 create a JobMedia record.  
 

If I read everything correctly, I believe we would be immune to this bug 
at this time. While we certainly use concurrent jobs, each job is 
written to a recycled volume each night. We have no jobs that span a 
volume at any time.

Would that be a correct analysis?

DAve
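Archive note: the manual workaround in point 9 above can be done from bconsole; something like the following (volume name hypothetical), or interactively via the update command's Volume parameters menu:

*update volume=Vol-0123 volstatus=Full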
-- 
Three years now I've asked Google why they don't have a
logo change for Memorial Day. Why do they choose to do logos
for other non-international holidays, but nothing for
Veterans?

Maybe they forgot who made that choice possible.



Re: [Bacula-users] [Bacula-devel] [Bacula-beta] Bacula BETA 2.1.26 released toSource Forge

2007-07-17 Thread DAve
Frank Sweetser wrote:
 Bill Moran wrote:
 
 So, I think it's a good plan from every angle.  Furthermore, I think that
 anyone who doesn't think it's a good plan either hasn't reviewed it
 thoroughly, or has some strange axe to grind.
 
 There actually is one potentially negative downside I can think of.  Right
 now, it's trivial to evaluate the software - download and play.  One of the
 best bits about free software is that there is no risk at all in trialling
 software beyond the time spent.  By adding in this barrier, there's more
 hassle involved getting the software and getting a functional first impression
 (at least for platforms where binaries aren't readily available or compilable
 elsewhere).

It is a marvel to me how someone can give away a major part of their 
life, their time, their talent, for free. Yet, when they ask for the 
littlest bit of help or appreciation they are berated. Companies make 
money on OSS, they MAKE MONEY ON OSS!.

I have an ongoing personal struggle which is currently leading me to
leave the internet industry. I work hard to try and get every company I
have been employed with to donate, not a piddly twenty bucks, more like
five thousand. The use of Apache (free) has saved the company I
currently work for over twenty five thousand dollars a year. The most
common reply I get is we can't afford it; my response is you have a
bad business model then, because you based your company's survival on an
unknown resource that you cannot afford to replace.

This weekend Bacula saved my bacon; if we expand its use it will become
a service we can charge clients for. The company I work for will make
money from the use of Bacula, as I am certain others on this list do
already. In my opinion a donation of 25% of the yearly cost of a
comparable commercial product is not unreasonable, it is a bargain. I
also know it won't happen.

This all reminds me of the story of the father explaining welfare to his 
son. He tells his son to go to the park every day at noon and feed the 
squirrels all they can eat. Go every day for a year without fail, rain 
or shine. Then go to the park and give them nothing, see how long you 
sit before they bite you.

I apologise for the rant, it was a long weekend.

-- 
Three years now I've asked Google why they don't have a
logo change for Memorial Day. Why do they choose to do logos
for other non-international holidays, but nothing for
Veterans?

Maybe they forgot who made that choice possible.



[Bacula-users] Restoring a crashed director server

2007-07-15 Thread DAve
Well I got a surprise this weekend (6am Saturday) when a raid card failed
and we corrupted a mirror on our director server.

I reinstalled the OS, went to my storage server and used bextract to
pull out the needed files for the director server. These files include
the .bsr files and my catalog backups. Of course I got all my bacula
conf files, contents of /etc and so on.

So I install bacula, install mysql, start both up, create new bacula 
tables, login to bconsole, everything is there, wonderful.

I am now trying to get my catalog restored without success. I had 
previously kept SQL dumps but had abandoned them long ago when all other 
bacula restores had worked so well. I checked the chapters on recovery, 
specifically here,

  http://www.bacula.org/dev-manual/Restore_Command.html#database_restore

Seems easy enough, and is exactly what I want to do, however I can't 
make heads or tails of this line, You do so by entering the bf run 
command in the console and selecting the restore job. What is the bf 
run command because my console knows nothing about that?

I have Cat-0001 and BackupCatalog.bsr on the server, just not sure how 
to use them at this point. Am I reading something wrong?

I am on FreeBSD 5.4, Bacula 2.0.3, mysql 5.0.2

Thanks,

DAve
-- 
Three years now I've asked Google why they don't have a
logo change for Memorial Day. Why do they choose to do logos
for other non-international holidays, but nothing for
Veterans?

Maybe they forgot who made that choice possible.



Re: [Bacula-users] Restoring a crashed director server

2007-07-15 Thread DAve
DAve wrote:
 Well I got surprise this weekend (6am Saturday) when a raid card failed 
 and we corrupted a mirror on our director server.
 
 I reinstalled the OS, went to my storage server and used bextract to 
 pull out the needed files for the director server. This files include 
 the .bsr files, and my catalog backups. Of course I got all my bacula 
 conf files, contents of /etc and so on.
 
 So I install bacula, install mysql, start both up, create new bacula 
 tables, login to bconsole, everything is there, wonderful.
 
 I am now trying to get my catalog restored without success. I had 
 previously kept SQL dumps but had abandoned them long ago when all other 
 bacula restores had worked so well. I checked the chapters on recovery, 
 specifically here,
 
   http://www.bacula.org/dev-manual/Restore_Command.html#database_restore
 
 Seems easy enough, and is exactly what I want to do, however I can't 
 make heads or tails of this line, You do so by entering the bf run 
 command in the console and selecting the restore job. What is the bf 
 run command because my console knows nothing about that?
 
 I have Cat-0001 and BackupCatalog.bsr on the server, just not sure how 
 to use them at this point. Am I reading something wrong?

Ahh, just entered the run command, used the catalog backup jobid from my 
last report, changing the job params using mod. Getting ready to run the 
catalog restore job now.

I think I got befuddled from lack of sleep

DAve
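Archive note: for anyone who lands here with a dead director and only the volumes, the other route is to pull the SQL dump off the volume with bextract and load it by hand; roughly, using the bootstrap and volume names from this thread but with the device and paths left as placeholders:

bextract -b BackupCatalog.bsr -V Cat-0001 <archive-device-or-directory> /tmp/bacula-restore
mysql bacula < /tmp/bacula-restore/<working-dir>/bacula.sql

(make_catalog_backup normally leaves the dump as bacula.sql in the daemon's working directory, so that is the path to look for under the restore target.)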

 
 I am on FreeBSD 5.4, Bacula 2.0.3, mysql 5.0.2
 
 Thanks,
 
 DAve


-- 
Three years now I've asked Google why they don't have a
logo change for Memorial Day. Why do they choose to do logos
for other non-international holidays, but nothing for
Veterans?

Maybe they forgot who made that choice possible.



Re: [Bacula-users] Restoring a crashed director server

2007-07-15 Thread DAve
Frank Sweetser wrote:
 DAve wrote:
   http://www.bacula.org/dev-manual/Restore_Command.html#database_restore

 Seems easy enough, and is exactly what I want to do, however I can't 
 make heads or tails of this line, You do so by entering the bf run 
 command in the console and selecting the restore job. What is the bf 
 run command because my console knows nothing about that?
 
 Looks like that's actually a typo in the manual source code - the 'bf' part is
 supposed to be a latex command, not something that shows up in the output.
 Just use 'run' instead of 'bf run' and give it a shot.
 

I suspected as much, doing a new search I noticed lots of instances 
where commands were prefixed with bf and \bf. Tried just run as 
normal and I am working with that now.

Thanks,

DAve

-- 
Three years now I've asked Google why they don't have a
logo change for Memorial Day. Why do they choose to do logos
for other non-international holidays, but nothing for
Veterans?

Maybe they forgot who made that choice possible.



Re: [Bacula-users] Restoring a crashed director server

2007-07-15 Thread DAve
DAve wrote:
 Frank Sweetser wrote:
 DAve wrote:
   http://www.bacula.org/dev-manual/Restore_Command.html#database_restore

 Seems easy enough, and is exactly what I want to do, however I can't 
 make heads or tails of this line, You do so by entering the bf run 
 command in the console and selecting the restore job. What is the bf 
 run command because my console knows nothing about that?
 Looks like that's actually a typo in the manual source code - the 'bf' part 
 is
 supposed to be a latex command, not something that shows up in the output.
 Just use 'run' instead of 'bf run' and give it a shot.

 
 I suspected as much, doing a new search I noticed lots of instances 
 where commands were prefixed with bf and \bf. Tried just run as 
 normal and I am working with that now.
 
 Thanks,
 
 DAve
 

And I am up and running, Bacula has saved itself and quite easily as 
well. Thank you Kern, thank you everyone who works on Bacula.

DAve


-- 
Three years now I've asked Google why they don't have a
logo change for Memorial Day. Why do they choose to do logos
for other non-international holidays, but nothing for
Veterans?

Maybe they forgot who made that choice possible.



Re: [Bacula-users] bacula can not write to catalog volumes

2007-07-05 Thread Dave
Hello,
Yes, there are ten catalog volume files. The oldest one, according to the
results, volume CAT-0008, was pruned, so i don't get why bacula is unable to find a
volume.
Thanks.
Dave.
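Archive note: the quickest way to see what Bacula actually thinks of those ten volumes is from bconsole, e.g.:

*list volumes pool=Catalog
*llist volumes pool=Catalog

llist shows the full media records, including VolStatus, retention and LastWritten; a volume is only reusable once it is actually marked Purged or Recycle, not merely because a prune was attempted on it.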

- Original Message - 
From: Drew Bentley [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED]
 Sent: Thursday, July 05, 2007 9:54 AM
Subject: Re: [Bacula-users] bacula can not write to catalog volumes


 On 7/4/07, Dave [EMAIL PROTECTED] wrote:
 Hello,
 I'm running bacula 2.03 on FreeBSD. All is working well, except i'm
 getting a periodic error about not being able to find any appendable
 volumes. This time it occurred with my catalog backup, i've included the 
 job
 and pool definition below. My issue with this is before the job starts 
 one
 volume, according to bacula, it's the oldest is pruned, then i get the 
 error
 can not find any appendable volumes. My understanding is bacula just 
 pruned
 the oldest volume so it should have that free for writing?
 Thanks.
 Dave.

 05-Jul 01:02 zeus-dir: Pruning oldest volume CAT-0008
 05-Jul 01:02 zeus-sd: Job BackupCatalog.2007-07-04_23.10.00 waiting. 
 Cannot
 find any appendable volumes.
 Please use the label  command to create a new Volume for:
 Storage:  CatalogStorage (/backup/bacula/catalog)
 Media type:   File-Catalog
 Pool: Catalog

 # Backup the catalog database (after the nightly save)
 Job {
   Name = BackupCatalog
   Type = Backup
   Level = Full
   FileSet=Catalog
   Schedule = WeeklyCycleAfterBackup
   Client = zeus-fd
   Storage = CatalogStorage
   Messages = Standard
   Pool = Catalog
   # This creates an ASCII copy of the catalog
   RunBeforeJob = /usr/local/share/bacula/make_catalog_backup bacula 
 bacula
   # This deletes the copy of the catalog
   RunAfterJob  = /usr/local/share/bacula/delete_catalog_backup
   Write Bootstrap = /backup/bacula/catalog/BackupCatalog.bsr
   Priority = 100   # run after main backup
 }

 Pool {
   Name = Catalog
   Pool Type = Backup
   Recycle = yes   # Bacula can automatically recycle
 Volumes
   Recycle Oldest Volume = yes
   Maximum Volume Jobs = 1
   LabelFormat = CAT-
   AutoPrune = yes # Prune expired volumes
   Volume Retention = 8 days
   Maximum Volumes = 10
 }



 You have 10 max volumes configured, are you using 10 currently? You
 may want to reconfigure the max amount of volumes and or your
 retention periods. This has been discussed many times in the past week
 or so on this list, perhaps a quick search thru the archives might
 give you more clues.

 -Drew 




[Bacula-users] bacula can not write to catalog volumes

2007-07-04 Thread Dave
Hello,
I'm running bacula 2.03 on FreeBSD. All is working well, except i'm 
getting a periodic error about not being able to find any appendable 
volumes. This time it occurred with my catalog backup, i've included the job 
and pool definition below. My issue with this is that before the job starts, one
volume (according to bacula, the oldest) is pruned, then i get the error
can not find any appendable volumes. My understanding is that bacula just pruned
the oldest volume, so it should have that free for writing?
Thanks.
Dave.

05-Jul 01:02 zeus-dir: Pruning oldest volume CAT-0008
05-Jul 01:02 zeus-sd: Job BackupCatalog.2007-07-04_23.10.00 waiting. Cannot 
find any appendable volumes.
Please use the label  command to create a new Volume for:
Storage:  CatalogStorage (/backup/bacula/catalog)
Media type:   File-Catalog
Pool: Catalog

# Backup the catalog database (after the nightly save)
Job {
  Name = BackupCatalog
  Type = Backup
  Level = Full
  FileSet=Catalog
  Schedule = WeeklyCycleAfterBackup
  Client = zeus-fd
  Storage = CatalogStorage
  Messages = Standard
  Pool = Catalog
  # This creates an ASCII copy of the catalog
  RunBeforeJob = /usr/local/share/bacula/make_catalog_backup bacula bacula
  # This deletes the copy of the catalog
  RunAfterJob  = /usr/local/share/bacula/delete_catalog_backup
  Write Bootstrap = /backup/bacula/catalog/BackupCatalog.bsr
  Priority = 100   # run after main backup
}

Pool {
  Name = Catalog
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle 
Volumes
  Recycle Oldest Volume = yes
  Maximum Volume Jobs = 1
  LabelFormat = CAT-
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 8 days
  Maximum Volumes = 10
}




[Bacula-users] requeueing stuck bacula jobs

2007-06-27 Thread Dave
Hello,
I've got a bacula box that is stuck. The disk that was taking the bacula
jobs was filled due to an unforeseen circumstance with another program. I've
since cleaned it up, however bacula still appears stuck.
noticed was that the backup catalog job was saying it couldn't find any 
appendable volumes. I checked it and it looked like it tried to prune and 
reuse catalog02 but couldn't reuse the file because of the diskspace issue. 
I used bconsole to label a new catalog02 but i keep getting the same error. 
I then did a status jobs and found out the 7 other machines from last 
evening's set of backups are also stuck. I could cancel, but i'd rather fix 
this for consistency.
Thanks.
Dave.
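Archive note: with the disk freed up, the usual sequence from bconsole is to see what the storage daemon is blocked on and then kick the device; the storage name and job ids below are placeholders:

*status dir
*status storage=<your-storage>
*mount storage=<your-storage>
*cancel jobid=<nnn>

mount tells the SD to retry the (now writable) device; cancel is the last resort for jobs you give up on.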




[Bacula-users] specifying wildcards during restore

2007-06-25 Thread Dave
Hello,
I'm using bacula 2.03 and trying to restore a file from tape. I have two
problems. First, i tried to list the last 20 jobs run, hoping that what i
want is in there, but the display scrolled; is there a single-screen-only
option like more? My second issue: i do not remember the exact filename. I
remember where it was saved from, i.e. the client and which job (though not the
jobid), but i do not remember the filename. I remember it's an iso file,
that's it.
Thanks.
Dave.




Re: [Bacula-users] specifying wildcards during restore

2007-06-25 Thread Dave
Hello,
Thanks, that did it, but i still am unable to find the file i want. Is
there a way i can use wildcards to find this file?
Thanks.
Dave.
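Archive note: once inside the restore command's file-selection tree you can hunt with wildcards; a rough example (client name hypothetical):

*restore client=someclient-fd
  (pick one of the menu options, e.g. the most recent backup for the client)
$ find *.iso
$ mark <path printed by find>
$ done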

- Original Message - 
From: Drew Bentley [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED]
Sent: Monday, June 25, 2007 9:21 PM
Subject: Re: [Bacula-users] specifying wildcards during restore


 On 6/25/07, Dave [EMAIL PROTECTED] wrote:
 Hello,
 I'm using bacula 2.03 and trying to restore a file from tape. I have 
 two
 problems, first i tried to list the last 20 jobs run, hoping that what i
 want is in there, but the display scrolled, is there a single screen only
 option like more? My second issue, i do not remember the exact filename, 
 i
 remember where it was saved from i.e. client, which job though not the
 jobid, but i do not remember the filename. I remember it's an iso file
 that's it.
 Thanks.
 Dave.


 If this is on a Linux/Unix system, you should be able to scroll up to
 view the contents that scrolled past. If you don't have a mouse and
 your on the console, simply press ctrl + page up or page down to
 scroll.

 -drew 




[Bacula-users] bacula freebsd port log rotation

2007-06-24 Thread Dave
Hello,
Anyone using the freebsd port of bacula with log rotation? I'd like to 
get that set up.
Thanks.
Dave.
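Archive note: on FreeBSD the stock rotation tool is newsyslog; assuming the daemons log to /var/db/bacula/log (as in the messages elsewhere in this archive), an untested entry for /etc/newsyslog.conf might look like:

/var/db/bacula/log    644  7    1000 *     JCN

(rotate at roughly 1000 KB, keep 7 bzip2'd copies, create the file if missing, signal nothing). Because the daemons keep the log open via their Messages resource, they may keep writing to the rotated file until they are restarted or reloaded, so check that after the first rotation.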




[Bacula-users] bacula linux rescue CD procedure (FreeBSD)

2007-06-22 Thread Dave
Hello,
I've never used a Linux bacula rescue CD; at this point i don't need
to do any bare metal recoveries of the linux box i have. I do however have a
need for FreeBSD bare metal recovery. I've got a few questions.
First of all, has anyone done this or got far with it? What i was 
planning was to use the section of the freebsd handbook as a guide, make a 
custom boot media, compile bacula-fd statically along with my system's 
partition information, boot and see if i can do it. I realize this is rough, 
but my goal is to get the FreeBSD bare-metal recovery as comprehensive as 
the Linux procedure. To that end i'd like to know how linux rescue goes, i 
imagine you boot, then run programs like fdisk and format and recreate your 
system, then use bacula-fd to load file and boot information? If anyone who 
has done this could give me an overview conceptually i'd appreciate it.
Thanks.
Dave.




Re: [Bacula-users] bacula linux rescue CD procedure (FreeBSD)

2007-06-22 Thread Dave
Hi Frank,
Thanks for your reply. I'll probably start manually, grind out a 
procedure by hand then look at the Linux scripts and see what if any are the 
freebsd equivalents and try to bring them over. My goal initially was bare 
metal recovery for my own use, but the more i think about this the more i 
feel that this would benefit all bacula freebsd users who might want to do 
bare metal recoveries. My problem now becomes how to genericize everything, 
how is the linux rescue cd-rom created? That's where i am going to take 
this.
Thanks.
Dave.

- Original Message - 
From: Frank Sweetser [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED]
Cc: bacula-users@lists.sourceforge.net
Sent: Friday, June 22, 2007 2:23 PM
Subject: Re: [Bacula-users] bacula linux rescue CD procedure (FreeBSD)


 Dave wrote:
 Hello,
 I've never used a Linux bacula rescue CD at this point i don't 
 need
 to do any bare metal recoveries of the linux box i have. I do however 
 have a
 need for FreeBSD bare metal recovery. I've got a few questions.
 First of all, has anyone done this or got far with it? What i was
 planning was to use the section of the freebsd handbook as a guide, make 
 a
 custom boot media, compile bacula-fd statically along with my system's
 partition information, boot and see if i can do it. I realize this is 
 rough,
 but my goal is to get the FreeBSD bare-metal recovery as comprehensive as
 the Linux procedure. To that end i'd like to know how linux rescue goes, 
 i
 imagine you boot, then run programs like fdisk and format and recreate 
 your
 system, then use bacula-fd to load file and boot information? If anyone 
 who
 has done this could give me an overview conceptually i'd appreciate it.

 That's roughly it, yes.  The biggest thing to note is that at least for 
 linux,
 there are a handful of helper scripts in
 /etc/bacula/rescue/linux/cdrom/bacula.  The diskinfo script, for example, will
 create a series of secondary scripts that, when run, will recreate the same
 partition and LVM layout.  There are also scripts to set up networking and help
 with installing the bootloader.  None of these scripts are mandatory (I've done
 manual bare metal restores without them), but they can be extremely helpful.

 -- 
 Frank Sweetser fs at wpi.edu  |  For every problem, there is a solution 
 that
 WPI Senior Network Engineer   |  is simple, elegant, and wrong. - HL 
 Mencken
GPG fingerprint = 6174 1257 129E 0D21 D8D4  E8A3 8E39 29E3 E2E8 8CEC 




Re: [Bacula-users] Cannot find any appendable volume problem

2007-06-20 Thread Dave
Hi,
Yah, just went through this one myself and had some help with it. Your
retention periods are too short; you're running out of volumes before the first
one is set to be recycled. The fix is either more volumes or shorter
retention periods. Your setup is remarkably like mine, so if you get stuck i
should be able to help you out.
Hth
Dave.
Oh, forgot: once you update your config files, load up the bacula console and
do an update pool and an update volume command for each client (this is going
to take a while depending on how many pools you have); that should fix
it.
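Archive note: the update commands Dave mentions are bconsole commands; roughly, for each affected pool (pool name taken from the config quoted below):

*update pool=Inc-Pool-RS
*update volume
  (then pick the option that applies the pool defaults to all volumes in that pool)

The first pushes the edited resource settings from bacula-dir.conf into the catalog; the second applies them to the already-created volumes.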

- Original Message - 
From: Mihai Tanasescu [EMAIL PROTECTED]
To: bacula-users@lists.sourceforge.net
Sent: Thursday, June 21, 2007 1:40 AM
Subject: [Bacula-users] Cannot find any appendable volume problem


 Hello,


 I've just recently configured bacula to backup some machines and after
 4-5 days of doing backups I started getting:

 Cannot find any appendable volumes, although I have the LabelMedia=yes 
 specified in the storage daemon


 My setup looks like this:

 For each machine I have a different device defined in my bacula-sd.conf 
 (in order to make each backup happen in a different directory..and also 
 MediaType is different for each definition in order to avoid any locking 
 problems).

 Each incremental, differential and full backup has its own Pool definition 
 for each machine and its own settings for Retention.

 Example:

 Pool {
  Name = Diff-Pool-RS
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 40 days
 #1 job per volume
  Maximum Volume Jobs = 1
  LabelFormat = ${Job}-${Day:p/2/0/r}-${Month:p/2/0/r}-${Year}-Diff-
  Maximum Volumes = 6
 }

 Pool {
  Name = Inc-Pool-RS
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 6 days
 #6 jobs per volume
  Maximum Volume Jobs = 6
  LabelFormat = ${Job}-${Day:p/2/0/r}-${Month:p/2/0/r}-${Year}-Inc-
  Maximum Volumes = 1
 }

 Pool {
  Name = Full-Pool-RS
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 1 months
 #one job per volume
  Maximum Volume Jobs = 1
  LabelFormat = ${Job}-${Day:p/2/0/r}-${Month:p/2/0/r}-${Year}-Full-
  Maximum Volumes = 1

 and this is the same for the other servers (with a different ending, 
 instead of RS)


 Schedule {
  Name = WeeklyCycle
  Run = Full 1st sun at 04:00
  Run = Differential 2nd-5th sun at 03:00
  Run = Incremental mon-sat at 05:00
 }

 and each Job for each machine uses the three different pools (stated 
 above) and the default one (it gave an error without defining it)

 Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle 
 Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 365 days # one year

 My design was supposed to make:
 A full backup on the first week of the month on Sunday.
 Incremental backups for a week.
 Differential backups each Sunday.


 Does anyone have any idea why I keep receiving that error above not being 
 able to find any appendable volumes ?








[Bacula-users] network error job aborted

2007-06-18 Thread Dave
Hello,
I've got a bacula win32 2.03 client that has been backing up fine. This 
morning it didn't and gave me a fatal network error. I'm open to 
suggestions, i've confirmed that all the daemons are still running. Here's 
an excerpt from my bacula log.

18-Jun 04:38 titan-fd: Generate VSS snapshots. Driver=VSS WinXP, 
Drive(s)=C
18-Jun 04:40 zeus-dir: backup_titan.2007-06-18_04.35.00 Fatal error: Network 
error with FD during Backup: ERR=Connection reset by peer
18-Jun 04:40 zeus-sd: backup_titan.2007-06-18_04.35.00 Fatal error: 
append.c:259 Network error on data channel. ERR=Connection reset by peer
18-Jun 04:40 zeus-sd: Job write elapsed time = 00:05:48, Transfer rate = 
615.2 K bytes/second
18-Jun 04:40 zeus-dir: backup_titan.2007-06-18_04.35.00 Fatal error: No Job 
status returned from FD.

Thanks.
Dave.




Re: [Bacula-users] network error job aborted

2007-06-18 Thread Dave
Hello,
I really hate it when I try to respond to questions and Bacula then refuses to reproduce the error.
    Brian, a status on the client showed that it never went down. I restarted it manually with the flags you indicated and nothing more verbose came through. Like Frank, I'm also not convinced it's networking; this box and the server do have firewalls, but they pass Bacula traffic without issues.
    All of the daemons involved are 2.0.3; I'm not using any 2.1.x Bacula on this client.
    The Windows event viewer didn't tell me anything. I saw two errors around the time in question, but I had a user on the box, unbeknownst to me, playing an online game, and the error came from one of the DLLs it used. My first speculation is that that error corrupted the VSS snapshot; is this possible, and how could I confirm it?
    The other thing it might be, although I doubt this as well, is a conflict between running jobs. The job in question started at 4:35 and died at 5:47. I had another job, backing up to a totally separate disk volume, kick off two minutes before the failure, at 5:45. That client was a Unix box, not a Windows one. It worked fine, but I don't like not knowing the why; I suspect I will see more of this from this client.
Thanks.
Dave.

- Original Message - 
From: Brian A. Seklecki [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED]
Cc: bacula-users@lists.sourceforge.net
Sent: Monday, June 18, 2007 8:06 AM
Subject: Re: [Bacula-users] network error job aborted



 what does:

 status client=backup_titan show?

 Did you restart the bacula-fd service on the client?

 Check the windows event viewer?

 Re-start the bacula-fd.exe process from command line manually with -d99
 -f -v ?

 ~BAS

 On Mon, 2007-06-18 at 07:41 -0400, Dave wrote:
 zeus-dir
 -- 
 Brian A. Seklecki [EMAIL PROTECTED]
 Collaborative Fusion, Inc.




 




Re: [Bacula-users] Questions for Bacula Windows users ...

2007-06-15 Thread Dave
Hi Kern,
Personal preference: I would prefer c:\bacula for the reasons you indicated, the long path names and spaces, but I have also seen policies that prohibit programs from running outside Program Files. What I would suggest is a compromise: make the installation location customizable to either location with a pair of radio buttons, so the end user selects whichever installation location is preferred based on personal preference and environmental considerations.
HTH
Dave.

- Original Message - 
From: George R.Kasica [EMAIL PROTECTED]
To: bacula-users@lists.sourceforge.net
Sent: Friday, June 15, 2007 11:22 AM
Subject: Re: [Bacula-users] Questions for Bacula Windows users ...


 On Fri, 15 Jun 2007 00:56:32 -0700, you wrote:

Kern Sibbald wrote:
 Hello,

 If you are a user of Bacula on Windows, I would be interested in your
 responses to the following:

 1. I am considering to change the default installation location for 
 Bacula on
 Windows to be the same as it was previously -- that is the \bacula 
 directory
 on the main disk.  The current installation places files in the 
 standard
 Windows locations, but IMO, it is very inconvenient because they are
 extremely long names with spaces that are very hard to remember, and are
 sprayed all over the disk.

I actually prefer having them in the standard locations. On our Windows
servers we have data execution prevention policies which stop programs
from executing if they aren't under C:\Program Files.
 I'd agree with this statement for many clients I work with.

 4. Can any of you run the Windows regression scripts?  This requires a 
 minimal
 knowledge of cmd scripts but no programming experience.
I would be willing to run regression scripts. We have a lot of Windows
servers which we run Bacula on (the fd only). And I would like to help
make sure Bacula continues to work on our systems.
 Same situation here, I'd be happy to test out that portion. The SD and
 DIR run off linux here. Servers are 2003 with all up to date patches,
 and workstations are XP-SP2 again fully updated normally.
 -- 
 ===[George R. Kasica]===+1 262 677 0766
 President   +1 206 374 6482 FAX
 Netwrx Consulting Inc.  Jackson, WI USA
 http://www.netwrx1.com
 [EMAIL PROTECTED]
 ICQ #12862186





[Bacula-users] bacula backing up to tape

2007-06-12 Thread Dave
Hello,
I've got a job that I want to back up to tape. It contains several large files that I don't want on disk storage, but I foresee a need to restore them periodically, say over the next year. I'm wondering about the best configuration; this is what I have.

Job {
  Name = backup_to_tape
  Type = Backup
 Level = Incremental
  Messages = Standard
  Client = client1-fd
Storage = Tape Drive
  FileSet = isos
  Pool = Default
  Write Bootstrap = /backup/bacula/tape.bsr
  Priority = 10
}

Client {
  Name = client1-fd
  Address = 111.222.333.444
  FDPort = 9102
  Catalog = MyCatalog
  Password = xxx
  File Retention = 45 days
  Job Retention = 6 months
  AutoPrune = yes   # Prune expired Jobs/Files
}

Pool {
 Name = Default
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle 
Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 365 days # one year
}

I've got the client resource already set up for this box, and its retention periods are working fine for another job that it backs up. But those job and file retention periods won't work for this job, and I don't want to have to continuously adjust retention periods.
Thanks.
Dave. 
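
One hedged way to keep a longer restore window for just this job without touching the existing client resource: retention lives on the Client (catalog) and the Pool (volumes), so a second Client resource pointing at the same file daemon can carry its own, longer retention. Names here are illustrative, not from the original config:

 Client {
   Name = client1-tape-fd          # hypothetical second resource for the same machine
   Address = 111.222.333.444
   FDPort = 9102
   Catalog = MyCatalog
   Password = xxx                  # same FD password as client1-fd
   File Retention = 12 months
   Job Retention = 12 months
   AutoPrune = yes
 }

and then Client = client1-tape-fd in the backup_to_tape Job, with the tape pool's Volume Retention = 365 days covering the volume side.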




[Bacula-users] bug 807, fix?

2007-06-05 Thread Dave
Hello,
I was wondering if there was a fix for bug 807? I am encountering it, as its original reporters are, and I would post directly to the bugs database, but I don't think a message like "I'm seeing this bug's behavior as well" would be productive. I have nothing new to add, and I heard there might be a fix?
Thanks.
Dave.




Re: [Bacula-users] bacula database comparison and pitfallsbacula database comparison

2007-05-30 Thread Dave
Hi,
Thanks for your reply. For me (and I don't know if this is general), MySQL 5.0 has a new timeout feature, and when it kicks in while Bacula is running I end up with table corruption.
Thanks.
Dave.

- Original Message - 
From: Ryan Novosielski [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED]
Cc: bacula-users@lists.sourceforge.net
Sent: Wednesday, May 30, 2007 12:08 PM
Subject: Re: [Bacula-users] bacula database comparison and pitfallsbacula 
database comparison


 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 All I've got is: don't use SQLite, particularly v3.0, for production.
 There is tuning to make SQLite v3.0 quicker than the default (some
 feature added between v2.0 and v3.0 was turned on that slows things down),
 but it's still not really a database designed for heavy hits.

 I run MySQL 4.1 I think. Never had a problem yet -- I'm not sure why 5.0
 would be any different.

 Dave wrote:
 Hello,
 I'm about to create a new bacula server and given the issues i've had
 with mysql5 i do not believe i will be using it. Does anyone have a
 comparison of the various databases, sqlite, postgresql, and mysql,
 specifically with bacula? I'm looking for issues with database 
 interaction
 as well as issues with the database size, i'm not sure how large this new
 box will get.
 Thanks.
 Dave.



 - --
  _  _ _  _ ___  _  _  _
 |Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - Systems Programmer III
 |$| |__| |  | |__/ | \| _| |[EMAIL PROTECTED] - 973/972.0922 (2-0922)
 \__/ Univ. of Med. and Dent.|IST/AST - NJMS Medical Science Bldg - C630
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.5 (MingW32)
 Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

 iD8DBQFGXaFzmb+gadEcsb4RAmAXAKCRYT7eL8qG+h+TJ5ryXBd+UzT4AACgtCrg
 pZTPYE+5bkVSo++HUW/6aPs=
 =GGYp
 -END PGP SIGNATURE- 




Re: [Bacula-users] bacula database comparison and pitfallsbacula database comparison

2007-05-30 Thread Dave
Hi,
Thanks for your reply. I tried that, adding the wait option to my my.cnf file; I don't know why it didn't work for me, but I still get Bacula database corruption with MySQL 5.
Thanks.
Dave.
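
For reference, the setting in question is usually the server-side idle timeout; a minimal sketch of the my.cnf fragment, with illustrative values rather than anything tested against this setup:

 [mysqld]
 # keep long-idle connections (such as Bacula's during a big job) from being dropped
 wait_timeout = 86400
 interactive_timeout = 86400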

- Original Message - 
From: Frank Sweetser [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED]
Cc: Ryan Novosielski [EMAIL PROTECTED]; 
bacula-users@lists.sourceforge.net
Sent: Wednesday, May 30, 2007 1:01 PM
Subject: Re: [Bacula-users] bacula database comparison and pitfallsbacula 
database comparison


 Dave wrote:
 Hi,
 Thanks for your reply. For me and i don't know if this is in general,
 but in 5.0 there's a new timeout feature and when it kicks in if bacula 
 is
 running i end up with table corruption.

 This can be worked around on the MySQL side.

 http://paramount.ind.wpi.edu/wiki/doku.php?id=faq#mysql_server_has_gone_away

 -- 
 Frank Sweetser fs at wpi.edu  |  For every problem, there is a solution 
 that
 WPI Network Engineer  |  is simple, elegant, and wrong. - HL 
 Mencken
GPG fingerprint = 6174 1257 129E 0D21 D8D4  E8A3 8E39 29E3 E2E8 8CEC 




[Bacula-users] bacula on centos with different filesystems, recovery

2007-05-28 Thread Dave
Hello,
I've got a CentOS 5 box that needs to be fully backed up because it's going to be reinstalled with RAID. I don't want to lose any data. I set up a backup job that backed up / and /boot; do I need to do any more? The job completed successfully, but I'm getting this in the email output:

wserv-fd:  /sys is a different filesystem. Will not descend from / into /sys
wserv-fd:  /dev is a different filesystem. Will not descend from / into /dev

Do I need either of these filesystems if I'm going to be recreating the system and then restoring from backup?
Thanks.
Dave.
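
For completeness, a hedged sketch of the two usual ways to handle the will-not-descend message when real filesystems (say a separate /home) live on their own mounts: either list each mount point as its own File = line, or set onefs = no so Bacula crosses mount points, and then explicitly exclude the virtual filesystems. Names are illustrative:

 FileSet {
   Name = centos-full            # hypothetical name
   Include {
     Options {
       signature = MD5
       onefs = no                # descend into other mounted filesystems
     }
     File = /
     File = /boot
   }
   Exclude {
     File = /proc
     File = /sys
     File = /dev
   }
 }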




[Bacula-users] bacula daemon msg

2007-05-28 Thread Dave
Hello,
I got this message this morning from Bacula. Any suggestions?

 28-May 04:22 zeus-dir:  Error: bnet.c:412 Wrote 4 bytes to client:192.168.0.2:36131, but only 0 accepted.

That box is a Windows box, running TLS communications encryption. I can perform a status client on it and see it, so this is not a firewall issue.
Thanks.
Dave.




Re: [Bacula-users] postfix and bacula bsmtp mail option

2007-05-27 Thread Dave
Hi Kern,
Thanks, that did it. I replaced the from field with %r as you indicated and it fired right up: it sent the email when my catalog job was done, which came in as [EMAIL PROTECTED]. I then changed the %r again to the fully qualified hostname address I wanted and that worked too. I would mention this in the manual; it might be useful.
Thanks again to everyone.
Dave.
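
For anyone hitting the same 501, the working form is roughly the following, a sketch assuming bsmtp in /usr/local/sbin and either plain %r or a single fully qualified address after -f:

   mailcommand = "/usr/local/sbin/bsmtp -h localhost -f \"%r\" -s \"Bacula: %t %e of %c %l\" %r"
   operatorcommand = "/usr/local/sbin/bsmtp -h localhost -f \"%r\" -s \"Bacula: Intervention needed for %j\" %r"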

- Original Message - 
From: Kern Sibbald [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED]
Cc: bacula-users bacula-users@lists.sourceforge.net
Sent: Sunday, May 27, 2007 7:48 AM
Subject: Re: postfix and bacula bsmtp mail option


 On Sunday 27 May 2007 13:43, Kern Sibbald wrote:
 Hello,

  The solution to your problem is *most likely* to put only \"%r\" in the from
  field (i.e. after the -f option).

  The handling of more complex from fields in bsmtp was not really correct for
  all SMTP servers (for the reasons you state). However, version 2.1.11 has a
  fix for this, provided you use the form in the manual and in the default
  bacula-dir.conf file, which is not what you have in your conf file below.
  What you have below isn't going to work on any SMTP server that I know of.

 Correction -- you are using what is shown in the manual.  I guess over the
 years it has changed a bit ...  Oh well.   My basic advice in the first
 paragraph is still correct.


 Regards,

 Kern

  Date: Fri, 25 May 2007 16:50:14 -0400
  From: Dave [EMAIL PROTECTED]
  Subject: [Bacula-users] postfix and bacula bsmtp mail option
  To: bacula-users@lists.sourceforge.net
  Message-ID: [EMAIL PROTECTED]
  Content-Type: text/plain; format=flowed; charset=iso-8859-1;
  reply-type=original

  Hello,
   I'm running postfix 2.4.1 on FreeBSD, with the option
  strict_rfc821_envelopes set to yes in my main.cf. I set up Bacula on 
  this
  box as well, v2.03. When i had strict_rfc821_envelopes set to yes I 
  kept
  getting the error fatal malformed reply from localhost and an error 
  501
  from postfix in my bacula log. If i change the strict_rfc821_envelopes
  option to no it works fine. I'd rather not change this option, does 
  anyone
  have a workaround? The specific error i'm seeing in the log is:

  25-May 16:26 zeus-dir: message.c:481 Mail prog: bsmtp: bsmtp.c:92 Fatal
  malformed reply from localhost: 501 5.1.7 Bad sender address syntax

  and the message resource looks like this:

  Messages {
Name = Standard
  #
  # NOTE! If you send to two email or more email addresses, you will need
  #  to replace the %r in the from field (-f part) with a single valid
  #  email address in both the mailcommand and the operatorcommand.
  #
mailcommand = /usr/local/sbin/bsmtp -h localhost -f
  \\([EMAIL PROTECTED]) %r\ -s \Bacula: %t %e of %c %l\ %r
operatorcommand = /usr/local/sbin/bsmtp -h localhost -f
  \\([EMAIL PROTECTED]) %r\ -s \Bacula: Intervention needed 
  for
  %j\ %r
mail = [EMAIL PROTECTED] = all, !skipped
operator = [EMAIL PROTECTED] = mount
console = all, !skipped, !saved
  #
  # WARNING! the following will create a file that you must cycle from
  #  time to time as it will grow indefinitely. However, it will
  #  also keep all your messages if they scroll off the console.
  #
append = /var/db/bacula/log = all, !skipped
  }

  Messages {
Name = Daemon
mailcommand = /usr/local/sbin/bsmtp -h localhost -f
  \\([EMAIL PROTECTED]) %r\ -s \Bacula daemon message\ %r
mail = [EMAIL PROTECTED] = all, !skipped
console = all, !skipped, !saved
append = /var/db/bacula/log = all, !skipped
  }

  Thanks.
  Dave.
 




Re: [Bacula-users] postfix and bacula bsmtp mail option

2007-05-26 Thread Dave
Hello,
Thanks for your reply. I added a \ and \ to the bsmtp line and that didn't do it. If you've got this going, could I see your bsmtp line for comparison?
Thanks.
Dave.

- Original Message - 
From: Arno Lehmann [EMAIL PROTECTED]
To: bacula-users@lists.sourceforge.net
Sent: Friday, May 25, 2007 5:27 PM
Subject: Re: [Bacula-users] postfix and bacula bsmtp mail option


 Hi,

 On 5/25/2007 10:50 PM, Dave wrote:
 Hello,
  I'm running postfix 2.4.1 on FreeBSD, with the option
 strict_rfc821_envelopes set to yes in my main.cf. I set up Bacula on this
 box as well, v2.03. When i had strict_rfc821_envelopes set to yes I kept
 getting the error fatal malformed reply from localhost and an error 501
 from postfix in my bacula log. If i change the strict_rfc821_envelopes
 option to no it works fine. I'd rather not change this option, does 
 anyone
 have a workaround?

 The obvious solution would be to use strictly RFC compliant mail
 addresses for the sender :-)

 Let's see...

 postconf tells me
 strict_rfc821_envelopes (default: no)
    Require that addresses received in SMTP MAIL FROM and RCPT TO commands are
    enclosed with <>, and that those addresses do not contain RFC 822 style comments
    or phrases. This stops mail from poorly written software.

    By default, the Postfix SMTP server accepts RFC 822 syntax in MAIL FROM and
    RCPT TO addresses.

  So, a mail sender address like '[EMAIL PROTECTED]' should work better.
  Actually, assuming your system is set up accordingly, I would even prefer
  '[EMAIL PROTECTED]', but that's a matter of taste, as probably no one
  will ever have reason to reply to that address...

 Unfortunately, I did not reread the relevant RFCs recently ;-) but I
 suppose an address like 'Bacula system [EMAIL PROTECTED]' should
 also work and give you a reasonable sender name in your mail client.

 The specific error i'm seeing in the log is:

 25-May 16:26 zeus-dir: message.c:481 Mail prog: bsmtp: bsmtp.c:92 Fatal
 malformed reply from localhost: 501 5.1.7 Bad sender address syntax

 and the message resource looks like this:

 Messages {
   Name = Standard
 #
 # NOTE! If you send to two email or more email addresses, you will need
 #  to replace the %r in the from field (-f part) with a single valid
 #  email address in both the mailcommand and the operatorcommand.
 #
   mailcommand = /usr/local/sbin/bsmtp -h localhost -f
 \\([EMAIL PROTECTED]) %r\ -s \Bacula: %t %e of %c %l\ %r

 Note that we're talking about the sender address, given after -f, here...

 Arno

   operatorcommand = /usr/local/sbin/bsmtp -h localhost -f
 \\([EMAIL PROTECTED]) %r\ -s \Bacula: Intervention needed 
 for
 %j\ %r
   mail = [EMAIL PROTECTED] = all, !skipped
   operator = [EMAIL PROTECTED] = mount
   console = all, !skipped, !saved
 #
 # WARNING! the following will create a file that you must cycle from
 #  time to time as it will grow indefinitely. However, it will
 #  also keep all your messages if they scroll off the console.
 #
   append = /var/db/bacula/log = all, !skipped
 }

 Messages {
   Name = Daemon
   mailcommand = /usr/local/sbin/bsmtp -h localhost -f
 \\([EMAIL PROTECTED]) %r\ -s \Bacula daemon message\ %r
   mail = [EMAIL PROTECTED] = all, !skipped
   console = all, !skipped, !saved
   append = /var/db/bacula/log = all, !skipped
 }

 Thanks.
 Dave.




 -- 
 IT-Service Lehmann[EMAIL PROTECTED]
 Arno Lehmann  http://www.its-lehmann.de





[Bacula-users] computer drags during backup, win32

2007-05-26 Thread Dave
Hello,
I'm running Bacula 2.0.3 on Windows XP SP2. I have the entire system set to back up, and when doing any kind of backup, if I'm using the box it slows down to a crawl. I'm using VSS, if that matters. On my various Unix boxes this does not happen. I was wondering if there is an equivalent to Unix nice that I could set for win32? If not, is there a way of finding out where this slowdown is coming from (a resource limit, network performance, CPU) and how best to deal with it?
Thanks.
Dave.




Re: [Bacula-users] computer drags during backup, win32

2007-05-26 Thread Dave
Hi,
Thanks for your reply. Is there a way, or would it be possible to implement one, for the niceness of the Bacula process on win32 to be set at install time or via a config option? I'd like to set the priority once and not have to set it again in the process list, or have Bacula do it via its installer.
Dave.
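
There does not appear to be a priority directive in the 2.0.x file daemon, so one hedged workaround sketch is to lower the running process's priority from a small script or scheduled task after the service starts (the priority name is standard Windows; everything else is an assumption about the setup):

 rem lower the CPU priority of the running file daemon (XP/2003 cmd)
 wmic process where name="bacula-fd.exe" call setpriority "below normal"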

- Original Message - 
From: John Drescher [EMAIL PROTECTED]
To: Dave [EMAIL PROTECTED]
Cc: bacula-users@lists.sourceforge.net
Sent: Saturday, May 26, 2007 4:08 AM
Subject: Re: [Bacula-users] computer drags during backup, win32


 Hello,
 I'm running bacula 2.0.3 on windows xpsp2. When doing any kind of
 backup, i have the entire system set to back up and if i'm using the box 
 it
 slows down to a crawl. I'm using vss if that matters. On my various Unix
 boxes this does not happen. I was wondering if there was an equivalent to
 unix nice that i could set for win32? If not, is there a way of finding 
 out
 where this slowdown is coming from, a resource limit, network 
 performance,
 cpu, and how best to deal with it?

 I am not on a Windows box right now so I cannot give you exact details,
 but if you open Task Manager, select Processes, and then right-click on
 the bacula process, you can set it to run at low priority. This will
 reduce the CPU usage, but the disk will still thrash, which may make
 your system unusable anyway.

 John 




[Bacula-users] misc questions on tls and data encryption

2007-05-26 Thread Dave
Hello,
I'm setting up a new Bacula server for a friend. It runs on FreeBSD 6.2 using SQLite 2 as the backend database. All clients are Bacula 2.0.3, as are the director and storage daemons. As an aside: for CentOS 5, are there Bacula 2.0.3 RPMs available that support TLS and data encryption, from a CentOS repo such as RPMforge for example?
Backups are working fine; now I'm implementing TLS communications between the various daemons and data encryption from the file daemon. For TLS encryption I followed:

http://www.devco.net/pubwiki/Bacula/TLS

and for data encryption:

http://www.bacula.org/rel-manual/Data_Encryption.html

I did some initial testing with a remote client, same network, small job. I ran the job twice, once with data encryption and once without, both times with TLS. With encryption on, the job information was:

  Elapsed time:   11 mins 34 secs
  FD Files Written:   3,503
  SD Files Written:   3,503
  FD Bytes Written:   31,160,525 (31.16 MB)
  SD Bytes Written:   32,555,687 (32.55 MB)
  Rate:   44.9 KB/s
  Software Compression:   77.3 %
  Encryption: yes

and with encryption off:

  Elapsed time:   6 mins 6 secs
  FD Files Written:   3,503
  SD Files Written:   3,503
  FD Bytes Written:   29,080,372 (29.08 MB)
  SD Bytes Written:   29,524,318 (29.52 MB)
  Rate:   79.5 KB/s
  Software Compression:   78.8 %
  Encryption: no

After all that, here are my questions. From what I can see there is a performance hit, in throughput, with data encryption. Is encryption done as the files are going out, and if so, is that why the transfer rate is slower? Same question for software compression: the difference is slighter, but without encryption it compresses a little better, though unless you're doing large backups that's probably not significant. Lastly, in both cases the FD and SD files-written values are the same, but the byte counts differ; without encryption the byte values don't match but are not off by much, while with encryption the mismatch is more pronounced. Is the difference with encryption due to the fact that the files are being sent as encrypted files?
Now, away from the results, one last general question. Following the manual section above I created a master key, called master.key and master.crt, and a file-daemon-specific key, called hostname-fd.key and hostname-fd.crt. One of my PKI lines references the master public key, but aside from that reference there was no interaction between the keys during creation. I don't understand how this master key can decrypt the client-encrypted data if the client-specific keys are lost, since the private keys are not the same.
I hope all this makes sense.
Thanks.
Dave.
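
On the master-key point, as I understand the manual the connection happens at backup time rather than at key creation: the file daemon generates a random session key for each backup and encrypts a copy of it to every certificate it is configured with, so either the client keypair or the master key can later unwrap it. A minimal sketch of the relevant bacula-fd.conf fragment, with file names assumed from the manual's walk-through:

 FileDaemon {
   # ... existing directives unchanged ...
   PKI Signatures = Yes
   PKI Encryption = Yes
   PKI Keypair    = "/usr/local/etc/bacula/hostname-fd.pem"   # client private key + certificate
   PKI Master Key = "/usr/local/etc/bacula/master.crt"        # public certificate only; keep master.key offline
 }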




[Bacula-users] bacula contributed rpms, differences?

2007-05-26 Thread Dave
Hello,
I was on the Bacula SourceForge site looking for RHEL 5 RPMs for CentOS 5. I found several RPM areas, all for 2.0.3 but with different contributors. I was wondering what the difference is between the various RPMs?
Thanks.
Dave.




[Bacula-users] bacula client, automatic volume management

2007-05-26 Thread Dave
Hello,
I've got a client on this new Bacula server that I want to be completely automated in terms of volume and backup management: Bacula shouldn't require any intervention for new volumes. I've got the setup below; my only question is about my file, job, and volume retention periods. Are they right for hands-off operation? Will volumes rotate on schedule without needing new ones?
Thanks.
Dave.

#
# Bacula client1 client configuration
#

Job {
  Name = backup_client1
  Type = Backup
 Level = Incremental
  Schedule = Client1Cycle
  Messages = Standard
  Client = client1-fd
  FileSet = fbsd_client1
  Pool = Default
Full Backup Pool = Client1_Full
Incremental Backup Pool = Client1_Incremental
Storage = Client1Storage
  Write Bootstrap = /backup/bacula/client1/client1.bsr
  Priority = 10
}

FileSet {
  Name = fbsd_client1
  Include {
options {
Compression=GZIP9
Signature=SHA1
aclsupport=yes
}
File = /
File = /usr
File = /var
  }
#
# If you backup the root directory, the following two excluded
#   files can be useful
#
  Exclude {
File = /proc
File = /tmp
File = /.journal
File = /.fsck
File = /dev
  }
  }

#
# When to do the backups, full backup on first sunday of the month,
#  differential (i.e. incremental since full) every other sunday,
#  and incremental backups other days
Schedule {
  Name = Client1Cycle
Run = Full 2nd,4th sun at 5:05AM
  Run = Differential 1st,3rd,5th sun at 5:05AM
  Run = Incremental mon-sat at 5:05AM
}

Client {
  Name = client1-fd
  Address = client1.example.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = x # password for FileDaemon
  File Retention = 30 days
  Job Retention = 6 months
  AutoPrune = yes   # Prune expired Jobs/Files
TLS Enable = yes
TLS Require = yes
TLS CA Certificate File = /usr/local/etc/bacula/ca-cert.pem
TLS Certificate = /usr/local/etc/bacula/bacula.example.com.crt
TLS Key = /usr/local/etc/bacula/bacula.example.com.key
}

Storage {
  Name = Client1Storage
  Address = bacula.example.com  # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = x
  Device = Client1Storage
  Media Type = File
TLS Enable = yes
TLS Require = Yes
TLS CA Certificate File = /usr/local/etc/bacula/ca-cert.pem
TLS Certificate = /usr/local/etc/bacula/bacula.example.com.crt
TLS Key = /usr/local/etc/bacula/bacula.example.com.key
}

pool {
 Name = Client1_Full
 Pool Type = Backup
Use Volume Once = yes
 Recycle = yes # Bacula can automatically recycle volumes
 AutoPrune = yes # Prune expired volumes
  Volume Retention = 21 days
Maximum Volumes = 3 # Keep 3 fulls
LabelFormat = client1-full
}

pool {
 Name = Client1_Incremental
 Pool Type = Backup
Use Volume Once = yes
 Recycle = yes # Bacula can automatically recycle volumes
 AutoPrune = yes # Prune expired volumes
  Volume Retention = 21 days
Maximum Volumes = 10 # Keep 10 incrementals
LabelFormat = client1-incremental
}
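
A hedged sanity check on the incremental pool, assuming a volume can only be reused once its Volume Retention has expired: with Use Volume Once = yes and incrementals Monday through Saturday, about 6 volumes are consumed per week, so a 21-day retention needs on the order of 18 volumes rather than 10. Either raising Maximum Volumes or shortening the retention should keep it hands-off; an illustrative adjustment:

 pool {
  Name = Client1_Incremental
  Pool Type = Backup
  Use Volume Once = yes
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 21 days
  Maximum Volumes = 20 # assumption: 3 weeks x 6 incrementals, plus slack
  LabelFormat = client1-incremental
 }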
 




[Bacula-users] postfix and bacula bsmtp mail option

2007-05-25 Thread Dave
Hello,
 I'm running Postfix 2.4.1 on FreeBSD, with the option strict_rfc821_envelopes set to yes in my main.cf. I set up Bacula on this box as well, v2.0.3. When I had strict_rfc821_envelopes set to yes I kept getting a "fatal malformed reply from localhost" error and a 501 error from Postfix in my Bacula log. If I change the strict_rfc821_envelopes option to no, it works fine. I'd rather not change this option; does anyone have a workaround? The specific error I'm seeing in the log is:

25-May 16:26 zeus-dir: message.c:481 Mail prog: bsmtp: bsmtp.c:92 Fatal 
malformed reply from localhost: 501 5.1.7 Bad sender address syntax

and the message resource looks like this:

Messages {
  Name = Standard
#
# NOTE! If you send to two email or more email addresses, you will need
#  to replace the %r in the from field (-f part) with a single valid
#  email address in both the mailcommand and the operatorcommand.
#
  mailcommand = /usr/local/sbin/bsmtp -h localhost -f 
\\([EMAIL PROTECTED]) %r\ -s \Bacula: %t %e of %c %l\ %r
  operatorcommand = /usr/local/sbin/bsmtp -h localhost -f 
\\([EMAIL PROTECTED]) %r\ -s \Bacula: Intervention needed for 
%j\ %r
  mail = [EMAIL PROTECTED] = all, !skipped
  operator = [EMAIL PROTECTED] = mount
  console = all, !skipped, !saved
#
# WARNING! the following will create a file that you must cycle from
#  time to time as it will grow indefinitely. However, it will
#  also keep all your messages if they scroll off the console.
#
  append = /var/db/bacula/log = all, !skipped
}

Messages {
  Name = Daemon
  mailcommand = /usr/local/sbin/bsmtp -h localhost -f 
\\([EMAIL PROTECTED]) %r\ -s \Bacula daemon message\ %r
  mail = [EMAIL PROTECTED] = all, !skipped
  console = all, !skipped, !saved
  append = /var/db/bacula/log = all, !skipped
}

Thanks.
Dave.





[Bacula-users] bacula database comparison and pitfallsbacula database comparison

2007-05-23 Thread Dave
Hello,
I'm about to create a new Bacula server, and given the issues I've had with MySQL 5 I do not believe I will be using it. Does anyone have a comparison of the various databases (SQLite, PostgreSQL, and MySQL) specifically with Bacula? I'm looking for issues with database interaction as well as with database size; I'm not sure how large this new box will get.
Thanks.
Dave.




[Bacula-users] bacula retention periods for disk backups

2007-04-26 Thread Dave
Hello,
I'm trying to set up disk backup jobs. I've included my config at the end of this message, but I'm confused about the difference between the file and job retention in the client definition and the volume retention in the pool definition. I've read the Bacula manual on these topics, and I understand the shortest one takes priority, but I'm not sure what values to set.
I've got a machine that has two jobs. The first backs up general files, and I want it to back up according to its schedule and have Bacula automatically handle volume rotation. The second job is a monthly backup of /home, where I want one full backup and then incrementals until the next full, again with automatic volume rotation, i.e. month 2's full volume overwrites the current one and the incrementals do the same.
The problem I'm having is that I occasionally get the error "no appendable volumes", which tells me I don't have my retention periods quite right for this setup. Any help appreciated.
Thanks.
Dave.

#
# Bacula zeus client configuration
#

Job {
  Name = backup_zeus
  Type = Backup
 Level = Incremental
  Schedule = ZeusCycle
  Messages = Standard
  Client = zeus-fd
  FileSet = fbsd_zeus
  Pool = Default
Full Backup Pool = Zeus_Full
Incremental Backup Pool = Zeus_Incremental
Storage = ZeusStorage
  Write Bootstrap = /var/db/bacula/zeus.bsr
  Priority = 10
}

Job {
  Name = backup_zeus_home
  Type = Backup
 Level = Incremental
  Schedule = ZeusCycleHome
  Messages = Standard
  Client = zeus-fd
  FileSet = fbsd_zeus_home
  Pool = Default
Full Backup Pool = Zeus_Full_Home
Incremental Backup Pool = Zeus_Incremental_Home
Storage = ZeusStorage
  Write Bootstrap = /var/db/bacula/zeus.bsr
  Priority = 10
}

FileSet {
  Name = fbsd_zeus
  Include {
options {
Compression=GZIP9
Signature=SHA1
aclsupport=yes
}
 File = /etc
File = /usr/local/etc
File = /usr/local/www
  }
  Exclude {
File = /proc
File = /tmp
File = /.journal
File = /.fsck
File = /usr/src
File = /usr/ports
File = /usr/doc
File = /usr/share/doc
File = /usr/obj
File = /tmp
File = /var/tmp
}
  }

FileSet {
  Name = fbsd_zeus_home
  Include {
options {
Compression=GZIP9
Signature=SHA1
aclsupport=yes
}
 File = /home
  }
  Exclude {
File = /proc
File = /tmp
File = /.journal
File = /.fsck
File = /usr/src
File = /usr/ports
File = /usr/doc
File = /usr/share/doc
File = /usr/obj
File = /tmp
File = /var/tmp
}
  }

Schedule {
  Name = ZeusCycle
  Run = Full 1st sun at 1:05AM
  Run = Incremental mon-sat at 1:05AM
}

Schedule {
  Name = ZeusCycleHome
  Run = Full 1st sun at 4:00AM
  Run = Incremental mon-sat at 4:00AM
}

Client {
  Name = zeus-fd
  Address = bacula.example.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = xxx   # password for FileDaemon
  File Retention = 62 days
  Job Retention = 3 months
  AutoPrune = yes   # Prune expired Jobs/Files
}

Storage {
  Name = ZeusStorage
  Address = bacula.example.com
  SDPort = 9103
  Password = baculapassword1
  Device = ZeusStorage
  Media Type = File
}

pool {
 Name = Zeus_Full
 Pool Type = Backup
Maximum Volume Jobs = 1 # New file for each backup
 Recycle = yes # Bacula can automatically recycle volumes
 AutoPrune = yes # Prune expired volumes
  Volume Retention = 27 days
Maximum Volumes = 2 # Keep 2 fulls
LabelFormat = zeus-full
}

pool {
 Name = Zeus_Incremental
 Pool Type = Backup
Maximum Volume Jobs = 1 # New file for each backup
 Recycle = yes # Bacula can automatically recycle volumes
 AutoPrune = yes # Prune expired volumes
  Volume Retention = 312 hours
  Volume Use Duration = 6 days
Maximum Volumes = 10 # Keep 10 incrementals
LabelFormat = zeus-incremental
}

pool {
 Name = Zeus_Full_Home
 Pool Type = Backup
Maximum Volume Jobs = 1 # New file for each backup
 Recycle = yes # Bacula can automatically recycle volumes
 AutoPrune = yes # Prune expired volumes
  Volume Retention = 62 days
Maximum Volumes = 2 # Keep 2 fulls
LabelFormat = zeus-full-home
}

pool {
 Name = Zeus_Incremental_Home
 Pool Type = Backup
Maximum Volume Jobs = 1 # New file for each backup
 Recycle = yes # Bacula can automatically recycle volumes
 AutoPrune = yes # Prune expired volumes
  Volume Retention = 31 days
Maximum Volumes = 31 # Keep 10 incrementals
LabelFormat = zeus-incremental-home
}
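
A hedged back-of-the-envelope check against the no-appendable-volumes symptom, assuming a volume is only reusable after its Volume Retention expires: with Maximum Volume Jobs = 1 and incrementals Monday through Saturday, Zeus_Incremental consumes about 6 volumes a week, so a 312-hour (13-day) retention needs roughly 12 volumes, a little more than the 10 allowed. An illustrative adjustment that leaves headroom:

 pool {
  Name = Zeus_Incremental
  Pool Type = Backup
  Maximum Volume Jobs = 1 # New file for each backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 312 hours
  Volume Use Duration = 6 days
  Maximum Volumes = 14 # assumption: ~13 days of daily incrementals plus slack
  LabelFormat = zeus-incremental
 }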



[Bacula-users] bacula and database crash

2007-04-24 Thread Dave
Hi,
I've got a problem between Bacula and MySQL and would appreciate any help. Output below.
Thanks.
Dave.

24-Apr 05:51 zeus-dir: Start Backup JobId 70, 
Job=backup_wserv.2007-04-24_05.51.00
24-Apr 05:51 zeus-dir: backup_wserv.2007-04-24_05.51.00 Fatal error: 
sql_find.c:320 sql_find.c:320 query SELECT 
MediaId,VolumeName,VolJobs,VolFiles,VolBlocks,VolBytes,VolMounts,VolErrors,VolWrites,MaxVolBytes,VolCapacityBytes,VolRetention,VolUseDuration,MaxVolJobs,MaxVolFiles,Recycle,Slot,FirstWritten,LastWritten,VolStatus,InChanger,VolParts,LabelType
 
FROM Media WHERE PoolId=13 AND MediaType='File' AND Enabled=1 AND 
VolStatus='Append'  ORDER BY LastWritten IS NULL,LastWritten DESC,MediaId 
LIMIT 1 failed:
Table 'Media' is marked as crashed and should be repaired
24-Apr 05:51 zeus-dir: sql_find.c:320 SELECT 
MediaId,VolumeName,VolJobs,VolFiles,VolBlocks,VolBytes,VolMounts,VolErrors,VolWrites,MaxVolBytes,VolCapacityBytes,VolRetention,VolUseDuration,MaxVolJobs,MaxVolFiles,Recycle,Slot,FirstWritten,LastWritten,VolStatus,InChanger,VolParts,LabelType
 
FROM Media WHERE PoolId=13 AND MediaType='File' AND Enabled=1 AND 
VolStatus='Append'  ORDER BY LastWritten IS NULL,LastWritten DESC,MediaId 
LIMIT 1
24-Apr 05:51 zeus-dir: backup_wserv.2007-04-24_05.51.00 Fatal error: 
sql_find.c:320 sql_find.c:320 query SELECT 
MediaId,VolumeName,VolJobs,VolFiles,VolBlocks,VolBytes,VolMounts,VolErrors,VolWrites,MaxVolBytes,VolCapacityBytes,VolRetention,VolUseDuration,MaxVolJobs,MaxVolFiles,Recycle,Slot,FirstWritten,LastWritten,VolStatus,InChanger,VolParts,LabelType
 
FROM Media WHERE PoolId=13 AND MediaType='File' AND Enabled=1 AND 
VolStatus='Recycle'  ORDER BY LastWritten IS NULL,LastWritten DESC,MediaId 
LIMIT 1 failed:
Table 'Media' is marked as crashed and should be repaired
24-Apr 05:51 zeus-dir: sql_find.c:320 SELECT 
MediaId,VolumeName,VolJobs,VolFiles,VolBlocks,VolBytes,VolMounts,VolErrors,VolWrites,MaxVolBytes,VolCapacityBytes,VolRetention,VolUseDuration,MaxVolJobs,MaxVolFiles,Recycle,Slot,FirstWritten,LastWritten,VolStatus,InChanger,VolParts,LabelType
 
FROM Media WHERE PoolId=13 AND MediaType='File' AND Enabled=1 AND 
VolStatus='Recycle'  ORDER BY LastWritten IS NULL,LastWritten DESC,MediaId 
LIMIT 1
24-Apr 05:51 zeus-dir: backup_wserv.2007-04-24_05.51.00 Error: Query failed: 
SELECT MediaId,LastWritten FROM Media WHERE PoolId=13 AND Recycle=1 AND 
VolStatus='Purged' AND Enabled=1 AND MediaType='File' ORDER BY LastWritten 
ASC,MediaId LIMIT 1: ERR=Table 'Media' is marked as crashed and should be 
repaired
24-Apr 05:51 zeus-dir: backup_wserv.2007-04-24_05.51.00 Error: Media id 
select failed: ERR=Table 'Media' is marked as crashed and should be repaired
24-Apr 05:51 zeus-dir: backup_wserv.2007-04-24_05.51.00 Error: Media id 
select failed: ERR=Table 'Media' is marked as crashed and should be repaired
24-Apr 05:51 zeus-dir: backup_wserv.2007-04-24_05.51.00 Error: Query failed: 
SELECT MediaId,LastWritten FROM Media WHERE PoolId=13 AND Recycle=1 AND 
VolStatus='Purged' AND Enabled=1 AND MediaType='File' ORDER BY LastWritten 
ASC,MediaId LIMIT 1: ERR=Table 'Media' is marked as crashed and should be 
repaired
24-Apr 05:51 zeus-dir: Created new Volume wserv-incremental0006 in 
catalog.
24-Apr 05:51 zeus-dir: backup_wserv.2007-04-24_05.51.00 Warning: Error 
getting Media record for Volume wserv-incremental0006: ERR=Media record 
for Vol=wserv-incremental0006 not found in Catalog.
24-Apr 05:51 zeus-dir: backup_wserv.2007-04-24_05.51.00 Error: Bacula 2.0.3 
(06Mar07): 24-Apr-2007 05:51:04
  JobId:  70
  Job:backup_wserv.2007-04-24_05.51.00
  Backup Level:   Incremental, since=2007-04-21 05:51:06
  Client: wserv-fd 2.0.3 (06Mar07) 
i686-redhat-linux-gnu,redhat,
  FileSet:fbsd_wserv 2007-04-15 11:11:06
  Pool:   Wserv_Incremental (From Job IncPool override)
  Storage:WservStorage (From Job resource)
  Scheduled time: 24-Apr-2007 05:51:00
  Start time: 24-Apr-2007 05:51:02
  End time:   24-Apr-2007 05:51:04
  Elapsed time:   2 secs
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Volume name(s):
  Volume Session Id:  5
  Volume Session Time:1177357669
  Last Volume Bytes:  0 (0 B)
  Non-fatal FD errors:4
  SD Errors:  0
  FD termination status:
  SD termination status:
  Termination:*** Backup Error ***
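
The immediate recovery, hedged as the standard MyISAM repair rather than anything Bacula-specific: stop the director so nothing is writing, then repair the flagged table(s), for example:

 # with bacula-dir stopped, on the database host
 mysqlcheck --repair --user=bacula --password bacula Media
 # or, from the mysql client:
 #   REPAIR TABLE Media;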



[Bacula-users] basic bacula question on retention periods and disk backups

2007-04-15 Thread Dave
Hello,
I'm trying to set up disk backup jobs. I've included my config at the end of this message, but I'm confused about the difference between the file and job retention in the client definition and the volume retention in the pool definition. I've read the Bacula manual on these topics, and I understand the shortest one takes priority, but I'm not sure what values to set.
I've got a machine that has two jobs. The first backs up general files, and I want it to back up according to its schedule and have Bacula automatically handle volume rotation. The second job is a monthly backup of /home, where I want one full backup and then incrementals until the next full, again with automatic volume rotation, i.e. month 2's full volume overwrites the current one and the incrementals do the same.
The problem I'm having is that I occasionally get the error "no appendable volumes", which tells me I don't have my retention periods quite right for this setup. Any help appreciated.
Thanks.
Dave.

#
# Bacula zeus client configuration
#

Job {
  Name = backup_zeus
  Type = Backup
 Level = Incremental
  Schedule = ZeusCycle
  Messages = Standard
  Client = zeus-fd
  FileSet = fbsd_zeus
  Pool = Default
Full Backup Pool = Zeus_Full
Incremental Backup Pool = Zeus_Incremental
Storage = ZeusStorage
  Write Bootstrap = /var/db/bacula/zeus.bsr
  Priority = 10
}

Job {
  Name = backup_zeus_home
  Type = Backup
 Level = Incremental
  Schedule = ZeusCycleHome
  Messages = Standard
  Client = zeus-fd
  FileSet = fbsd_zeus_home
  Pool = Default
Full Backup Pool = Zeus_Full_Home
Incremental Backup Pool = Zeus_Incremental_Home
Storage = ZeusStorage
  Write Bootstrap = /var/db/bacula/zeus.bsr
  Priority = 10
}

FileSet {
  Name = fbsd_zeus
  Include {
options {
Compression=GZIP9
Signature=SHA1
aclsupport=yes
}
 File = /etc
File = /usr/local/etc
File = /usr/local/www
  }
  Exclude {
File = /proc
File = /tmp
File = /.journal
File = /.fsck
File = /usr/src
File = /usr/ports
File = /usr/doc
File = /usr/share/doc
File = /usr/obj
File = /tmp
File = /var/tmp
}
  }

FileSet {
  Name = fbsd_zeus_home
  Include {
options {
Compression=GZIP9
Signature=SHA1
aclsupport=yes
}
 File = /home
  }
  Exclude {
File = /proc
File = /tmp
File = /.journal
File = /.fsck
File = /usr/src
File = /usr/ports
File = /usr/doc
File = /usr/share/doc
File = /usr/obj
File = /tmp
File = /var/tmp
}
  }

Schedule {
  Name = ZeusCycle
  Run = Full 1st sun at 1:05AM
  Run = Incremental mon-sat at 1:05AM
}

Schedule {
  Name = ZeusCycleHome
  Run = Full 1st sun at 4:00AM
  Run = Incremental mon-sat at 4:00AM
}

Client {
  Name = zeus-fd
  Address = bacula.example.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = xxx   # password for FileDaemon
  File Retention = 62 days
  Job Retention = 3 months
  AutoPrune = yes   # Prune expired Jobs/Files
}

Storage {
  Name = ZeusStorage
  Address = bacula.example.com
  SDPort = 9103
  Password = baculapassword1
  Device = ZeusStorage
  Media Type = File
}

pool {
 Name = Zeus_Full
 Pool Type = Backup
Maximum Volume Jobs = 1 # New file for each backup
 Recycle = yes # Bacula can automatically recycle volumes
 AutoPrune = yes # Prune expired volumes
  Volume Retention = 27 days
Maximum Volumes = 2 # Keep 2 fulls
LabelFormat = zeus-full
}

pool {
 Name = Zeus_Incremental
 Pool Type = Backup
Maximum Volume Jobs = 1 # New file for each backup
 Recycle = yes # Bacula can automatically recycle volumes
 AutoPrune = yes # Prune expired volumes
  Volume Retention = 312 hours
  Volume Use Duration = 6 days
Maximum Volumes = 10 # Keep 10 incrementals
LabelFormat = zeus-incremental
}

pool {
 Name = Zeus_Full_Home
 Pool Type = Backup
Maximum Volume Jobs = 1 # New file for each backup
 Recycle = yes # Bacula can automatically recycle volumes
 AutoPrune = yes # Prune expired volumes
  Volume Retention = 62 days
Maximum Volumes = 2 # Keep 2 fulls
LabelFormat = zeus-full-home
}

pool {
 Name = Zeus_Incremental_Home
 Pool Type = Backup
Maximum Volume Jobs = 1 # New file for each backup
 Recycle = yes # Bacula can automatically recycle volumes
 AutoPrune = yes # Prune expired volumes
  Volume Retention = 31 days
Maximum Volumes = 31 # Keep 10 incrementals
LabelFormat = zeus-incremental-home
}



[Bacula-users] appropriate data spool size for tape

2007-04-15 Thread Dave
Hello,
I'm getting my Quantum DLT4000 going with Bacula. I've included my definitions below and the job that I'm running. I'm not getting very good performance with the spool; the drive starts and stops quite frequently. It is on an Adaptec UltraWide 2940 SCSI controller. I was wondering what the optimal size of a tape data spool is?
Thanks.
Dave.

# Definition of Quantum DLT4000 Tape Drive
Device {
  Name = Quantum DLT4000
Description = Quantum DLT4000 Tape Drive for FreeBSD
  Media Type = DLT
  Archive Device = /dev/sa0
LabelMedia = yes;   # lets Bacula label unlabeled Media
Random Access = Yes;
AutomaticMount = yes;   # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Offline On Unmount  = no
Hardware End of Medium  = no
BSF at EOM  = yes
Backward Space Record   = yes
Fast Forward Space File = no
TWO EOF = yes
# for data spooling
Maximum Spool Size = 1024
Maximum Job Spool Size = 1024
Spool Directory = /backup/bacula-spool
}

Job {
 Name = isos-Tape
  Type = Backup
  Client = zeus-fd
  FileSet = Iso Files
  Storage = Quantum DLT4000
  Messages = Standard
  Pool = Default
  Write Bootstrap = /var/db/bacula/tap.bsr
  Priority = 10
SpoolData = yes
}

Storage {
  Name = Quantum DLT4000
  Address = bacula.example.com # N.B. Use a fully qualified name 
here
  SDPort = 9103
  Password =    # password for Storage daemon
  Device = Quantum DLT4000 # must be same as in the storage daemon
  Media Type = DLT  # must be same as MediaType in 
Storage daemon
}

Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle 
Volumes
  Recycle Oldest Volume = yes
  Maximum Volume Jobs = 1
  LabelFormat = TAPE-
  Volume Retention = 1 year
  Storage = Quantum DLT4000
}
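
One hedged observation on the start/stop behaviour: Bacula size directives default to bytes when no unit is given, so the spool above effectively flushes after about 1 KB, which would explain the constant starting and stopping. A sketch with explicit units (the sizes themselves are assumptions to tune against disk space and job size):

 # for data spooling
 Maximum Spool Size = 20G        # total space used under the spool directory
 Maximum Job Spool Size = 10G    # per-job limit
 Spool Directory = /backup/bacula-spool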



