Re: [Bacula-users] Better way to garbage collect postgresql database

2009-03-20 Thread Jesper Krogh
Hemant Shah wrote:
 This is a database question, but I figured some of the bacula users may have 
  come across this problem so I am posting it here.
 
 Every Monday I run the following commands to check and garbage collect the
 bacula database:
 
 dbcheck command
 vacuumdb -q -d bacula -z -f

There is absolutely no reason to VACUUM FULL unless your data size is
actually shrinking over time (over longer periods). A normal VACUUM will
make the dead space available for the next run.. you are most likely
running autovacuum anyway.
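
For illustration, a minimal sketch of that weekly maintenance without the
full vacuum, assuming the same database name as above:

vacuumdb -q -d bacula -z   # analyze (-z) but no -f: no exclusive lock, no rewrite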

 reindexdb

Might make sense, but weekly?..  AFAIK there is a small amount of
index bloat that collects over time in PG, but in general yearly or
monthly should really be sufficient.
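
If yearly or monthly reindexing is enough, it could also be limited to the
biggest table. A minimal sketch, assuming the table name from a stock
Bacula PostgreSQL schema (verify with \dt in psql):

reindexdb -d bacula -t file   # reindex only the file table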

 Usually I purge one or two backup volumes and the above commands run in less
 than 20 minutes.
 
 Before my monthly Full backup I delete a large amount of data from the
 database, as I delete one month's worth of Full and Incremental backups.
 When I run the above commands after the Full backup, the vacuumdb command
 takes 12 hours to run. Is there a faster/better way of doing it?

No, not really. VACUUM FULL is telling PG to reorder the data on-disk and
free up space for the OS, and that is a hard task. But it is also not
needed, since you are going to use the space within the next week/month
anyway.. so an ordinary VACUUM is sufficient.

 My database is about 9GB.
 
 If I backup the database using pg_dump and then restore it, will it do the
  same thing as the vacuumdb and reindexdb commands?

Basically yes.
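
For illustration, a minimal sketch of that dump/reload cycle, assuming the
director is stopped and the database can be dropped (paths are illustrative):

pg_dump bacula > /tmp/bacula.sql     # dump while bacula-dir is stopped
dropdb bacula
createdb bacula
psql -d bacula -f /tmp/bacula.sql    # the reload rebuilds tables and indexes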



Re: [Bacula-users] End of tape error - pbzip2

2009-03-20 Thread Ralf Brinkmann
John Drescher schrieb:
 I
 have not seen a CPU that can do more than 20 MB/s. I know my 2.83GHz
 Core 2 Quad is nowhere near as fast as my LTO2 tape drive when it comes to
 compression.
 there is a multi-threading version of bzip2 - but I have no idea whether
 bacula will be able to handle bzip2

 This is pbzip2, I use it for a custom build process with Gentoo. I am
 not sure how hard it would be to add this to bacula. 

Our bottleneck is the network. So I ran a test with this
compression tool while the servers were on their daily job.

It seems the multi-threaded bzip2 compresses roughly 50 percent
more data in the same time than bacula shows for the average
transfer rate of the uncompressed data.

But I think for a significant benchmark I have to measure the average
transfer rate of the whole nightly backup process including the
compression step.

-- 
Ralf Brinkmann



[Bacula-users] Volume labeling according to Job Problem

2009-03-20 Thread Saur, Achim
Hi there,

 

we have a problem with the labeling of volumes. The volumes are labeled with
the name of the job plus the pool used.

We use hard disks as media. The strange thing is that volumes are labeled
with the wrong name. We currently have 4 jobs, and we are planning to
implement it for more. It isn't always the same job that is labeled wrong.

 

 

The output of the job:

 

Build OS:   i486-pc-linux-gnu debian lenny/sid
  JobId:  176
  Job:Gatexx.2009-03-19_23.05.07
  Backup Level:   Incremental, since=2009-03-18 23:05:57
  Client: gatexx.-fd 2.2.8 (26Jan08) i486-pc-linux-gnu,debian,lenny/sid
  FileSet:Full Set 2009-02-20 11:39:13
  Pool:   Incr (From Job IncPool override)
  Storage:File (From Job resource)
  Scheduled time: 19-Mär-2009 23:05:00
  Start time: 19-Mär-2009 23:06:01
  End time:   19-Mär-2009 23:06:10
  Elapsed time:   9 secs
  Priority:   10
  FD Files Written:   335
  SD Files Written:   335
  FD Bytes Written:   49,219,193 (49.21 MB)
  SD Bytes Written:   49,256,673 (49.25 MB)
  Rate:   5468.8 KB/s
  Software Compression:   75.1 %
  VSS:no
  Storage Encryption: no
  Volume name(s): bacula-fd-200922523231-incr

 

 

 

The configuration of the job:

 

Job {
  Name = Gatexx
  Type = Backup
  Client = gatexx.-fd
  FileSet = Full Set
  Schedule = WeeklyCycle
  Storage = File
  Messages = Standard
  Pool = Default
  Full Backup Pool = Full
  Incremental Backup Pool = Incr
  Differential Backup Pool = Diff
  Write Bootstrap = /var/lib/bacula/gatexx..bsr
}

Here is my pool:

 

Pool {
  Name = Incr
  Pool Type = Backup
  Recycle = yes
  Autoprune = yes
  Volume Retention = 22 days
  Label Format = $JobName-incr
  Maximum Volume Jobs = 1
  Use Volume Once = yes
}

If I run the jobs manually everything is OK: the volumes are labeled
according to their jobs. Do I have to upgrade Bacula because this is a known
bug in my version? Or have I overlooked something?

OS used: Ubuntu 8.04 LTS

Bacula Version: 2.2.8-5ubuntu7.2

Hints are appreciated.

Best regards

Achim Saur



Re: [Bacula-users] Issue with Restoring a Full which Spanned Tapes.

2009-03-20 Thread Ralf Gross
Doug Forster schrieb:
 I have gone into the database and can see that the database is empty for the
 job in question. I think that there is an issue with the insertion of over a
 million entries all at once that is giving bacula a hard time. I have found
 a supporting post here:
 http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/bacula-25/bacula-prunes-files-too-early-95935/
 but they really didn't get a resolution either. Is this addressed in any of
 the change logs that

Are you using psql? Is your bacula db SQL_ASCII or UTF8?

http://article.gmane.org/gmane.comp.sysutils.backup.bacula.devel/12074/match=utf+sql%5fascii

Any errors in your psql log? This problem is serious and I can't
understand why it's still not caught in bacula (to my knowledge).

Ralf



[Bacula-users] Bconsole prune/purge commands

2009-03-20 Thread Ferdinando Pasqualetti
Hello List,
maybe I am missing something, but I have had this question for a long time.
Does anybody know if there is a reason why the prune command, which is
not dangerous and is also automatically triggered in some cases, asks for a
confirmation before being executed, while the purge command, which
overrides retention times, shows a caution message but is executed
anyway?
Maybe it depends on some pool definition, but I did not see anything
suitable.

--
Ferdinando Pasqualetti
G.T.Dati srl
Tel. 0557310862 - 3356172731 - Fax 055720143




Re: [Bacula-users] End of tape error - pbzip2

2009-03-20 Thread Ralf Brinkmann
John Drescher schrieb:

 This is pbzip2, I use it for a custom build process with gentoo. I am
 not sure how hard it would be to add this to bacula. 

I'm not willing to go through the bacula code, but I think it might be easy
to write my own wrapper for pbzip2 if I knew how bacula calls the
compression tools.

-- 
Ralf Brinkmann






[Bacula-users] concurrent job / compression

2009-03-20 Thread Olivier Delestre
Hello,

I ask myself:

Is it possible to run backups with concurrent jobs enabled while activating
compression (gzip) only on certain clients?
Would mixing clients that compress with clients that do not cause
problems during restoration?

Thanks.
Olivier Delestre



[Bacula-users] An idea for a feature (not a request yet :) - weighted concurrency

2009-03-20 Thread James Harper
I have half an idea for a feature request but it's not well defined
yet...

Basically, I have a bunch of clients to back up; some are on a gigabit
network and some are stuck on 100mbit. They are being backed up to a
disk that has a throughput of around 20-30 Mbytes/second.

I am allowing 2 jobs to run at once, which works well but I think it
could work better. I was thinking of something along the lines of
assigning a 'cost' to each client, and have the scheduler make sure that
the sum of all current clients is under some threshold, something like:

Client1 - gigabit network and fast disks - cost = 100
Client2 - gigabit network and slow disks - cost = 50
Client3 - 100mbit network - cost = 20
Client4 - 100mbit network - cost = 20
Client5 - gigabit network and fast disks - cost = 100
Client6 - gigabit network and fast disks - cost = 100

Client1, 5, and 6 are pretty much capable of saturating the backup medium
on their own; Client2, 3, and 4 are not. I could then set a maximum
concurrent job 'cost' of 150, and bacula would run as many jobs as it
could concurrently as long as the total cost remained <= that
figure. So...

Client1 and 2 would start running (total cost = 150)
Client1 would finish and Client3 and 4 would start (total cost = 90)
Client2 would finish and Client5 would start (total cost = 140)
Client3 would finish
Client4 would finish
Client5 would finish and then Client6 would start

What I'm finding now is that once the two slowest jobs start running,
there is idle bandwidth on the backup medium until one of them finishes
and a faster one starts. The problem gets worse once you start thinking
about maybe backing up some data over a WAN or something. Maybe I could
increase the concurrent jobs to 3 or more, but I'd like to minimise the
backup window of each client if I can.

Comments?

James



Re: [Bacula-users] Best way to backup simultaneously

2009-03-20 Thread Kevin Keane
Jason Dixon wrote:
 On Thu, Mar 19, 2009 at 06:08:23PM -0700, Kevin Keane wrote:
   
 Jason Dixon wrote:
 
 I've tried that.  But since the scheduled OS backup jobs are already
 running, the client-initiated transaction log jobs are forced to wait.
   
   
  Then you probably still had a Maximum Concurrent Jobs = 1 setting somewhere.
 

 The only place I have Maximum Concurrent Jobs = 1 are for the OS
 backups.  The database and transaction log jobs all use Maximum
 Concurrent Jobs = 10.
   
As long as they both go to the same storage, that one single Maximum 
Concurrent Jobs will block the other backups.
 What I am thinking is that if the transaction log only needs to be kept
 for 24 hours (since the last full backup), you might be able - and
 better off - to store it on a hard disk rather than a tape in the first
 place. Preferably, of course, a separate hard disk.
 

 We run full backups every day of the database.  Mind you, these are 
 backups of a slave replication PostgreSQL database.  There are multiple
 layers of redundancy and recovery.

   
 I've been unable to get Migration working as intended.  I've posted
 previously about that but never got a working conclusion.
   
   
 I'm not talking about migration. Spooling is a pretty low-level feature; 
 it's in the original job itself. Basically, instead of writing the data 
 from a job straight to the tape, bacula creates a file, stores the data 
 there, and then as soon as the tape becomes available copies it to the tape.
 

 One of the requirements dictated to me is that these logs are saved to
 tape immediately.  Spooling or queueing to disk is not a viable option
 for this application (in management's eyes).
   
Oh, the pesky little thing called requirements. Been there, done that (of 
course, we all have).
 Thank you for your feedback thus far.  But please understand that my
 role within the scope of this task is that of the implementor of these
 requirements.  I'd like to complete this as painlessly and correctly
 within the parameters defined.
Been there, done that. But ultimately, that's not the point here. What I 
was trying to do was first to *understand* the requirements. Usually, I 
have found that impossible requirements come down from management for one 
of a few reasons: there is something going on that I am not aware of, or 
there was a simple misunderstanding in the first place, or management has 
not been giving you requirements at all, but rather came up with what they 
thought was a solution. And if it doesn't work, you'll get the blame 
even if it was the wrong solution in the first place. In that case, it's 
a matter of office politics.

Say you are given the requirement "I want you to build a race car, but 
it must be built with a John Deere engine and able to transport two 20' 
cargo containers." At first glance, completely nonsensical. It comes 
down to figuring out whether the requirement came down because the race was 
a tractor-trailer race sponsored by John Deere - or because the manager 
didn't know anything but John Deere and shipping containers.

That is why I usually like to dig beyond the requirements, and 
understand the context; it will address both issues. That's what I was 
trying to do here.

Very often, once you understand a requirement, you'll find that it 
actually can be implemented with a slight tweak.
 Under other circumstances, I'd be happy
 to weigh the pros and cons of the architecture.  But in this case, it's
 just a matter of:

 1) Backing up the transaction logs immediately (initiated by client).
 2) Backing up the full Database snapshot.
 3) Backing up the OS jobs nightly.
 4) Making doubly sure #1 happens as quickly as possible (i.e. not having
 to queue behind #2 or #3).
   
It seems to me that the only way to accomplish this is to ask for the 
budget to buy a separate backup device.

Otherwise, you will have a choice of:

- Interrupting the full backup every ten minutes and then resuming it 
(which can't be done - and would probably be too slow even if it could)
- Interleaving the full and log backups through multiple concurrent 
jobs. Bacula can do that, but it will slow down the restore and cause 
tape-management issues. It will also likely be too slow. On a tape 
drive, it might also cause shoeshining issues or the like.
- Spooling. Which doesn't meet your requirements.
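
For reference, a minimal sketch of what that last option would look like,
assuming the standard Spool Data directive in a Job resource (job name
taken from this thread):

Job {
  Name = DatabaseArchives_crank-va-3
  JobDefs = DatabaseJob
  Spool Data = yes    # SD spools to disk, then despools to tape
}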

Incidentally, I see yet another possible issue. I am assuming that you 
are taking the full database snapshot from an LVM snapshot? The backup 
of the log files would be based on the database state at the time of the 
backup, so the full database and the log file backup might be 
inconsistent with each other.


-- 
Kevin Keane
Owner
The NetTech
Find the Uncommon: Expert Solutions for a Network You Never Have to Think About

Office: 866-642-7116
http://www.4nettech.com


Re: [Bacula-users] Bconsole prune/purge commands

2009-03-20 Thread Martin Simmons
 On Fri, 20 Mar 2009 09:42:36 +0100, Ferdinando Pasqualetti said:
 
 Does anybody know if there is a reason why the prune command, which is
 not dangerous and is also automatically triggered in some cases, asks for a
 confirmation before being executed, while the purge command, which
 overrides retention times, shows a caution message but is executed
 anyway?

The prune command removes records from the database, so it is slightly
dangerous (you can't restore those jobs directly afterwards).

Automatic pruning only happens when a job runs and it only prunes the client
being backed up, whereas the prune command can affect any client.
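
For illustration, a minimal bconsole sketch of the difference (client and
volume names are hypothetical):

* prune files client=myclient-fd   # honors retention, asks for confirmation
* purge volume=Vol0001             # overrides retention, only prints a caution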

__Martin



Re: [Bacula-users] Best way to backup simultaneously

2009-03-20 Thread Jason Dixon
On Fri, Mar 20, 2009 at 03:51:58AM -0700, Kevin Keane wrote:
 Jason Dixon wrote:
  On Thu, Mar 19, 2009 at 06:08:23PM -0700, Kevin Keane wrote:

  Jason Dixon wrote:
  
  I've tried that.  But since the scheduled OS backup jobs are already
  running, the client-initiated transaction log jobs are forced to wait.


  Then you probably still had a Maximum Concurrent Jobs = 1 setting somewhere.
  
 
  The only place I have Maximum Concurrent Jobs = 1 are for the OS
  backups.  The database and transaction log jobs all use Maximum
  Concurrent Jobs = 10.

 As long as they both go to the same storage, that one single Maximum 
 Concurrent Jobs will block the other backups.

They don't.  Previously, the OS backups and the log backups each had
their own pool on the same storage device (tape drive).  Recently, the
OS backups have used their own pool on a File device instead.  It has
made no difference.

I have one question here which should clarify a lot.  I've been unable
to find it anywhere in the documentation.  Can the Storage Daemon write
to devices using different pools at the same time?  Example:

client 1 -> job 1 -> bacula-sd -> pool 1 -> media
client 2 -> job 2 -> bacula-sd -> pool 2 -> media

If this can be done simultaneously, then I'm doing something wrong.  If
it can't, I just need to know this so I can focus on getting a 2nd
Storage Daemon running for the OS jobs to FileStorage.


-- 
Jason Dixon
OmniTI Computer Consulting, Inc.
jdi...@omniti.com
443.325.1357 x.241



[Bacula-users] Question about Priorities and Maximum Concurrent Jobs

2009-03-20 Thread John Lockard
Hi All,

I have a mix of disk and tape backups.  To disk I allow up to
20 jobs run concurrently.  On my tape library I have 3 tape
drives, so only allow a max of 3 jobs to run concurrently.

I run Full backups once a month, Differentials once a week
and incrementals most days of the week.  I would prefer to
give preference to a Full backup over a Diff or Incr and I'd
like to give preference to a Diff over an Incr.

So...

I set:
  Full backups to have a priority of 30
  Differential backups to have a priority of 40
  Incremental backups to have a priority of 50
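
A minimal sketch of one way to express that, assuming the Priority
override in Schedule Run directives (schedule times are illustrative):

Schedule {
  Name = MonthlyCycle
  Run = Level=Full Priority=30 1st sun at 23:05
  Run = Level=Differential Priority=40 2nd-5th sun at 23:05
  Run = Level=Incremental Priority=50 mon-sat at 23:05
}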

I figured that since I had concurrency set up with my Max
Concurrent Jobs setting, this would happen: if there
was a fight for a medium, with no other medium currently
free, a Full would have preference for the medium over a
Differential, which would have preference over an Incremental.

What I'm seeing is that if a Full is running on a certain
type of storage, only other Fulls will run on that storage.
If a full is running on one type of storage, other jobs
(Diffs and Incrs) will run on the other types of storage.
So, if I have a Full running to disk storage #1, then an Incr
will run to disk storage #2, but not #1.  For disk storage I
mostly understand this.

This really becomes a problem for tape storage.  I would like
to be able to run backups on the other 2 tape drives in my
library when a Full backup is running.  I have several large,
slow servers which take upwards of 36 hours to backup and
during this time I can't backup anything of a lower Priority
than that system which I'm currently backing up.

Do I have to entirely can (forget) the notion of job Priorities
except in the cases where I absolutely want a certain job to
have exclusive rights to a backup medium?

Thanks, in advance for all the help,
-John

-- 
We have Enough Youth, How About A Fountain Of Smart?
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



[Bacula-users] Rif: Re: Bconsole prune/purge commands

2009-03-20 Thread Ferdinando Pasqualetti
Martin Simmons mar...@lispworks.com wrote on 20/03/2009 11.59.10:

  On Fri, 20 Mar 2009 09:42:36 +0100, Ferdinando Pasqualetti said:
 
  Does anybody know if there is a reason why the prune command, which is
  not dangerous and is also automatically triggered in some cases, asks for a
  confirmation before being executed, while the purge command, which
  overrides retention times, shows a caution message but is executed
  anyway?
 
 The prune command removes records from the database, so it is slightly
 dangerous (you can't restore those jobs directly afterwards).
 

OK, but this only happens if all retention periods have expired - OK to a
confirmation anyway. But if I use the PURGE command I remove records even
if the periods did not expire. So the warning message is fine, but if there
is no confirmation, the warning is just telling me I could be in trouble
while I cannot change my mind.

 Automatic pruning only happens when a job runs and it only prunes the client
 being backed up, whereas the prune command can affect any client.
 
 __Martin
 
 


Re: [Bacula-users] Best way to backup simultaneously

2009-03-20 Thread Jason Dixon
On Fri, Mar 20, 2009 at 06:56:38AM -0700, Kevin Keane wrote:
 Jason Dixon wrote:
 
  They don't.  Previously, the OS backups and the log backups each had
  their own pool on the same storage device (tape drive).  Recently, the
  OS backups have used their own pool on a File device instead.  It has
  made no difference.

 Also keep in mind that when you don't specify a Maximum Concurrent Job 
 somewhere, it may have defaulted to 1.
 
 You can actually see whether that is the problem. In bconsole, when you 
 do a "stat dir" while the log job is waiting for the storage device, it 
 will tell you why it is waiting. You can also do a "stat storage" to 
 find out more details.

Here is an example from yesterday.  Job 11174 is the transaction logs.
The others are OS jobs I ran manually from bconsole.

Running Jobs:
 JobId Level   Name   Status
==
 11172 FullUnix_crank-va-3.2009-03-19_17.36.24 is running
 11173 FullUnix_crank-va-4.2009-03-19_17.39.25 is running
 11174 FullDatabaseArchives_crank-va-3.2009-03-19_17.40.27 is
waiting for higher priority jobs to finish


  I have one question here which should clarify a lot.  I've been unable
  to find it anywhere in the documentation.  Can the Storage Daemon write
  to devices using different pools at the same time?  Example:
 
  client 1 -> job 1 -> bacula-sd -> pool 1 -> media
  client 2 -> job 2 -> bacula-sd -> pool 2 -> media
 
  If this can be done simultaneously, then I'm doing something wrong.  If
  it can't, I just need to know this so I can focus on getting a 2nd
  Storage Daemon running for the OS jobs to FileStorage.

 It's actually a bit different. Pool and storage resources are managed by 
 the director, not the SD.
 
 client1 -> job1 -> pool1/storage resource1 -> bacula-sd -> media1
 client2 -> job2 -> pool2/storage resource2 -> bacula-sd -> media2
 
 That should work. I have had two concurrent backups going to two 
 different media at the same time.
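
A minimal sketch of that layout, assuming two Storage resources in
bacula-dir.conf pointing at two Device resources on the same SD (names,
address, and password are illustrative):

Storage {
  Name = FileStorage
  Address = backup-host        # hypothetical SD host
  Password = "sd-password"
  Device = FileStorage         # matches a Device resource in bacula-sd.conf
  Media Type = File
  Maximum Concurrent Jobs = 10
}

Storage {
  Name = SDX-700C
  Address = backup-host
  Password = "sd-password"
  Device = SDX-700C
  Media Type = Tape
}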

I'll try it again today with all of the Maximum Concurrent Jobs
set to 10.

-- 
Jason Dixon
OmniTI Computer Consulting, Inc.
jdi...@omniti.com
443.325.1357 x.241



Re: [Bacula-users] Better way to garbage collect postgresql database

2009-03-20 Thread Hemant Shah



--- On Thu, 3/19/09, Kevin Keane subscript...@kkeane.com wrote:

 From: Kevin Keane subscript...@kkeane.com
 Subject: Re: [Bacula-users] Better way to garbage collect postgresql database
 To: 
 Cc: baculausers bacula-users@lists.sourceforge.net
 Date: Thursday, March 19, 2009, 8:30 PM
 Hemant Shah wrote:
  Folks,
 
  This is a database question, but I figured some of the
 bacula users may have come across this problem so I am
 posting it here.
 
 
  Every monday I run following commands to check and
 garbage collect bacula database:
 
  dbcheck command
  vacuumdb -q -d bacula -z -f
  reindexdb
 
  Usually I purge one or two backup volumes and the
 above commands run in less than 20 minutes. 
 
  Before my monthly Full backup I delete large amount of
 data from the database as I delete one month worth of Full
 and Incremental backups. When I run the above commands after
 the Full backup, the vacummdb command take 12 hours to run.
 Is there a faster/better way of doing it?

 It has been a long time since I administered a postgres DB,
 but if 
 memory serves me right you might be able to drop some
 indexes, then do 
 the vacuuming, and then recreate them. Also, I believe you
 can vacuum 
 individual tables rather than the database as a whole.
 
 The lion's share of the vacuuming would happen in the
 files table, so 
 that's probably the one you'd want to first look at
 in terms of 
 dropping/recreating indexes, and also in terms of vacuuming
 separately.
 
 Also there are several levels of vacuuming. With this type
 of table, you 
 would probably not want to get too aggressive. What you
 don't want to do 
 is eliminate all the empty space in the database, only to
 later need the 
 same empty space again. You do want to vacuum simply to
 consolidate 
 empty space into larger chunks. Basically, the same idea as
 disk 
 defragmentation. If memory serves me right, this
 milder vacuuming is 
 the default.
 

Kevin,

  This is exactly what I want to do. I come from the DB2 world, where we
would reorg the table. I run the milder version of vacuum on other days, but
I run an extensive vacuum after the full backup.
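
A minimal sketch of that per-table approach, assuming the index name from a
stock Bacula PostgreSQL schema (verify with \di in psql before dropping
anything):

psql -d bacula <<'EOF'
-- assumed index name; check that it exists in your schema first
DROP INDEX file_jpfid_idx;
VACUUM ANALYZE file;       -- per-table, non-FULL vacuum
CREATE INDEX file_jpfid_idx ON file (jobid, pathid, filenameid);
EOF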



 Sorry I have to speak in concepts rather than concrete
 here, but it just 
 has been too long.
  My database is about 9GB.
 
  If I backup database using pgdump and then restore it,
 will it do the same thing as vacuumdb and reindexdb
 commands?

 Pretty close, but keep in mind that you would have
 considerable database 
 downtime. You can do this, too, on a per-table basis.
 

If I can reduce the time it takes to do a full vacuum then I would like to do
it before my full backup. This database is for bacula only, so unless I am
doing backups it can be down for a few hours.

If dump/load takes less than four hours then I can do it before the full
backups start.




Hemant Shah
E-mail: hj...@yahoo.com



  



Re: [Bacula-users] Better way to garbage collect postgresql database

2009-03-20 Thread Hemant Shah




--- On Fri, 3/20/09, Jesper Krogh jes...@krogh.cc wrote:

 From: Jesper Krogh jes...@krogh.cc
 Subject: Re: [Bacula-users] Better way to garbage collect postgresql database
 To: hj...@yahoo.com
 Cc: baculausers bacula-users@lists.sourceforge.net
 Date: Friday, March 20, 2009, 12:30 AM
 Hemant Shah wrote:
  This is a database question, but I figured some of the
 bacula users may have 
  come across this problem so I am posting it here.
  
  Every monday I run following commands to check and
 garbage collect bacula database:
  
  dbcheck command
  vacuumdb -q -d bacula -z -f
 
 There is absolutely no reason to VACUUM FULL
 unless your data size is actually shrinking over time
 (longer periods). A normal VACUUM will make it available for
 the next run.. you are most likely running autovacuum
 anyway.

  Yes, I do run autovacuum, but I have just started using postgresql and I am
not sure how efficient it is in space usage. I was afraid that the data would
be fragmented if it re-uses the free space.

 
  reindexdb
 
 Might make sense, but weekly?..  There is AFAIK a small
 amount of index-bloat collecting up over time in PG. But in
 general just yearly or monthly
 should really be sufficient.

 Weekly may be overkill, but it is part of a script that runs through cron.


 
  Usually I purge one or two backup volumes and the
 above commands run in less than 20 minutes. 
  Before my monthly Full backup I delete large amount of
 data from the database as I 
  delete one month worth of Full and Incremental
 backups. When I run the above
  commands after the Full backup, the vacummdb command
 take 12 hours
  to run. Is there a faster/better way of doing it?
 
 No, not really. VACUUM FULL is telling PG to reorder the data
 on-disk and free up space for the OS; that is a hard
 task. But it is also not needed, since you are going
 to use the space within the next week/month anyway.. so an ordinary
 VACUUM is sufficient.
 
  My database is about 9GB.
  
  If I backup database using pgdump and then restore it,
 will it do the same
  thing as vacuumdb and reindexdb commands?
 
 Basically yes.



Hemant Shah
E-mail: hj...@yahoo.com


  



Re: [Bacula-users] Issue with Restoring a Full which Spanned Tapes.

2009-03-20 Thread Doug Forster
Dang, it looks as though this morning this isn't the case after all. I
have split up the trouble server and am now checking for other issues. I am
also in the process of recreating the database in ASCII format so that we
can rule that out as an issue, even though there are no logs in postgres that
point to it.

-Original Message-
From: John Drescher [mailto:dresche...@gmail.com] 
Sent: Thursday, March 19, 2009 7:20 PM
To: Doug Forster; bacula-users
Subject: Re: [Bacula-users] Issue with Restoring a Full which Spanned Tapes.

On Thu, Mar 19, 2009 at 9:03 PM, Doug Forster dfors...@part.net wrote:
 I have gone into the database and can see that the database is empty for the
 job in question. I think that there is an issue with the insertion of over a
 million entries all at once that is giving bacula a hard time. I have found
 a supporting post here:
 http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/bacula-25/bacula-prunes-files-too-early-95935/
 but they really didn't get a resolution either. Is this addressed in any of
 the change logs that someone can remember? If so, point me to it; otherwise
 I will be searching.


To me this could be a bug in the batch insert code. A compile time
switch enables this. I am not sure if there is a way to tell from an
executable if it is enabled or not.

John




Re: [Bacula-users] Better way to garbage collect postgresql database

2009-03-20 Thread Kevin Keane
Hemant Shah wrote:

 --- On Thu, 3/19/09, Kevin Keane subscript...@kkeane.com wrote:

   
 From: Kevin Keane subscript...@kkeane.com
 Subject: Re: [Bacula-users] Better way to garbage collect postgresql database
 To: 
 Cc: baculausers bacula-users@lists.sourceforge.net
 Date: Thursday, March 19, 2009, 8:30 PM
 Hemant Shah wrote:
 
 Folks,

 This is a database question, but I figured some of the
   
 bacula users may have come across this problem so I am
 posting it here.
 
 Every monday I run following commands to check and
   
 garbage collect bacula database:
 
 dbcheck command
 vacuumdb -q -d bacula -z -f
 reindexdb

 Usually I purge one or two backup volumes and the
   
 above commands run in less than 20 minutes. 
 
 Before my monthly Full backup I delete large amount of
   
 data from the database as I delete one month worth of Full
 and Incremental backups. When I run the above commands after
 the Full backup, the vacummdb command take 12 hours to run.
 Is there a faster/better way of doing it?
 
   
   
 It has been a long time since I administered a postgres DB,
 but if 
 memory serves me right you might be able to drop some
 indexes, then do 
 the vacuuming, and then recreate them. Also, I believe you
 can vacuum 
 individual tables rather than the database as a whole.

 The lion's share of the vacuuming would happen in the
 files table, so 
 that's probably the one you'd want to first look at
 in terms of 
 dropping/recreating indexes, and also in terms of vacuuming
 separately.

 Also there are several levels of vacuuming. With this type
 of table, you 
 would probably not want to get too aggressive. What you
 don't want to do 
 is eliminate all the empty space in the database, only to
 later need the 
 same empty space again. You do want to vacuum simply to
 consolidate 
 empty space into larger chunks. Basically, the same idea as
 disk 
 defragmentation. If memory serves me right, this
 milder vacuuming is 
 the default.

 

 Kevin,

   This is exactly what I want to do. I come from DB2 world and we would reorg 
 the table. I run milder version of vacuum on other days, but I run extensive 
 vacuum after full backup. 
   
I think that's excessive. I used to run a major production database, 
processing hundreds of online purchases per hour, in Postgres. And we 
only did the regular vacuum once a week, if memory serves me right. Full 
vacuums only in very rare circumstances - pretty much never.

Not saying that this is always true; our database at the time pretty 
much only ever grew, we almost never deleted any records from it. The 
bacula database obviously does a lot more deleting.

In fact, a full vacuum can be harmful to performance, because it 
eliminates the free space that Postgres can work with. In a scenario 
such as yours, it may well be better if the database stays at its 
biggest size at all times, and you let Postgres manage the free space.

-- 
Kevin Keane
Owner
The NetTech
Find the Uncommon: Expert Solutions for a Network You Never Have to Think About

Office: 866-642-7116
http://www.4nettech.com



Re: [Bacula-users] [Bacula-devel] Bacula BETA 2.5.42-b2 released to Source Forge

2009-03-20 Thread Kern Sibbald
It looks like you are trying to build Bacula using a Debian packaging that was 
designed for 2.4.  Version 2.5.x is significantly different and will require 
a number of modifications.

Regards,

Kern


On Thursday 19 March 2009 13:51:54 Thomas Mueller wrote:
 hi

 as one asked for ubuntu packages, i tried to recompile it for Ubuntu
 intrepid (8.10). it fails:


 make[1]: Leaving directory `/tmp/buildd/bacula-2.5.42~beta2mit1/debian/
 tmp-build-sqlite3'
 /usr/bin/make -C /tmp/buildd/bacula-2.5.42~beta2mit1/debian/tmp-build-
 sqlite3/src/tools
 make[1]: Entering directory `/tmp/buildd/bacula-2.5.42~beta2mit1/debian/
 tmp-build-sqlite3/src/tools'
 /tmp/buildd/bacula-2.5.42~beta2mit1/debian/tmp-build-sqlite3/libtool --
 silent --tag=CXX --mode=link /usr/bin/x86_64-linux-gnu-g++ -Wl,-Bsymbolic-
 functions -L../lib -o bsmtp bsmtp.o -lbac -lm  -lpthread -ldl   -lssl -
 lcrypto
 eval: 1: libtool_args+=: not found
 eval: 1: compile_command+=: not found
 eval: 1: finalize_command+=: not found
 eval: 1: libtool_args+=: not found
 eval: 1: libtool_args+=: not found
 eval: 1: compile_command+=: not found
 eval: 1: finalize_command+=: not found
 eval: 1: libtool_args+=: not found
 eval: 1: compile_command+=: not found
 eval: 1: finalize_command+=: not found
 eval: 1: libtool_args+=: not found
 eval: 1: compile_command+=: not found
 eval: 1: finalize_command+=: not found
 eval: 1: libtool_args+=: not found
 eval: 1: libtool_args+=: not found
 eval: 1: libtool_args+=: not found
 eval: 1: libtool_args+=: not found
 eval: 1: libtool_args+=: not found
 eval: 1: libtool_args+=: not found
 /usr/lib/gcc/x86_64-linux-gnu/4.3.2/../../../../lib/crt1.o: In function
 `_start':
 (.text+0x20): undefined reference to `main'
 collect2: ld returned 1 exit status
 make[1]: *** [bsmtp] Error 1
 make[1]: Leaving directory `/tmp/buildd/bacula-2.5.42~beta2mit1/debian/
 tmp-build-sqlite3/src/tools'
 make: *** [build-stamp-sqlite3] Error 2
 rm configure-stamp-sqlite3

 - Thomas




Re: [Bacula-users] Question about Priorities and Maximum Concurrent Jobs

2009-03-20 Thread John Lockard
I stand somewhat corrected.  I was wrong in stating
that the priority of a job on a certain medium blocks
only jobs on that medium.  It actually blocks all other
lower priority jobs from running, no matter whether the
lower priority job is on the same medium or not.

-John

On Fri, Mar 20, 2009 at 10:04:48AM -0400, John Lockard wrote:
 Hi All,
 
 I have a mix of disk and tape backups.  To disk I allow up to
 20 jobs run concurrently.  On my tape library I have 3 tape
 drives, so only allow a max of 3 jobs to run concurrently.
 
 I run Full backups once a month, Differentials once a week
 and incrementals most days of the week.  I would prefer to
 give preference to a Full backup over a Diff or Incr and I'd
 like to give preference to a Diff over an Incr.
 
 So...
 
 I set:
   Full backups to have a priority of 30
   Differential backups to have a priority of 40
   Incremental backups to have a priority of 50
 
 I figured that since I had concurrency setup with my Max
 Concurrent Jobs setting that this would happen...  If there
 was a fight for a medium, with no other medium currently
 free, that a Full would have preference to the medium over a
 Differential which would have preference over an Incremental.
 
 What I'm seeing is that if a Full is running on a certain
 type of storage, only other Fulls will run on that storage.
 If a full is running on one type of storage, other jobs
 (Diffs and Incrs) will run on the other types of storage.
 So, if I have a Full running to disk storage #1, then an Incr
 will run to disk storage #2, but not #1.  For disk storage I
 mostly understand this.
 
 This really becomes a problem for tape storage.  I would like
 to be able to run backups on the other 2 tape drives in my
 library when a Full backup is running.  I have several large,
 slow servers which take upwards of 36 hours to backup and
 during this time I can't backup anything of a lower Priority
 than that system which I'm currently backing up.
 
 Do I have to entirely can (forget) the notion of job Priorities
 except in the cases where I absolutely want a certain job to
 have exclusive rights to a backup medium?
 
 Thanks, in advance for all the help,
 -John
 
 -- 
 We have Enough Youth, How About A Fountain Of Smart?
 ---
  John M. Lockard |  U of Michigan - School of Information
  Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
   jlock...@umich.edu |Ann Arbor, MI  48109-2112
  www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
 ---
 
 

-- 
Time and time and time again, you wake up screaming
 and you wake up dead. - RevCo
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] Best way to backup simultaneously

2009-03-20 Thread Kevin Keane
Jason Dixon wrote:
 On Fri, Mar 20, 2009 at 06:56:38AM -0700, Kevin Keane wrote:
   
 Jason Dixon wrote:
 
 They don't.  Previously, the OS backups and the log backups each had
 their own pool on the same storage device (tape drive).  Recently, the
 OS backups have used their own pool on a File device instead.  It has
 made no difference.
   
   
 Also keep in mind that when you don't specify a Maximum Concurrent Job 
 somewhere, it may have defaulted to 1.

 You can actually see whether that is the problem. In bconsole, when you 
 do a stat dir while the log job is waiting for the storage device, it 
 will tell you why it is waiting. You can also do a stat storage to 
 find out more details.
 

 Here is an example from yesterday.  Job 11174 is the transaction logs.
 The others are OS jobs I ran manually from bconsole.

 Running Jobs:
  JobId Level   Name   Status
 ==
  11172 FullUnix_crank-va-3.2009-03-19_17.36.24 is running
  11173 FullUnix_crank-va-4.2009-03-19_17.39.25 is running
  11174 FullDatabaseArchives_crank-va-3.2009-03-19_17.40.27 is
 waiting for higher priority jobs to finish
 
   
Concurrency is actually working fine for you. You see that jobs 11172 
and 11173 are both backing up at the same time.

Job 11174 would wait no matter what, because you gave it a lower 
priority than the other jobs. Give it the same priority as the others, 
and it may participate in the concurrency (unless there are other things 
that prevent it).

-- 
Kevin Keane
Owner
The NetTech
Find the Uncommon: Expert Solutions for a Network You Never Have to Think About

Office: 866-642-7116
http://www.4nettech.com



Re: [Bacula-users] Bacula BETA 2.5.42-b2 released to Source Forge

2009-03-20 Thread Thomas Mueller
On Fri, 20 Mar 2009 18:18:34 +0100, Kern Sibbald wrote:

 It looks like you are trying to build Bacula using a Debian packaging
 that was designed for 2.4.  Version 2.5.x is significantly different and
 will require a number of modifications.

I made modifications for 2.5. The package builds without problems on
debian etch and lenny (the ones here:
http://chaschperli.ch/debian/lenny-bacula). But not on ubuntu.

- Thomas




Re: [Bacula-users] Question about Priorities and Maximum Concurrent Jobs

2009-03-20 Thread John Drescher
 I stand somewhat corrected.  I was wrong in stating
 that priority of a job on a certain media blocked
 only jobs on that media.  It actually blocks all other
 lower priority jobs from running no matter whether the
 lower priority job is on the same media or not.

I find this makes priorities not that useful for me.

Have you thought of using concurrency and a small (2 to 5 GB) spool
file and scrapping the priorities? I am unsure why you only want 1 job per
tape drive. Are your drives really so slow that 1 client backup will
be faster than the tape can handle?
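
A minimal sketch of such a spool setup, assuming the standard spooling
directives (device names and paths are illustrative):

# bacula-sd.conf: Device resource for the tape drive
Device {
  Name = Drive-1
  Media Type = Tape
  Archive Device = /dev/nst0
  Spool Directory = /var/spool/bacula   # must be able to hold the spool
  Maximum Spool Size = 5G
}

# bacula-dir.conf: enable spooling per job
Job {
  Name = SomeBackup         # hypothetical job
  JobDefs = DefaultJob
  Spool Data = yes
}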

John



Re: [Bacula-users] Best way to backup simultaneously

2009-03-20 Thread Jason Dixon
On Fri, Mar 20, 2009 at 10:36:16AM -0700, Kevin Keane wrote:
 Jason Dixon wrote:
 
  Here is an example from yesterday.  Job 11174 is the transaction logs.
  The others are OS jobs I ran manually from bconsole.
 
  Running Jobs:
   JobId Level   Name   Status
  ==
   11172 FullUnix_crank-va-3.2009-03-19_17.36.24 is running
   11173 FullUnix_crank-va-4.2009-03-19_17.39.25 is running
   11174 FullDatabaseArchives_crank-va-3.2009-03-19_17.40.27 is
  waiting for higher priority jobs to finish
  

 Concurrency is actually working fine for you. You see that jobs 11172 
 and 11173 are both backing up at the same time.
 
 Job 11174 would wait no matter what, because you gave it a lower 
 priority than the other jobs. Give it the same priority as the others, 
 and it may participate in the concurrency (unless there are other things 
 that prevent it)

As you can see, the Database jobs clearly have a higher priority.

JobDefs {
  Name = DefaultJob
  Type = Backup
  Level = Incremental
  FileSet = Full Set
  Schedule = None
  Storage = FileStorage
  Messages = Standard
  Pool = FileStorage
  Maximum Concurrent Jobs = 10
  Job Retention = 14 days
  Priority = 20
}

JobDefs {
  Name = DatabaseJob
  Type = Backup
  Level = Incremental
  Schedule = None
  Storage = SDX-700C
  Messages = Standard
  Pool = Database
  Maximum Concurrent Jobs = 10
  Job Retention = 14 days
  Priority = 10
}

Job {
  Name = DatabaseArchives_crank-va-3
  JobDefs = DatabaseJob
  Level = Full
  Client = crank-va-3
  FileSet = FileSet_PgWal
  ClientRunAfterJob = /usr/local/bin/pg_cleanup_wallogs.sh
  Messages = Quiet
}

Job {
  Name = Unix_crank-va-3
  JobDefs = DefaultJob
  Client = crank-va-3
  FileSet = FileSet_crank-va-3
}

Job {
  Name = Unix_crank-va-4
  JobDefs = DefaultJob
  Client = crank-va-4
  FileSet = FileSet_crank-va-4
}


-- 
Jason Dixon
OmniTI Computer Consulting, Inc.
jdi...@omniti.com
443.325.1357 x.241



[Bacula-users] spool attributes in Schedule Resource?

2009-03-20 Thread Mark Nienberg
I've decided to do some tests with Spool Attributes to see if it speeds up my
full backups to tape.  I noticed that the documentation says I can set Spool
Attributes in the Job resource.  It does not mention that I can set Spool
Attributes in the Schedule Resource, although it does have Spool Data in the
Schedule Resource.  I think the Schedule would make more sense for my setup.
Will it work there?
-- 
Mark Nienberg
Sent from an invalid address. Please reply to the group.




Re: [Bacula-users] Question about Priorities and Maximum Concurrent Jobs

2009-03-20 Thread John Lockard
The minimum setting I have on Max Concurrent Jobs is on the
Tape Library, and that's set to 3.  It appears that priority
trumps all, unless a job's priority is the same or better (a
lower number).

So, if I have one job with a priority of, say, 10, then any
job queued on any other tape drive or virtual library will
sit and wait until that higher-priority job finishes before
it begins.

This makes priority mostly useless for me as well.  I guess
it would take care of situations where I'd want one job to
finish before a secondary or tertiary job starts, but then I
run the risk of another job postponing the 2nd and 3rd jobs,
which wouldn't be my intention.

-John

On Fri, Mar 20, 2009 at 02:16:03PM -0400, John Drescher wrote:
 I find this makes priorities not that useful for me.
 
 Have you thought of using concurrency and a small (2 to 5 GB) spool
 file, and scrapping the priorities?  I am unsure why you only want 1
 job per tape drive.  Are your tape drives really so slow that a single
 client backup is faster than the tape can handle?
 
 John
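
For anyone trying that suggestion, a minimal sketch of data spooling with
a small spool file (the device name and path here are illustrative):

Device {
  Name = "TapeDrive-0"
  Spool Directory = /var/bacula/spool
  Maximum Spool Size = 5GB      # the small 2-5 GB cap suggested above
  ...
}

Job {
  ...
  Spool Data = yes              # spool to disk first, then despool to tape
}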
 
 

-- 
ACTION: None of the violence you are about to see was simulated.
 People were actually injured for your entertainment.
 - SciFi program intro
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] [Bacula-devel] Bacula BETA 2.5.42-b2 released to Source Forge

2009-03-20 Thread Kern Sibbald
On Friday 20 March 2009 18:56:55 Thomas Mueller wrote:
 On Fri, 20 Mar 2009 18:18:34 +0100, Kern Sibbald wrote:
  It looks like you are trying to build Bacula using a Debian packaging
  that was designed for 2.4.  Version 2.5.x is significantly different and
  will require a number of modifications.

 I made modifications for 2.5.  The package builds without problems on
 Debian etch and lenny (the ones here:
 http://chaschperli.ch/debian/lenny-bacula), but not on Ubuntu.


I don't know.  I am building Bacula every day on Hardy, and I have one 
Intrepid system where I believe that I have also built it (not 100% sure).

Regards,

Kern



Re: [Bacula-users] Best way to backup simultaneously

2009-03-20 Thread Jason Dixon
On Fri, Mar 20, 2009 at 02:37:06PM -0400, Jason Dixon wrote:
 On Fri, Mar 20, 2009 at 10:36:16AM -0700, Kevin Keane wrote:
  Jason Dixon wrote:
  
   Here is an example from yesterday.  Job 11174 is the transaction logs.
   The others are OS jobs I ran manually from bconsole.
  
   Running Jobs:
JobId Level   Name   Status
   ==
    11172 Full    Unix_crank-va-3.2009-03-19_17.36.24 is running
    11173 Full    Unix_crank-va-4.2009-03-19_17.39.25 is running
    11174 Full    DatabaseArchives_crank-va-3.2009-03-19_17.40.27 is
   waiting for higher priority jobs to finish
   
 
  Concurrency is actually working fine for you. You see that jobs 11172 
  and 11173 are both backing up at the same time.
  
  Job 11174 would wait no matter what, because you gave it a lower 
  priority than the other jobs. Give it the same priority as the others, 
  and it may participate in the concurrency (unless there are other things 
  that prevent it)

Just to be certain, I kicked off a few OS jobs just prior to the
transaction log backup.  I also changed the Storage directive to use
Maximum Concurrent Jobs = 1 for FileStorage.  This forces only one OS
job at a time.
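
Concretely, the change was along these lines (only the one directive is
the relevant part):

Storage {
  Name = FileStorage
  Maximum Concurrent Jobs = 1   # queue the OS jobs one at a time
  ...
}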

I would expect the DatabaseArchives_crank-va-3 job (11242) to run before
the queued OS jobs (11240, 11241) but that isn't the case.  I don't know
why this reports the other jobs as a higher priority.  And remember that
these are using *different* storage devices.  The OS jobs use
FileStorage, the transaction logs backup to tape (SDX-700C).


Running Jobs:
 JobId Level   Name   Status
==
 11239 Increme  Unix_crank-va-4.2009-03-20_15.39.55 is running
 11240 Increme  Unix_puffer-va-3.2009-03-20_15.39.56 is waiting on max
Storage jobs
 11241 Increme  Unix_puffer-va-4.2009-03-20_15.39.57 is waiting on max
Storage jobs
 11242 Full     DatabaseArchives_crank-va-3.2009-03-20_15.40.59 is
waiting for higher priority jobs to finish



Thanks,

-- 
Jason Dixon
OmniTI Computer Consulting, Inc.
jdi...@omniti.com
443.325.1357 x.241



Re: [Bacula-users] An idea for a feature (not a request yet :) - weighted concurrency

2009-03-20 Thread Arno Lehmann
Hi,

20.03.2009 11:24, James Harper wrote:
 I have half an idea for a feature request but it's not well defined
 yet...
 
 Basically, I have a bunch of clients to back up, some are on gigabit
 network and some are stuck on 100mbit. They are being backed up to a
 disk that has throughput of around 20-30mbytes/second.
 
 I am allowing 2 jobs to run at once, which works well but I think it
 could work better. I was thinking of something along the lines of
 assigning a 'cost' to each client, and have the scheduler make sure that
 the sum of all current clients is under some threshold, something like:
 
 Client1 - gigabit network and fast disks - cost = 100
 Client2 - gigabit network and slow disks - cost = 50
 Client3 - 100mbit network - cost = 20
 Client4 - 100mbit network - cost = 20
 Client5 - gigabit network and fast disks - cost = 100
 Client6 - gigabit network and fast disks - cost = 100
 
 Client1,5, and 6 are pretty much capable of saturating the backup medium
 on their own, Client2,3, and 4 are not. I could then set a maximum
 concurrent job 'cost' of 150, and bacula would run as many jobs as it
 could concurrently as long as the total cost remained <= that
 figure. So...

Sounds interesting.

If you add the clause "The maximum number of jobs run concurrently is 
still limited by the existing configuration settings" it might be 
implemented sooner.

In that case, the DIR would only have to determine cost and choose 
which jobs to start one level below the current concurrency management.

I.e., do everything as today, but when it comes to actually starting 
one more job, consider the cost to choose the job, or not start one at 
all. Might be easier to implement - and it keeps the configuration 
compatible, assuming default MaxCost is unlimited.

Of course, a setting for default cost would be nice, too.
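
Roughly, that extra check below the existing limits might look like this 
(a sketch only; Job, cost and max_cost are illustrative names, not actual 
Bacula internals):

#include <vector>

struct Job { int cost; };   /* per-client "cost", e.g. 100, 50 or 20 */

/* Start one more job only while the summed cost of the running jobs
 * plus the candidate stays within the configured maximum. */
static bool can_start(const std::vector<Job> &running, const Job &next,
                      int max_cost)
{
   int in_use = 0;
   for (size_t i = 0; i < running.size(); i++) {
      in_use += running[i].cost;
   }
   return in_use + next.cost <= max_cost;
}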

...
 Comments?

Sounds good... massage it into a feature request and I'm sure people 
will be interested - perhaps even developers :-)

Arno

-- 
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück
www.its-lehmann.de



Re: [Bacula-users] Best way to backup simultaneously

2009-03-20 Thread Jason Dixon
On Fri, Mar 20, 2009 at 03:46:49PM -0400, Jason Dixon wrote:
 
 Just to be certain, I kicked off a few OS jobs just prior to the
 transaction log backup.  I also changed the Storage directive to use
 Maximum Concurrent Jobs = 1 for FileStorage.  This forces only one OS
 job at a time.
 
 I would expect the DatabaseArchives_crank-va-3 job (11242) to run before
 the queued OS jobs (11240, 11241) but that isn't the case.  I don't know
 why this reports the other jobs as a higher priority.  And remember that
 these are using *different* storage devices.  The OS jobs use
 FileStorage, the transaction logs backup to tape (SDX-700C).
 
 
 Running Jobs:
  JobId Level   Name   Status
 ==
  11239 Increme  Unix_crank-va-4.2009-03-20_15.39.55 is running
  11240 Increme  Unix_puffer-va-3.2009-03-20_15.39.56 is waiting on max
 Storage jobs
  11241 Increme  Unix_puffer-va-4.2009-03-20_15.39.57 is waiting on max
 Storage jobs
  11242 Full     DatabaseArchives_crank-va-3.2009-03-20_15.40.59 is
 waiting for higher priority jobs to finish
 

Ok, it looks like these ran correctly after all.  I'm a bit perplexed
why the Director reports 11242 as being lower priority, but at least it
worked as designed.  Extracted from "llist jobs":

      jobid: 11,239
        job: Unix_crank-va-4.2009-03-20_15.39.55
  schedtime: 2009-03-20 15:39:45
  starttime: 2009-03-20 15:40:02
    endtime: 2009-03-20 15:40:20
realendtime: 2009-03-20 15:40:20

      jobid: 11,240
        job: Unix_puffer-va-3.2009-03-20_15.39.56
  schedtime: 2009-03-20 15:39:48
  starttime: 2009-03-20 15:40:28
    endtime: 2009-03-20 15:40:39
realendtime: 2009-03-20 15:40:39

      jobid: 11,241
        job: Unix_puffer-va-4.2009-03-20_15.39.57
  schedtime: 2009-03-20 15:39:53
  starttime: 2009-03-20 15:40:40
    endtime: 2009-03-20 15:40:51
realendtime: 2009-03-20 15:40:51

      jobid: 11,242
        job: DatabaseArchives_crank-va-3.2009-03-20_15.40.59
  schedtime: 2009-03-20 15:40:02
  starttime: 2009-03-20 15:40:21
    endtime: 2009-03-20 15:40:28
realendtime: 2009-03-20 15:40:28


-- 
Jason Dixon
OmniTI Computer Consulting, Inc.
jdi...@omniti.com
443.325.1357 x.241



Re: [Bacula-users] Best way to backup simultaneously

2009-03-20 Thread Jason Dixon
On Fri, Mar 20, 2009 at 04:54:01PM -0400, John Lockard wrote:
 On Fri, Mar 20, 2009 at 04:11:55PM -0400, Jason Dixon wrote:
   
   Running Jobs:
JobId Level   Name   Status
   ==
11239 Increme  Unix_crank-va-4.2009-03-20_15.39.55 is running
11240 Increme  Unix_puffer-va-3.2009-03-20_15.39.56 is waiting on max
   Storage jobs
11241 Increme  Unix_puffer-va-4.2009-03-20_15.39.57 is waiting on max
   Storage jobs
 11242 Full     DatabaseArchives_crank-va-3.2009-03-20_15.40.59 is
   waiting for higher priority jobs to finish
   
  
  Ok, it looks like these ran correctly after all.  I'm a bit perplexed
  why the Director reports 11242 as being lower priority, but at least it
  worked as designed.  Extracted from "llist jobs":
 
 From the run-times, the job order was 11239, 11242, 11240, 11241.
 This would make sense; it just listed 11242 last because it was waiting
 for 11239 to finish, thus the "waiting for higher priority jobs"
 message.

That's a misleading message.  Job 11239 had a Priority of 20.  Job 11242
had a Priority of 10.  I think the phrase "waiting for running jobs to
finish" would be more appropriate.

Thanks,

-- 
Jason Dixon
OmniTI Computer Consulting, Inc.
jdi...@omniti.com
443.325.1357 x.241



Re: [Bacula-users] Best way to backup simultaneously

2009-03-20 Thread John Lockard
On Fri, Mar 20, 2009 at 04:11:55PM -0400, Jason Dixon wrote:
 On Fri, Mar 20, 2009 at 03:46:49PM -0400, Jason Dixon wrote:
  
  Just to be certain, I kicked off a few OS jobs just prior to the
  transaction log backup.  I also changed the Storage directive to use
  Maximum Concurrent Jobs = 1 for FileStorage.  This forces only one OS
  job at a time.
  
  I would expect the DatabaseArchives_crank-va-3 job (11242) to run before
  the queued OS jobs (11240, 11241) but that isn't the case.  I don't know
  why this reports the other jobs as a higher priority.  And remember that
  these are using *different* storage devices.  The OS jobs use
  FileStorage, the transaction logs backup to tape (SDX-700C).
  
  
  Running Jobs:
   JobId Level   Name   Status
  ==
   11239 Increme  Unix_crank-va-4.2009-03-20_15.39.55 is running
   11240 Increme  Unix_puffer-va-3.2009-03-20_15.39.56 is waiting on max
  Storage jobs
   11241 Increme  Unix_puffer-va-4.2009-03-20_15.39.57 is waiting on max
  Storage jobs
   11242 Full     DatabaseArchives_crank-va-3.2009-03-20_15.40.59 is
  waiting for higher priority jobs to finish
  
 
 Ok, it looks like these ran correctly after all.  I'm a bit perplexed
 why the Director reports 11242 as being lower priority, but at least it
 worked as designed.  Extracted from "llist jobs":

From the run-times, the job order was 11239, 11242, 11240, 11241.
This would make sense; it just listed 11242 last because it was waiting
for 11239 to finish, thus the "waiting for higher priority jobs"
message.

 
       jobid: 11,239
         job: Unix_crank-va-4.2009-03-20_15.39.55
   schedtime: 2009-03-20 15:39:45
   starttime: 2009-03-20 15:40:02
     endtime: 2009-03-20 15:40:20
 realendtime: 2009-03-20 15:40:20

       jobid: 11,240
         job: Unix_puffer-va-3.2009-03-20_15.39.56
   schedtime: 2009-03-20 15:39:48
   starttime: 2009-03-20 15:40:28
     endtime: 2009-03-20 15:40:39
 realendtime: 2009-03-20 15:40:39

       jobid: 11,241
         job: Unix_puffer-va-4.2009-03-20_15.39.57
   schedtime: 2009-03-20 15:39:53
   starttime: 2009-03-20 15:40:40
     endtime: 2009-03-20 15:40:51
 realendtime: 2009-03-20 15:40:51

       jobid: 11,242
         job: DatabaseArchives_crank-va-3.2009-03-20_15.40.59
   schedtime: 2009-03-20 15:40:02
   starttime: 2009-03-20 15:40:21
     endtime: 2009-03-20 15:40:28
 realendtime: 2009-03-20 15:40:28

-- 
Brain: Pinky, Are you pondering what I'm pondering?
Pinky: Wuh, I think so, Brain, but if we didn't have
   ears, we'd look like weasels.
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] Better way to garbage collect postgresql database

2009-03-20 Thread Kevin Keane
Hemant Shah wrote:


 --- On Fri, 3/20/09, Jesper Krogh jes...@krogh.cc wrote:

  From: Jesper Krogh jes...@krogh.cc
  Subject: Re: [Bacula-users] Better way to garbage collect postgresql database
  To: hj...@yahoo.com
  Cc: baculausers bacula-users@lists.sourceforge.net
  Date: Friday, March 20, 2009, 12:30 AM

  Hemant Shah wrote:
   This is a database question, but I figured some of the bacula users
   may have come across this problem so I am posting it here.

   Every monday I run the following commands to check and garbage
   collect the bacula database:

   dbcheck command
   vacuumdb -q -d bacula -z -f

  There is absolutely no reason to vacuum full unless your data-size
  actually is shrinking over time (longer periods). A normal vacuum will
  make it available for the next run.. you most likely are running
  autovacuum anyway.

 Yes, I do run autovacuum, but I just started using postgresql and I am
 not sure how efficient it is in space usage. I was afraid that the data
 would be fragmented if it re-uses the free space.
The opposite is true, actually. When there is no free space left in the 
database, postgres has no choice but to append new data at the end, 
regardless of where the remaining table data is stored. When you let 
postgres manage the free space, it can pick the space most appropriate 
for storing any particular piece of data, and minimize fragmentation.
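
In that light, the weekly routine can probably drop the full vacuum
entirely; a sketch of the lighter maintenance discussed in this thread,
assuming autovacuum stays enabled:

  vacuumdb -q -d bacula -z      # plain VACUUM ANALYZE; free space is reused
  reindexdb bacula              # occasionally, for index bloat only

The -f (VACUUM FULL) variant rewrites the tables to hand space back to the
OS, which is only worth the long run time if the data set is genuinely
shrinking.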

-- 
Kevin Keane
Owner
The NetTech
Find the Uncommon: Expert Solutions for a Network You Never Have to Think About

Office: 866-642-7116
http://www.4nettech.com

This e-mail and attachments, if any, may contain confidential and/or 
proprietary information. Please be advised that the unauthorized use or 
disclosure of the information is strictly prohibited. The information herein is 
intended only for use by the intended recipient(s) named above. If you have 
received this transmission in error, please notify the sender immediately and 
permanently delete the e-mail and any copies, printouts or attachments thereof.

