Re: [Bacula-users] RunScript and Redirecting output.

2012-04-02 Thread Graham Keeling
On Mon, Apr 02, 2012 at 10:56:43AM +, Rob Becker wrote:
 RunScript {
 RunsWhen  = Before
 Runs On Client = Yes
 Command = "/bin/echo `/bin/hostname` >> /usr/local/bacula/working/restore_file"
 Command = "/bin/date +%%F >> /usr/local/bacula/working/restore_file"
   }
...
 It looks like Bacula just ignores everything after, and including, the
 greater than sign.

I think that the problem is probably that your greater-than signs are shell
features, and Bacula does not run the command through a shell.
You could try putting your commands into a shell script and running the script
instead.
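A minimal sketch of such a wrapper (path and function name hypothetical; the body just mirrors the two quoted commands). Because a shell runs the script file, the `>>` redirections behave normally:

```shell
#!/bin/sh
# Sketch of the body of a wrapper script, e.g. (hypothetically)
# /usr/local/bacula/scripts/restore_info.sh. The shell interprets the
# >> redirections here, which Bacula's Command= does not.
append_host_date() {
    out=$1                      # e.g. /usr/local/bacula/working/restore_file
    /bin/hostname >> "$out"     # append the hostname
    /bin/date +%F >> "$out"     # append today's date (YYYY-MM-DD)
}
```

The RunScript would then contain only `Command = "/usr/local/bacula/scripts/restore_info.sh"` (path illustrative).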


--
This SF email is sponsored by:
Try Windows Azure free for 90 days Click Here 
http://p.sf.net/sfu/sfd2d-msazure
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Exclude directories

2011-09-26 Thread Graham Keeling
On Mon, Sep 26, 2011 at 04:47:11PM +0800, Lyn Amery wrote:
 
 Hi all,
 
 I've been wading through the documentation on how to exclude 
 files and directories from backup jobs but have had no luck.
 I'm trying to exclude files like .pgpass and .svn directories
 from wherever they occur in the filesystem.
 
 The following works for /tmp and /var/tmp but seems to have 
 no effect on directories named .svn:
 
   Exclude {
File = /tmp
File = /var/tmp
File = ".svn"
   }
 
 (I've also tried it without the quotes, i.e. File = .svn)
 
 The examples in the official Bacula documentation seem to 
 indicate that it should just work.  Have I missed the point?
 
 I'm using the latest Bacula 5.0.3.

Have you tried WildFile?
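A sketch of what that could look like (fileset name and the catch-all `File = /` line are illustrative; `WildFile`/`WildDir` inside an `Exclude = yes` Options block match the patterns wherever they occur):

```
FileSet {
  Name = "ExampleSet"
  Include {
    Options {
      Exclude = yes
      WildFile = "*/.pgpass"
      WildDir  = "*/.svn"
    }
    Options {
      signature = MD5
    }
    File = /
  }
}
```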




Re: [Bacula-users] error restoring Exchange 2003

2011-09-16 Thread Graham Keeling
On Fri, Sep 16, 2011 at 08:52:18AM +0300, Silver Salonen wrote:
 On Thu, 15 Sep 2011 15:27:42 +0100, Martin Simmons wrote:
  On Thu, 15 Sep 2011 16:47:53 +0300, Silver Salonen said:
 
  15-Sep 13:48 bextract JobId 0: Error: Unknown stream=26 ignored. 
  This shouldn't happen!
 
  I think bextract doesn't work with backups from a plugin (stream=26).
 
  __Martin
 
 Hmm.. that's a pity. Any ideas why restoring through the Exchange 
 plugin would hang at .log files?

Yes.

See bacula bug number 0001647, which has the status "closed" with the
resolution "won't fix":
http://marc.info/?l=bacula-bugs&m=129690630228142&w=2




Re: [Bacula-users] Full-Job gets killed by time limit

2011-08-31 Thread Graham Keeling
On Wed, Aug 31, 2011 at 04:44:59PM +0200, Uwe Bolick wrote:
 Thank you for your answer,
 
 On Wed, Aug 31, 2011 at 04:20:20PM +0200, Andre Lorenz wrote:
  ...
  i have solved this problem by splitting up the data which has to be
  backed up.
  so the amount of data going to tape is smaller, the backup runs
  faster, and restore is much easier ;-)
  
  andre
  ...
 This has already been done. This was my 3rd try to get it done. After
 the first one I split it up as usefully as possible, but one directory
 remains too large and I cannot break it into smaller pieces without a
 terrible config.
 
 I do have a plan B - reading the tapes with bscan and getting the missing
 files from an incremental dump - but imho professional software like
 bacula should be able to handle such cases.
 
 Kind regards,
   Uwe

Hello,

How about copying the files from the remote location onto a local disk, using
something like rsync? You could then use bacula (or tar) to get the files onto
the tape.

I don't think rsync will just suddenly time out after six days. And if it does,
you can just start it up again from where it was interrupted. 




Re: [Bacula-users] Script Client Run and

2011-08-24 Thread Graham Keeling
On Wed, Aug 24, 2011 at 04:09:33PM +0300, Yuri Timofeev wrote:
 Hi
 
 Problem is in the ">" symbol.
 File /tmp/test.log is not created.
 Only the first part of the command, "ls -la", is run.
 "> /tmp/test.log" did not work.
 
 
 Config file:
 
 Job {
   ...
   Client Run Before Job = "ls -la > /tmp/test.log"
 }
 
 
 Console log:
 
 fd JobId 32287: shell command: run ClientRunBeforeJob "ls -la > /tmp/test.log"
 fd JobId 32287: ClientRunBeforeJob: ls: >: No such file or directory
 fd JobId 32287: ClientRunBeforeJob: ls: /tmp/test.log: No such file or
 directory
 fd JobId 32287: Error: Runscript: ClientRunBeforeJob returned non-zero
 status=2. ERR=Child exited with code 2
 
 
 
 This is just a test. My working version will be "mysqldump ... > dump.sql"

As a workaround, you could try putting your commands in a script and running
the script instead.
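For example (script path hypothetical), the job would reference a script whose contents do the redirection, mirroring the quoted test case:

```
# Contents of the (hypothetical) /usr/local/bacula/scripts/pre_backup.sh:
#   #!/bin/sh
#   ls -la > /tmp/test.log
Job {
  ...
  Client Run Before Job = "/usr/local/bacula/scripts/pre_backup.sh"
}
```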




Re: [Bacula-users] clarification on purging and File volume retention

2011-08-03 Thread Graham Keeling
On Tue, Aug 02, 2011 at 05:11:11PM -0700, scar wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256
 
 i'm not using tapes, i just have my volumes stored in a directory.  it's
 my intention to take a full backup every month on the first, with
 incrementals and differentials during the month like in the default
 configuration.  with a volume use duration of 23 hours, a new volume
 will get created every day.  so, it seemed reasonable to me to have the
 volume retention set to 40 days.  with AutoPrune and Recycle both set to
 yes, this seemed like a maximum of 40 volumes would be created and used,
 thus limiting the amount of disk space the backups would use.
 
 but then i started reading the other settings for the Pool resource, and
 confusion ensued.
 
 in there, it indicates the minimum volume retention should be twice the
 interval of my full backups, i.e. two months.  anyone care to explain
 that?  it seems to me with a 40 day limit, at least 2 full backups will
 always exist.

Just taking your first question - there is discussion here:
http://adsm.org//lists/html/Bacula-users/2011-01/msg00305.html

 secondly, it goes on to say that i should be using `Action On Purge =
 Truncate` along with a RunScript in my CatalogBackup Job to run `purge
 volume action=all allpools storage=File`.  when i went to read up on the
 `purge volume` command, it seems to me that the command should really be
 `purge volume action=*truncate* allpools storage=File`.
 
 finally, i then went on to read about the Recycle Oldest Volume
 directive which sounds reasonable, but i'm confused as to what is really
 happening without this directive set to yes.
 
 is all of this really necessary?  thanks
 
 here's my Pool resource currently,
 
 Pool {
   Name = File
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 40 days
   Maximum Volume Bytes = 100G
   Maximum Volumes = 100
   LabelFormat = Vol
   Volume Use Duration = 23h
 }
 
 -BEGIN PGP SIGNATURE-
 
 iEYEAREIAAYFAk44kh4ACgkQXhfCJNu98qD7BgCcDJuO8TfpSPXz1IasbYWesDZL
 VZIAoNu0JbiW+/R9Fu2xjTeROd9OTrzZ
 =AP63
 -END PGP SIGNATURE-
 
 




Re: [Bacula-users] Problems backing up to disk

2011-06-27 Thread Graham Keeling
On Sun, Jun 26, 2011 at 07:10:19PM -0700, mikewilt wrote:
 Media Type = File
 
 Same for both storage daemons.

I found out by experience that bacula gets confused if you have the same media
type on different storages.
It may work better if you give each storage a different media type.
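A sketch of the idea (names, paths, and the rest of each Device resource are illustrative; the corresponding Director `Storage` resources would use the matching `Media Type`):

```
# On the first storage daemon:
Device {
  Name = FileStorage1
  Media Type = File1
  Device Type = File
  Archive Device = /backup/storage1
}

# On the second storage daemon:
Device {
  Name = FileStorage2
  Media Type = File2
  Device Type = File
  Archive Device = /backup/storage2
}
```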

 Mike
 
 +--
 |This was sent by m...@mwilt.org via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--
 
 
 




Re: [Bacula-users] Restore of files when they don't list

2011-06-24 Thread Graham Keeling
On Thu, Jun 23, 2011 at 02:01:56PM -0700, Steve Ellis wrote:
 On 6/23/2011 1:31 PM, Troy Kocher wrote:
  Listers,
 
  I'm trying to restore data from medicaid 27, but it appears there are no 
  files.  There is a file corresponding with this still on the disk, so I 
  think it's just been purged from the database.
 
  Could someone help me thru the restore process when the files are no longer 
  in the database.
 
  Thanks!
 
  Troy
 
 There are really only 3 options here that I can think of:
  1) restore the entire job (probably to a temporary location), then 
 prune the bits you don't want.
  2) use bscan of the volume to recreate the file list in the db 
 (note that I have only used this when the job itself had been expired 
 from the DB)
  3) restore a dump of the catalog that contains the file entries 
 that you wanted that have been expired

4) Use bextract.

 I'm pretty sure I've done both #1 and #2, #3 I'd be much more reluctant 
 to just try, as I would worry about clobbering more recent catalog data, 
 unless you used a separate catalog db for the restoration.  Unless the 
 job is really huge, I'd probably do #1, because bscan is (slightly) 
 dodgy, especially for backups that span volumes (IMHO, note that it is 
 _much_ better than not having bscan at all).  Sorry I can't provide more 
 detail, hopefully someone else will be able to help more.
 
 -se
 




Re: [Bacula-users] Removing failed jobs from media/database?

2011-06-07 Thread Graham Keeling
On Tue, Jun 07, 2011 at 12:40:18PM +0200, Roy Sigurd Karlsbakk wrote:
 As far as I can understand, if a job is interrupted or failed, the data 
 stored on tape/disk won't be freed until the normal retention is over. Would 
 it be possible to add a flag for 'failed' data for quicker (or immediate) 
 retention?

I have a patch that might help you.

Index: src/cats/sql_cmds.c
===================================================================
RCS file: /home/graham/bacula-5.0.3/src/cats/sql_cmds.c,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -r1.1 -r1.2
--- src/cats/sql_cmds.c
+++ src/cats/sql_cmds.c
@@ -95,10 +95,13 @@
 const char *del_JobMedia = "DELETE FROM JobMedia WHERE JobId=%s";
 const char *cnt_JobMedia = "SELECT count(*) FROM JobMedia WHERE MediaId=%s";
 
+/* Graham says: This was hacked so that it also selects jobs
+   in error, as well as those past the retention time. As of 2009-08-25, it
+   is only used in one place - ua_prune.c */
 const char *sel_JobMedia =
    "SELECT DISTINCT JobMedia.JobId FROM JobMedia,Job "
    "WHERE MediaId=%s AND Job.JobId=JobMedia.JobId "
-   "AND Job.JobTDate<%s";
+   "AND (Job.JobTDate<%s OR Job.JobStatus IN ('A','E','f'))";

 /* Count Select JobIds for File deletion */
 const char *count_select_job =




Re: [Bacula-users] Clarification on Exchange plugin

2011-05-31 Thread Graham Keeling
On Tue, May 31, 2011 at 10:52:21AM +0100, Marc Goujon wrote:
 This is my client configurations, and in the Plugin directory I have
 exchange-fd.dll.
 
 
 FileDaemon {  # this is me
   Name = sbs-fd
   FDport = 9102  # where we listen for the director
   WorkingDirectory = C:/Program Files/Bacula/working
   Pid Directory = C:/Program Files/Bacula/working
   Plugin Directory = C:/Program Files/Bacula/plugins
   Maximum Concurrent Jobs = 10
 }
 
 My fileSet is the one you suggested:
 
 
 FileSet {
   Name = Exchange
   Include {
Options {
 signature = MD5
 compression = GZIP9
 IgnoreCase = yes
 }
   Plugin = "exchange:@EXCHANGE/Microsoft Information Store"
 
   }
 }
 
 
 
 
 The error I am getting is the following:
 
 bacula-dir Start Backup JobId 59, Job=Backup_Exchange.2011-05-31_10.40.23_29
 
  Using Device FileStorage
 
 
 sbs-fd Warning: VSS was not initialized properly. VSS support is disabled. 
 ERR=An attempt was made to reference a token that does not exist.
 
 Fatal error: /home/kern/bacula/k/bacula/src/filed/fd_plugins.c:223 Command 
 plugin exchange:@EXCHANGE/Microsoft Information Store requested, but is not 
 loaded.
 bacula-sd Volume HERE previously written, moving to end of data. Ready to 
 append to end of Volume HERE size=12959845363
 bacula-dir
 
 
 I have taken note about the problem with the restore situation, but any help 
 to sort this would be great.

Sorry, I think I have reached my limit of what I can help you with.
All I can suggest is that you check everything again and if everything looks
right to you, add some debug to relevant areas of the code.
It seems from the last error message that the exchange plugin wasn't loaded.
And upgrade to bacula-5.0.3 if you're not there already.

 marc
 
 
 
 On 26/05/11 14:24, Graham Keeling wrote:
 On Thu, May 26, 2011 at 01:50:39PM +0100, Marc Goujon wrote:
 Hello,
 
 I know this might have been asked millions of times before, but I cannot
 seem to locate clear information via the archives or the official
 documentation and I feel I am almost there.
 
   From what I have understood, the Exchange plugin is basically that
 exchange-fd.dll file. The first question is: is that enough? I ask this
 because while reading the archives someone mentioned that bpipe plugin
  was needed, however the documentation says "The purpose of the plugin is
  to provide an interface to any system program for backup and restore."
  So... a plugin requires another plugin? Is bpipe just allowing to
  include "plugin =" expressions in the file sets?
 
 Secondly, my config for the director (a linux machine) says:
 
 FileSet {
 Name = Exchange
 #Enable VSS = yes
 Include {
 
 File = C:\\Program Files\\Microsoft\\Exchange Server\\Mailbox
  Plugin = "exchange:@EXCHANGE/Microsoft Information Store"
 
 }
 
 Exclude {
   File = C:\\Program
 Files\\Microsoft\\Exchange\\Mailbox\\First Storage Group\\Mailbox
 Database.edb
   File = C:\\Program
 Files\\Microsoft\\Exchange\\Mailbox\\First Storage Group\\Mailbox
 Database.edb
   File = C:\\Program
 Files\\Microsoft\\Exchange\\Mailbox\\Second Storage Group\\Public Folder
 Database.edb
 
   }
 }
 
 However when my job fails, I get the following error:
 
 sbs-fd Cannot open C:\Program Files\Microsoft\Exchange
 Server\Mailbox/First Storage Group/Mailbox Database.edb: ERR=The
 process cannot access the file because it is being used by another process.
 
 sbs-fd Cannot open C:\Program Files\Microsoft\Exchange
 Server\Mailbox/Second Storage Group/Public Folder Database.edb: ERR=The
 process cannot access the file because it is being used by another process.
 
Fatal error: /home/kern/bacula/k/bacula/src/filed/fd_plugins.c:223 
  Command plugin exchange:@EXCHANGE/Microsoft Information Store requested, 
  but is not loaded.
 
 
 There must be something I am missing here, as although those files are
 explicitly excluded, they are still causing the error?
 
 Any tips or docs you can point me to would be greatly  appreciated.
 a) I think you should only use forward slashes ('/') in your filesets, not
 backslashes.
 
 b) This is the fileset that I used to use:
 
  FileSet {
    Name = "Windows:2k3-pt2:Windows Exchange Server data"
    Include {
      Options {
        signature = MD5
        compression = GZIP9
        IgnoreCase = yes
      }
      Plugin = "exchange:/@EXCHANGE/Microsoft Information Store"
    }
  }
 
 c) Be very very careful with this. I believe that the plugin doesn't work
 properly. Specifically, it will restore from:
  * A full backup
  * A full backup, plus one incremental.
 But then it may not restore from:
  * A full backup, plus two incrementals.
 And be less and less likely to work for each incremental that you add.
  And without attempting to restore, it will seem as if it is working.

Re: [Bacula-users] Clarification on Exchange plugin

2011-05-31 Thread Graham Keeling
On Wed, Jun 01, 2011 at 01:01:39AM +1000, James Harper wrote:
  The error I am getting is the following:
  
  bacula-dir Start Backup JobId 59,
 Job=Backup_Exchange.2011-05-31_10.40.23_29
  
Using Device FileStorage
  
  
  sbs-fd Warning: VSS was not initialized properly. VSS support is
 disabled.
  ERR=An attempt was made to reference a token that does not exist.
  
  Fatal error: /home/kern/bacula/k/bacula/src/filed/fd_plugins.c:223
 Command
  plugin exchange:@EXCHANGE/Microsoft Information Store requested, but
 is not
  loaded.
  bacula-sd Volume HERE previously written, moving to end of data.
 Ready to
  append to end of Volume HERE size=12959845363
  bacula-dir
  
 
 Questions I should have asked you earlier... this looks a lot like you
 are trying to run a 32 bit fd on a 64 bit windows system with those VSS
 failures. Is that the case? If so, you probably aren't using Exchange
 2003, and I'm pretty sure that the plugin doesn't work on anything
 newer.

It was supposed to work on 2007 too:
http://www.bacula.org/manuals/en/concepts/concepts/New_Features.html#SECTION00320




[Bacula-users] Bacula 5.0.3 Macintosh file daemon?

2011-05-27 Thread Graham Keeling
Hello,
Does anybody know where I might be able to find a bacula-5.0.3 Mac file daemon?
Or an installer?
Thanks.




Re: [Bacula-users] Priority of Jobdefinition

2011-05-27 Thread Graham Keeling
On Fri, May 27, 2011 at 05:37:42PM +0200, Joris Heinrich wrote:
 no one has an idea?

I think I wanted to do something similar.
That is, have different Runs in a Schedule go at different priority.
I asked the question on this list and got no replies.
I couldn't work out how to do it, so now I think that it is not possible.

 best regards
 
 JHN
  Hello List,
 
  i have the following configuration of a backup job:
 
  Job {
  Name= stf-imap1-imap1-mailsystem
  Client  = stf-imap1
  JobDefs = s1_mon_sat_s2_sun_2
  Schedule= imap
  FileSet = imap1-mailsystem
  Priority= 9
  }
 
  Schedule {
  Name= imap
  Run = Level=Incremental  Pool=library-S1 SpoolData=yes daily
  at 00:01
  Run = Level=Incremental  Pool=library-S1 SpoolData=yes daily
  at 04:01
  Run = Level=Incremental  Pool=library-S1 SpoolData=yes daily
  at 08:01
  Run = Level=Incremental  Pool=library-S1 SpoolData=yes daily
  at 12:01
  Run = Level=Incremental  Pool=library-S1 SpoolData=yes daily
  at 16:01
  Run = Level=Incremental  Pool=library-S1 SpoolData=yes daily
  at 20:01
 
  Run = Level=Differential Pool=library-S1 SpoolData=yes mon-fri
  at 01:00
 
  Run = Level=Full Pool=library-S2 SpoolData=yes sat at
  01:00
  }
 
  JobDefs {
  Name= s1_mon_sat_s2_sun_2
  Type= Backup
  Level   = Full
  Pool= library-S1 # overwritten in schedule
  SpoolData   = yes
  #FileSet = see concrete job
  Schedule= s1_mon_sat_s2_sun_2
  Messages= Standard
  Write Bootstrap = /backup/bsr/%n.bsr
  Allow Mixed Priority = yes
  }
 
 
  All other machines run with priority 10.
  The incremental and differential jobs with priority 9 are running fine
  and should not be changed. But now my problem... The full backup job on
  sat. runs for 12 h, and all the other machines have to wait (bad solution
  ;-)).
 
  Now my question: how can i change the priority of the full backup on sat to
  10 and get parallel running jobs?
  A second job for the full backup is not possible, because then the full
  backup would be missing from the first job, and i would get two full backup jobs?
 
  Thanks for help
 
  JHN
 
 
 






Re: [Bacula-users] Restore problems

2011-05-26 Thread Graham Keeling
On Thu, May 26, 2011 at 11:43:16AM +0100, Alan Brown wrote:
 John Drescher wrote:
  I'm trying to restore a 3 month old backup, but the database says there
  are no files to restore.
 
  The strange thing is, I can _see_ the entries in the database for that
  date and the full backup 10 days before.
 
  Has anyone seen this kind of behaviour?
 
  
  Do you have different file and job retentions?
 
 Just to clarify this - I can see the _files_ in the database when I look 
 at the files in the JobId of the full backup, but when I go to do a 
 restore it says no files are available.

Perhaps the problem could be something to do with the Full job that the backup
depends on having been purged already.


Can you copy and paste the output of your restore attempt?




Re: [Bacula-users] Clarification on Exchange plugin

2011-05-26 Thread Graham Keeling
On Thu, May 26, 2011 at 01:50:39PM +0100, Marc Goujon wrote:
 Hello,
 
 I know this might have been asked millions of times before, but I cannot 
 seem to locate clear information via the archives or the official 
 documentation and I feel I am almost there.
 
  From what I have understood, the Exchange plugin is basically that 
 exchange-fd.dll file. The first question is: is that enough? I ask this 
 because while reading the archives someone mentioned that bpipe plugin 
 was needed, however the documentation says "The purpose of the plugin is 
 to provide an interface to any system program for backup and restore." 
 So... a plugin requires another plugin? Is bpipe just allowing to 
 include "plugin =" expressions in the file sets?
 
 Secondly, my config for the director (a linux machine) says:
 
 FileSet {
Name = Exchange
#Enable VSS = yes
Include {
 
File = C:\\Program Files\\Microsoft\\Exchange Server\\Mailbox
 Plugin = "exchange:@EXCHANGE/Microsoft Information Store"
 
}
 
Exclude {
  File = C:\\Program 
 Files\\Microsoft\\Exchange\\Mailbox\\First Storage Group\\Mailbox 
 Database.edb
  File = C:\\Program 
 Files\\Microsoft\\Exchange\\Mailbox\\First Storage Group\\Mailbox 
 Database.edb
  File = C:\\Program 
 Files\\Microsoft\\Exchange\\Mailbox\\Second Storage Group\\Public Folder 
 Database.edb
 
  }
}
 
 However when my job fails, I get the following error:
 
 sbs-fd Cannot open C:\Program Files\Microsoft\Exchange 
 Server\Mailbox/First Storage Group/Mailbox Database.edb: ERR=The 
 process cannot access the file because it is being used by another process.
 
 sbs-fd Cannot open C:\Program Files\Microsoft\Exchange 
 Server\Mailbox/Second Storage Group/Public Folder Database.edb: ERR=The 
 process cannot access the file because it is being used by another process.
 
   Fatal error: /home/kern/bacula/k/bacula/src/filed/fd_plugins.c:223 Command 
 plugin exchange:@EXCHANGE/Microsoft Information Store requested, but is not 
 loaded.
 
 
 There must be something I am missing here, as although those files are 
 explicitly excluded, they are still causing the error?
 
 Any tips or docs you can point me to would be greatly  appreciated.

a) I think you should only use forward slashes ('/') in your filesets, not
backslashes.

b) This is the fileset that I used to use:

FileSet {
  Name = "Windows:2k3-pt2:Windows Exchange Server data"
  Include {
    Options {
      signature = MD5
      compression = GZIP9
      IgnoreCase = yes
    }
    Plugin = "exchange:/@EXCHANGE/Microsoft Information Store"
  }
}

c) Be very very careful with this. I believe that the plugin doesn't work
properly. Specifically, it will restore from:
* A full backup
* A full backup, plus one incremental.
But then it may not restore from:
* A full backup, plus two incrementals.
And be less and less likely to work for each incremental that you add.
And without attempting to restore, it will seem as if it is working.

See bacula bug number 0001647, which has the status "closed" with the
resolution "won't fix":
http://marc.info/?l=bacula-bugs&m=129690630228142&w=2

You might be OK if you always did Full backups, but that defeats the point.
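If you did want to go that route anyway, one way is a Schedule that forces every run of the Exchange job to Full (name and time here are illustrative, and this trades away the space/time savings of incrementals):

```
Schedule {
  Name = "ExchangeFullOnly"        # hypothetical name
  Run = Level=Full daily at 23:05  # force every run to a Full backup
}
```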




Re: [Bacula-users] Clarification on Exchange plugin

2011-05-26 Thread Graham Keeling
On Thu, May 26, 2011 at 03:55:41PM +0100, Marc Goujon wrote:

 a) I think you should only use forward slashes ('/') in your filesets, not
 backslashes.
 
 Replaced all paths with forward slashes.
 However, exactly the same results are obtained. Back to my original
 question, is the bpipe plugin required to run the exchange plugin?

I don't think so.

d) 
  Fatal error: /home/kern/bacula/k/bacula/src/filed/fd_plugins.c:223 Command 
  plugin exchange:@EXCHANGE/Microsoft Information Store requested, but is 
  not loaded.

I think this 'fatal error' is the big problem. You need the exchange-fd.dll
file somewhere on the client machine, and you need the client conf on the
client machine to have a 'Plugin Directory = C:/some/path/to/the/plugin/dir'
option in it.

e) I think you don't need your includes/excludes. In fact you have included a
directory and then excluded the same directory. Try the fileset that I posted
earlier. That will get rid of the 'process cannot access the file' messages
and make things simpler.

f) I refer you again to (c) in my first reply, because it is very important.




Re: [Bacula-users] restore specific jobid

2011-05-26 Thread Graham Keeling
On Thu, May 26, 2011 at 07:24:57PM -0300, Cleuson Alves wrote:
 Hello everybody, if you need to restore a full jobid and some incremental
 but not all, as could be done via the console, for a given jobid
 restore a specific
 server and all incremental, one at a time, ie, in order, seems too much work,
 so what would be the quickest way?

I would like to help, but I find your question confusing.

There is an option in bconsole restore to type in a list of jobids by hand.
There is also an option (I think number 12) to enter a jobid to restore to -
you give the jobid, and it gets all the jobids it depends upon.

Do these not do what you want?




Re: [Bacula-users] Accurate Job - Cannot find previous jobids

2011-05-18 Thread Graham Keeling
On Wed, May 18, 2011 at 10:47:05AM +0200, Christian Manal wrote:
 Hi list,
 
 I have a problem regarding accurate backups. When I set 'Accurate = yes'
 for any given job in my setup, the next run fails with the following
 error(s):
 
Fatal error: Cannot find previous jobids.
Fatal error: Network error with FD during Backup: ERR=Interrupted
 system call
 
 The strange thing is, contrary to everything google came up with for
 these messages, that the catalog seems to be in order. At least I can
 build a filetree for the most recent backups of all my clients in both
 bconsole and bat and restore files without a problem.
 
 Does anyone have an idea what could be going on here? My Bacula version
 is 5.0.3 with a Postgres 8.3 catalog on Solaris 10. Any pointers would
 be appreciated.

Bacula looks for the last full backup in the database. And it relies on 
timestamps to find it. So, I would look for your previous full and its
timestamps, and check that your clock is set later than those timestamps.
Though, if this were the case, I would expect your job to be upgraded to be
a full. So perhaps something more complicated is going on.

If times don't explain it, take a look at this bacula code from
src/cats/sql_get.c (function db_accurate_get_jobids()), which is getting
the jobids from the database. You should be able to construct very similar
queries and run them by hand to see what the database says.
Or add some debug to get the exact sql queries being used.

   /* First, find the last good Full backup for this job/client/fileset */
   Mmsg(query,
"CREATE TABLE btemp3%s AS "
 "SELECT JobId, StartTime, EndTime, JobTDate, PurgedFiles "
   "FROM Job JOIN FileSet USING (FileSetId) "
  "WHERE ClientId = %s "
    "AND Level='F' AND JobStatus IN ('T','W') AND Type='B' "
    "AND StartTime<'%s' "
    "AND FileSet.FileSet=(SELECT FileSet FROM FileSet WHERE FileSetId = %s) "
  "ORDER BY Job.JobTDate DESC LIMIT 1",
        edit_uint64(jcr->JobId, jobid),
        edit_uint64(jr->ClientId, clientid),
        date,
        edit_uint64(jr->FileSetId, filesetid));

   if (jr->JobLevel == L_INCREMENTAL || jr->JobLevel == L_VIRTUAL_FULL) {
      /* Now, find the last differential backup after the last full */
      Mmsg(query,
"INSERT INTO btemp3%s (JobId, StartTime, EndTime, JobTDate, PurgedFiles) "
 "SELECT JobId, StartTime, EndTime, JobTDate, PurgedFiles "
   "FROM Job JOIN FileSet USING (FileSetId) "
  "WHERE ClientId = %s "
    "AND Level='D' AND JobStatus IN ('T','W') AND Type='B' "
    "AND StartTime > (SELECT EndTime FROM btemp3%s ORDER BY EndTime DESC LIMIT 1) "
    "AND StartTime < '%s' "
    "AND FileSet.FileSet= (SELECT FileSet FROM FileSet WHERE FileSetId = %s) "
  "ORDER BY Job.JobTDate DESC LIMIT 1 ",
           jobid,
           clientid,
           jobid,
           date,
           filesetid);

      /* We just have to take all incremental after the last Full/Diff */
      Mmsg(query,
"INSERT INTO btemp3%s (JobId, StartTime, EndTime, JobTDate, PurgedFiles) "
 "SELECT JobId, StartTime, EndTime, JobTDate, PurgedFiles "
   "FROM Job JOIN FileSet USING (FileSetId) "
  "WHERE ClientId = %s "
    "AND Level='I' AND JobStatus IN ('T','W') AND Type='B' "
    "AND StartTime > (SELECT EndTime FROM btemp3%s ORDER BY EndTime DESC LIMIT 1) "
    "AND StartTime < '%s' "
    "AND FileSet.FileSet= (SELECT FileSet FROM FileSet WHERE FileSetId = %s) "
  "ORDER BY Job.JobTDate DESC ",
           jobid,
           clientid,
           jobid,
           date,
           filesetid);
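As a sketch of "running them by hand", here is the first lookup with
placeholder literals substituted (the ids and cutoff date are made up; take
the real values from your own catalog):

```sql
-- Hand-built equivalent of the 'last good Full' lookup; ClientId,
-- FileSetId and the StartTime cutoff are placeholders to fill in.
SELECT JobId, StartTime, EndTime, JobTDate, PurgedFiles
  FROM Job JOIN FileSet USING (FileSetId)
 WHERE ClientId = 1
   AND Level = 'F' AND JobStatus IN ('T','W') AND Type = 'B'
   AND StartTime < '2011-05-18 10:00:00'
   AND FileSet.FileSet = (SELECT FileSet FROM FileSet WHERE FileSetId = 1)
 ORDER BY Job.JobTDate DESC LIMIT 1;
```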


 Regards,
 Christian Manal
 


Re: [Bacula-users] Accurate Job - Cannot find previous jobids

2011-05-18 Thread Graham Keeling
On Wed, May 18, 2011 at 11:54:18AM +0200, Christian Manal wrote:
 Am 18.05.2011 11:13, schrieb Graham Keeling:
  If times don't explain it, take a look at this bacula code from
  src/cats/sql_get.c (function db_accurate_get_jobids()), which is getting
  the jobids from the database. You should be able to construct very similar
  queries and run them by hand to see what the database says.
  Or add some debug to get the exact sql queries being used.
  
 /* First, find the last good Full backup for this job/client/fileset */
 snip
 
 Thank you. The problem seems to be that the query doesn't account for
 the job name it is supposed to do, just the client and fileset. I have
 two jobs with the same fileset for each client. One backs up to local
 storage with a full/diff/incr cycle and a rather long retention period,
 the other does monthly full backups to another building for DR and gets
 immediately purged.
 
 I enabled accurate for the onsite job but the query returns the last
 full run of the offsite job. When I add AND Name = 'JobName' to the
 query it gets the right jobid.
 
 I think this qualifies for a bug, doesn't it?

I agree with you, but...
I have just remembered coming across this before. The thread starts here:
http://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg04050.html

Kern:
Bacula does not support this option.

Me:
It does appear to be *trying* to support it, as some parts of the code that
figure out dependent jobs take note of the job name, though others do not.

Kern:
I wouldn't exactly say that it is trying to support it, but rather that since 
the program is so complicated, and I try not to restrict it too much, there 
are places where it can seem to work, but it is just not designed to do so 
(at least at the moment), and thus it will not work.  It isn't that I don't 
want it to work, but there is only so much that the developers can do in the 
time we have.

Unfortunately, what you are trying to do is simply not possible in the way you 
are trying to do it with the current code.



 Regards,
 Christian Manal
 


Re: [Bacula-users] Accurate Job - Cannot find previous jobids

2011-05-18 Thread Graham Keeling
On Wed, May 18, 2011 at 01:02:08PM +0200, Christian Manal wrote:
 Am 18.05.2011 12:26, schrieb Graham Keeling:
  On Wed, May 18, 2011 at 11:54:18AM +0200, Christian Manal wrote:
  Am 18.05.2011 11:13, schrieb Graham Keeling:
  If times don't explain it, take a look at this bacula code from
  src/cats/sql_get.c (function db_accurate_get_jobids()), which is getting
  the jobids from the database. You should be able to construct very similar
  queries and run them by hand to see what the database says.
  Or add some debug to get the exact sql queries being used.
 
 /* First, find the last good Full backup for this job/client/fileset */
 snip
 
  Thank you. The problem seems to be that the query doesn't account for
  the job name it is supposed to do, just the client and fileset. I have
  two jobs with the same fileset for each client. One backs up to local
  storage with a full/diff/incr cycle and a rather long retention period,
  the other does monthly full backups to another building for DR and gets
  immediately purged.
 
  I enabled accurate for the onsite job but the query returns the last
  full run of the offsite job. When I add AND Name = 'JobName' to the
  query it gets the right jobid.
 
  I think this qualifies for a bug, doesn't it?
  
  I agree with you, but...
  I have just remembered coming across this before. The thread starts here:
  http://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg04050.html
  
  Kern:
  Bacula does not support this option.
  
  Me:
  It does appear to be *trying* to support it, as some parts of the code that
  figure out dependent jobs take note of the job name, though others do not.
  
  Kern:
  I wouldn't exactly say that it is trying to support it, but rather that 
  since 
  the program is so complicated, and I try not to restrict it too much, there 
  are places where it can seem to work, but it is just not designed to do so 
  (at least at the moment), and thus it will not work.  It isn't that I don't 
  want it to work, but there is only so much that the developers can do in 
  the 
  time we have.
  
  Unfortunately, what you are trying to do is simply not possible in the way you 
  are trying to do it with the current code.
 
 Great... so I have to create two identical filesets to get this to work?

Or add AND Name = 'JobName', as was your idea. Maybe it works fine.

 If this kind of setup is not supported, it would be nice if I'd get at
 least a warning by './bacula-dir -t' or something.
 
 Thanks for the help, though, I'll fix my config.
 
 
 Regards,
 Christian Manal
 
 
  
  
  
  Regards,
  Christian Manal
 
  
 
 


Re: [Bacula-users] OneFS = no doesn't work

2011-05-18 Thread Graham Keeling
On Wed, May 18, 2011 at 04:31:15PM +0200, Roy Sigurd Karlsbakk wrote:
 Hi all
 
 Working on setting up Bacula backup of a fileserver, I can't make OneFS = no 
 work. The server is running OpenIndiana and has a few terabytes of storage. 
 The home directories under /tos-data/home/${username} are each a ZFS 
 filesystem/dataset. The configuration below looks good to me, but Bacula 
 still complains about "/tos-data/home/znw is a different filesystem. Will not 
 descend from /tos-data/home into it."
 
 How can I make it descend automatically?

onefs = yes   ?


 We have ~30 users on this site, and it'll be far more flexible to just backup 
 the lot than backing up each and every one of them


 
 roy (see below for config)
 
 # Home directories
 FileSet {
   Name = verdande.nilu.no-home-fileset
   Include {
 Options {
   signature = MD5
   OneFS  = no
   FSType = zfs
 }
 Options {
   Exclude = yes
   WildFile = *.mp3
 }
 File = /tos-data/home
   }
 }
 
 
 -- 
 Vennlige hilsener / Best regards
 
 roy
 --
 Roy Sigurd Karlsbakk
 (+47) 97542685
 r...@karlsbakk.net
 http://blogg.karlsbakk.net/
 --
 I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det 
 er et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av 
 idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og 
 relevante synonymer på norsk.
 


Re: [Bacula-users] OneFS = no doesn't work

2011-05-18 Thread Graham Keeling
On Wed, May 18, 2011 at 04:05:45PM +0100, Graham Keeling wrote:
 On Wed, May 18, 2011 at 04:31:15PM +0200, Roy Sigurd Karlsbakk wrote:
  Hi all
  
  Working on setting up Bacula backup of a fileserver, I can't make OneFS = 
  no work. The server is running OpenIndiana and has a few terabytes of 
  storage. The home directories under /tos-data/home/${username} are each a 
  ZFS filesystem/dataset. The configuration below looks good to me, but 
  Bacula still complains about "/tos-data/home/znw is a different filesystem. 
  Will not descend from /tos-data/home into it."
  
  How can I make it descend automatically?
 
 onefs = yes   ?

Sorry, my mistake. Having checked the documentation, you got it the right way
round. I don't know the answer.

  We have ~30 users on this site, and it'll be far more flexible to just 
  backup the lot than backing up each and every one of them
 
 
  
  roy (see below for config)
  
  # Home directories
  FileSet {
Name = verdande.nilu.no-home-fileset
Include {
  Options {
signature = MD5
OneFS  = no
FSType = zfs
  }
  Options {
Exclude = yes
WildFile = *.mp3
  }
  File = /tos-data/home
}
  }
  
  
  -- 
  Vennlige hilsener / Best regards
  
  roy
  --
  Roy Sigurd Karlsbakk
  (+47) 97542685
  r...@karlsbakk.net
  http://blogg.karlsbakk.net/
  --
  I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det 
  er et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse 
  av idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer 
  adekvate og relevante synonymer på norsk.
  


Re: [Bacula-users] mysqldump and Full,Diff,Incr

2011-05-17 Thread Graham Keeling
On Tue, May 17, 2011 at 12:31:15PM +0200, Robert Kromoser wrote:
 I have one question where I can't found any information in the bacula
 documentation.
...
 Question 1:
 
 The mysqldump backups via the bpipe plugin should be every backup run a
 full backup of the mysql database regardless which level will be used
 from the schedule, isn't it?

Yes.
As I understand it:
Bacula doesn't do delta differencing, so you will be backing up the whole
database each time, regardless of whether you have mysqldumped to a file, or
you are mysqldumping to bpipe.

Unless you can somehow get mysql to only give you the differences it has made
since the last backup. That might be a question to put to a mysql list. I've
never heard of such a thing, but I might be ignorant.
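For what it's worth, a bpipe Include entry for a full dump typically looks
something like the following (the pseudo-path, the lack of credentials, and
the exact restore command are assumptions; check the bpipe plugin docs for
your version):

```conf
Include {
  Options { signature = MD5 }
  # bpipe:<pseudo-file>:<backup command>:<restore command> -- every run
  # produces a complete dump, i.e. effectively a full backup of the data.
  Plugin = "bpipe:/MYSQL/all-databases.sql:mysqldump --all-databases:mysql"
}
```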

 Question 2:
...
 Where can I see what file sizes the pseudo filenames for the mysql
 databases have after the backup via the bpipe plugin?

That is two questions, not one. :)
Sorry, I don't know the answer to the second question.




Re: [Bacula-users] Full backup will not be identified

2011-05-16 Thread Graham Keeling
On Mon, May 16, 2011 at 08:43:51AM +0200, Robert Kromoser wrote:
 Hi folks.
 
 In my configuration I have 2 files per client.
 One file contains the storage definition (Archive Device)
 and one file contains the directory directives.
 I use one fileset definition named SugarCRM__Fileset for
 my three backup jobs SugarCRM_xxx_Full, SugarCRM_xxx_Diff 
 and SugarCRM_xxx_Incr where xxx is the name of the client.
 
 When I start a Full Backup it will be terminated successfully.
 When I start a Differential or Incremental backup immediate
 following the Full backup then the Full backup won't be identified
 and the Differential or Incremental Backup starts a Full backup again.
 
 Does anyone know this problem?

Does it find the full backup when you start the next job a long time later?

Bacula relies heavily on timestamps to determine the last full backup.
Therefore, a good thing to check is your clock and the times that are getting
written into your database.

You can use this SQL command to get a list of jobs and times:
SELECT JobId,Job,Level,StartTime,EndTime FROM Job;
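That query lists everything; to narrow it to the full backup the scheduler
should be finding, something like the following may help (the client name is
a placeholder):

```sql
-- Most recent successful Full backup for one client; compare its
-- StartTime/EndTime against the clock on the director.
SELECT Job.JobId, Job.Level, Job.StartTime, Job.EndTime
  FROM Job JOIN Client USING (ClientId)
 WHERE Client.Name = 'myclient-fd'
   AND Job.Level = 'F' AND Job.JobStatus = 'T' AND Job.Type = 'B'
 ORDER BY Job.StartTime DESC LIMIT 1;
```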




Re: [Bacula-users] How does Bacula back-up files?

2011-05-13 Thread Graham Keeling
On Fri, May 13, 2011 at 05:32:05AM -0700, obviously wrote:
 Hello,
 
 I have a question I can't solve...
 
 The is the situation:
 
 I create a file with: dd if=/dev/urandom of=test.bin bs=10M count=300
 This gives me a file of 3GB.
 I check its MD5 with md5sum test.bin
 
 I clear my cache with echo 3 > /proc/sys/vm/drop_caches.
 
 I check my cache with free -m.
 
 I start a backup with Bacula of only 1 file, namely test.bin
 
 Again, I flush the cache and when the back-up job is starting I remove the 
 test.bin file on the server.
 
 And Bacula doesn't react at all, it keeps backing up the file like it is 
 still there.
 
 The backup finishes with no warnings, even it is removed during the backup.
 
 I restore the test.bin file from tape and check the md5 of it, and strangely 
 the md5sum is the same... 
 
 So my question, how does Bacula do this? Cause I remove the file during the 
 backup and flush the cache frequently...
 
 I hope you guys understand my q, my english is really bad :) excuse me...

I would have thought that it is because the space isn't really reclaimed until
all open file descriptors are closed. Repeat the experiment and run 'df' at
these times (assuming that bacula is saving to a different filesystem):

a) before the file is created
b) after the file is created, while bacula is backing it up
c) after the file is deleted, before bacula is finished
d) after bacula has finished


a) will have low space usage
b) will have higher space usage
c) will have the same as (b)
d) will have the same as (a)
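The underlying POSIX behaviour can be demonstrated without bacula at all;
this sketch deletes a file while a descriptor is still open on it and reads
the data back afterwards:

```shell
# Data in a deleted file stays readable while any file descriptor is
# still open on it -- which is why the backup finishes with a good md5.
tmp=$(mktemp)
echo "hello backup" > "$tmp"
exec 3< "$tmp"          # hold the file open, as the backup process would
rm -f "$tmp"            # "delete" the file mid-backup
content=$(cat <&3)      # content is still there via the open descriptor
exec 3<&-
echo "$content"
```

The disk blocks are only reclaimed once that last descriptor is closed,
which matches the df figures described above.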





Re: [Bacula-users] Questions regarding migration job failure

2011-05-12 Thread Graham Keeling
On Wed, May 11, 2011 at 02:06:44PM -0700, Jerry Lowry wrote:
 another mistake on my part. You have to give bls the correct spelling
 of the volume (sometimes I wonder).

 Once I corrected the volume name this is the results I get:

 Volume Record: File:blk=0: 206 Sessid=16 SessTime=1303843290 Jobid=3  
 DataLen=171
 11-May 13:42 bls JobId 0: Error: block.c:318 Volume data error at 0:206!
 Block checksum mismatch in block=6010112 len=64512: calc=c6a6912d  
 blk=50a7d773

Well, that's the problem right there.
Your migration doesn't work because the volumes being read are corrupted.

As to how your volumes got corrupted, that's a much harder question.

If it were me, I would start everything from scratch, and after every backup
run your 'bls' command on any volume that changed. This will let you catch
the problem just after it happened, and you might be able to spot anything
strange that happened before that.

(assuming that it is a bacula bug, rather than you having a disk or a file
system problem)

 I ran this again with debug at level 200. I have attached the file with  
 the output.

 thanks for all your help!

 On 5/11/2011 12:11 PM, Jerry Lowry wrote:
 Hi,

 No, the migration job is occurring on the same storage daemon.  This  
 storage daemon has 6 raid devices setup as jbod, 3 are for daily use  
 and 3 are setup as hotswap devices for off-site backups.  The problem  
 is when I run bls on the storage daemon where the disks are located I  
 get a message asking me to mount the disk, which is already mounted  
 according to the director, as well as being mounted by the OS.



 On 5/11/2011 11:26 AM, Phil Stracchino wrote:
 On 05/11/11 13:48, Jerry Lowry wrote:
 Ok, I am trying to run bls on the specified volume file that is
 associated with this job. But the problem I am having is that bls is
 failing trying to stat the device.

 I have one director and two storage directors.  The volume I am trying
 to run against is on the second SD.  Do I run bls on the system where
 the 'director' is or on the system thats running the stand alone 'sd'
 where the volume is located?
 Jerry,
 If I'm understanding you correctly, you have two storage daemons, and
 you're trying to do a migration from a device on one SD to a device on
 the other.  Is this correct?

 If this understanding is correct, sorry, it won't work.  Copy and
 migration can currently only be done between devices controlled by the
 same SD.  (This is in large part a result of there being no current
 capability for direct communication between one storage daemon and another.)



 -- 

 ---
 Jerold Lowry
 IT Manager / Software Engineer
 Engineering Design Team (EDT), Inc. a HEICO company
 1400 NW Compton Drive, Suite 315
 Beaverton, Oregon 97006 (U.S.A.)
 Phone: 503-690-1234 / 800-435-4320
 Fax: 503-690-1243
 Web: _www.edt.com http://www.edt.com/_







 [jlowry@distress-sd bin]$ ./bls -d 200 -j -v -v -V home-0006 -c 
 /etc/bacula/bacula-sd.conf /Home
 bls: stored_conf.c:698-0 Inserting director res: distress-mon
 bls: stored_conf.c:698-0 Inserting device res: DBB
 bls: stored_conf.c:698-0 Inserting device res: Hardware
 bls: stored_conf.c:698-0 Inserting device res: Swift
 bls: stored_conf.c:698-0 Inserting device res: Home
 bls: stored_conf.c:698-0 Inserting device res: Workstations
 bls: stored_conf.c:698-0 Inserting device res: TopSwap
 bls: stored_conf.c:698-0 Inserting device res: MidSwap
 bls: stored_conf.c:698-0 Inserting device res: BottomSwap
 bls: stored_conf.c:698-0 Inserting device res: FileStorage
 bls: stored_conf.c:698-0 Inserting device res: FileStorage1
 bls: stored_conf.c:698-0 Inserting device res: Drive-1
 bls: match.c:250-0 add_fname_to_include prefix=0 gzip=0 fname=/
 bls: butil.c:281 Using device: /Home for reading.
 bls: dev.c:284-0 init_dev: tape=0 dev_name=/Home
 bls: vol_mgr.c:162-0 add read_vol=home-0006 JobId=0
 bls: butil.c:186-0 Acquire device for read
 bls: acquire.c:95-0 Want Vol=home-0006 Slot=0
 bls: acquire.c:109-0 MediaType dcr= dev=File
 bls: acquire.c:189-0 

Re: [Bacula-users] Jobs won't start

2011-05-12 Thread Graham Keeling
On Thu, May 12, 2011 at 09:36:37AM -0500, Jake Debord wrote:
 The file I have set up to back up is: C:\jaketestfolder   It isn't a
 network share or anything, just a folder I created with a couple of .mpegs
 in there to give it something to back up.

Have you tried this?
C:/jaketestfolder
(forward slash instead of backslash)
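If it helps, a fileset using forward slashes might look like this (the
folder name is taken from the thread; the rest is a sketch):

```conf
FileSet {
  Name = "WinXPTest"
  Include {
    Options { signature = MD5 }
    File = "C:/jaketestfolder"   # forward slashes avoid backslash-escape trouble
  }
}
```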

 On Thu, May 12, 2011 at 9:29 AM, John Drescher dresche...@gmail.com wrote:
  I am setting up Bacula for the first time. I am attempting to test it
  by backing up a test folder on an XP machine.
  Here are the messages I get when I try to run my job:
 
  12-May 08:45 DoraleeII-dir JobId 26: No prior Full backup Job record found.
  12-May 08:45 DoraleeII-dir JobId 26: No prior or suitable Full backup
  found in catalog. Doing FULL backup.
  12-May 08:45 DoraleeII-dir JobId 26: Start Backup JobId 26,
  Job=XPtest.2011-05-12_08.45.37_04
  12-May 08:45 DoraleeII-dir JobId 26: Using Device FileStorage
  12-May 08:45 DoraleeII-sd JobId 26: Volume Volume1 previously
  written, moving to end of data.
  12-May 08:45 DoraleeII-sd JobId 26: Ready to append to end of Volume
  Volume1 size=2137
  12-May 08:45 jake-laptop-fd JobId 26: No drive letters found for
  generating VSS snapshots.
 
 
  Are you backing up network shares or some drive that can not be
  accessed by the SYSTEM user? Remember the SYSTEM user will not have
  the network drives that the logged in user has. Each user will have
  its own profile.
 
  John
 
 


Re: [Bacula-users] Jobs won't start

2011-05-12 Thread Graham Keeling
On Thu, May 12, 2011 at 10:28:13AM -0500, Jake Debord wrote:
 Ok edited the conf directly and reloaded through bconsole.
 Any other Ideas?
 Same error:
 
 12-May 10:22 DoraleeII-dir JobId 28: Start Backup JobId 28,
 Job=XPtest.2011-05-12_10.22.35_04
 12-May 10:22 DoraleeII-dir JobId 28: Using Device FileStorage
 12-May 10:22 jake-laptop-fd JobId 28: DIR and FD clocks differ by 9
 seconds, FD automatically compensating.
 12-May 10:22 DoraleeII-sd JobId 28: Volume Volume1 previously
 written, moving to end of data.
 12-May 10:22 DoraleeII-sd JobId 28: Ready to append to end of Volume
 Volume1 size=2913
 12-May 10:22 jake-laptop-fd JobId 28: No drive letters found for
 generating VSS snapshots.

I think this 'No drive letters found for generating VSS snapshots' bit is the
problem.
Bacula goes through the includes in your fileset and tries to figure out the
drive letters (it should find C in your case).

I think you probably have a mistake in your fileset that means that
C:/yourpath is not being read in properly.

If you post your config, somebody might be able to spot the problem.

 12-May 10:22 DoraleeII-sd JobId 28: Job write elapsed time = 00:00:09,
 Transfer rate = 0  Bytes/second
 12-May 10:22 DoraleeII-dir JobId 28: Bacula DoraleeII-dir 5.0.2
 (28Apr10): 12-May-2011 10:22:46
 
 On Thu, May 12, 2011 at 10:19 AM, John Drescher dresche...@gmail.com wrote:
  I am using webmin to edit Bacula. I went to file sets, and changed the
  files and directories to backup
 
  Everytime I change it to c:/jaketestfolder  when I save it, it reverts
  it back to c:\jaketestfolder
 
  That normal?
 
  No. But by your second question you are not editing the configuration
  file directly but using some program.
 
 Should I just change it in the conf?
 
 
  Yes. I always edit the conf files directly. Remember to issue the
  reload command in bconsole after changing the configuration.
 
  John
 
 


--
Achieve unprecedented app performance and reliability
What every C/C++ and Fortran developer should know.
Learn how Intel has extended the reach of its next-generation tools
to help boost performance applications - including clusters.
http://p.sf.net/sfu/intel-dev2devmay
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Jobs won't start

2011-05-12 Thread Graham Keeling
On Thu, May 12, 2011 at 10:35:29AM -0500, Jake Debord wrote:
 Sure, here you go:

I don't think your fileset is connected to your job.
If you are using fileset WinXPTest and job XPtest, then job XPtest should have:

FileSet = WinXPTest

whereas it currently has:

FileSet = Full Set
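Something like this in bacula-dir.conf should tie them together (only a sketch; the client name is guessed from your log output, so substitute your real resource names):

```
Job {
  Name = "XPtest"
  JobDefs = "DefaultJob"
  Client = "jake-laptop-fd"     # guessed from your log; use your client resource
  FileSet = "WinXPTest"         # overrides the Full Set inherited from JobDefs
}
```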

 #
 # Default Bacula Director Configuration file
 #
 #  The only thing that MUST be changed is to add one or more
 #   file or directory names in the Include directive of the
 #   FileSet resource.
 #
 #  For Bacula release 5.0.2 (28 April 2010) -- ubuntu 10.10
 #
 #  You might also want to change the default email address
 #   from root to your address.  See the mail and operator
 #   directives in the Messages resource.
 #
 
 Director {# define myself
   Name = DoraleeII-dir
   DIRport = 9101# where we listen for UA connections
   QueryFile = /etc/bacula/scripts/query.sql
   WorkingDirectory = /var/lib/bacula
   PidDirectory = /var/run/bacula
   Maximum Concurrent Jobs = 1
   Password = * # Console password
   Messages = Daemon
   #DirAddress = 127.0.0.1
 }
 
 JobDefs {
   Name = DefaultJob
   Type = Backup
   Level = Incremental
   Client = DoraleeII-fd
   FileSet = Full Set
   Schedule = WeeklyCycle
   Storage = File
   Messages = Standard
   Pool = File
   Priority = 10
   Write Bootstrap = /var/lib/bacula/%c.bsr
 }
 
 
 #
 # Define the main nightly save backup job
 #   By default, this job will back up to disk in
 /nonexistant/path/to/file/archive/dir
 Job {
   Name = BackupClient1
   JobDefs = DefaultJob
 }
 
 #Job {
 #  Name = BackupClient2
 #  Client = DoraleeII2-fd
 #  JobDefs = DefaultJob
 #}
 
 # Backup the catalog database (after the nightly save)
 Job {
   Name = BackupCatalog
   JobDefs = DefaultJob
   Level = Full
   FileSet=Catalog
   Schedule = WeeklyCycleAfterBackup
   # This creates an ASCII copy of the catalog
   # Arguments to make_catalog_backup.pl are:
   #  make_catalog_backup.pl catalog-name
   RunBeforeJob = /etc/bacula/scripts/make_catalog_backup.pl MyCatalog
   # This deletes the copy of the catalog
   RunAfterJob  = /etc/bacula/scripts/delete_catalog_backup
   Write Bootstrap = /var/lib/bacula/%n.bsr
   Priority = 11   # run after main backup
 }
 
 #
 # Standard Restore template, to be changed by Console program
 #  Only one such job is needed for all Jobs/Clients/Storage ...
 #
 Job {
   Name = RestoreFiles
   Type = Restore
   Client=DoraleeII-fd
   FileSet=Full Set
   Storage = File
   Pool = Default
   Messages = Standard
   Where = /nonexistant/path/to/file/archive/dir/bacula-restores
 }
 
 
 # List of files to be backed up
 FileSet {
   Name = Full Set
   Include {
 Options {
   signature = MD5
 }
 #
 #  Put your list of files here, preceded by 'File =', one per line
 #or include an external list with:
 #
 #File = file-name
 #
 #  Note: / backs up everything on the root partition.
 #if you have other partitions such as /usr or /home
 #you will probably want to add them too.
 #
 #  By default this is defined to point to the Bacula binary
 #directory to give a reasonable FileSet to backup to
 #disk storage during initial testing.
 #
 #File = /usr/sbin
   }
 
 #
 # If you backup the root directory, the following two excluded
 #   files can be useful
 #
   Exclude {
 File = /var/lib/bacula
 File = /nonexistant/path/to/file/archive/dir
 File = /proc
 File = /tmp
 File = /.journal
 File = /.fsck
   }
 }
 
 #
 # When to do the backups, full backup on first sunday of the month,
 #  differential (i.e. incremental since full) every other sunday,
 #  and incremental backups other days
 Schedule {
   Name = WeeklyCycle
   Run = Full 1st sun at 23:05
   Run = Differential 2nd-5th sun at 23:05
   Run = Incremental mon-sat at 23:05
 }
 
 # This schedule does the catalog. It starts after the WeeklyCycle
 Schedule {
   Name = WeeklyCycleAfterBackup
   Run = Full sun-sat at 23:10
 }
 
 # This is the backup of the catalog
 FileSet {
   Name = Catalog
   Include {
 Options {
   signature = MD5
 }
 File = /var/lib/bacula/bacula.sql
   }
 }
 
 # Client (File Services) to backup
 Client {
   Name = DoraleeII-fd
   Address = localhost
   FDPort = 9102
   Catalog = MyCatalog
   Password = **  # password for FileDaemon
   File Retention = 30 days# 30 days
   Job Retention = 6 months# six months
   AutoPrune = yes # Prune expired Jobs/Files
 }
 
 #
 # Second Client (File Services) to backup
 #  You should change Name, Address, and Password before using
 #
 #Client {
 #  Name = DoraleeII2-fd
 #  Address = localhost2
 #  FDPort = 9102
 #  Catalog = MyCatalog
 #  Password = uDZsFQvCq2wvRs986stkk77AEKvrEoWL-2 # password
 for FileDaemon 2
 #  File Retention = 30 days# 30 days
 #  Job Retention = 6 months# six months
 #  AutoPrune = yes # Prune expired 

Re: [Bacula-users] Questions regarding migration job failure

2011-05-11 Thread Graham Keeling
On Wed, May 11, 2011 at 08:44:18AM -0700, Jerry Lowry wrote:
 Is there anyone that can help me with this problem?  Surely someone is  
 using the migration job.

I'm not using migration jobs, but maybe I can give you a hint...

 On 5/9/2011 2:51 PM, jerry lowry wrote:
 09-May 13:59 distress-sd-sd JobId 2549: Forward spacing Volume
 hardware-0007 to file:block 0:215.
 09-May 13:59 distress-sd-sd JobId 2549: Error: block.c:275 Volume data error 
 at 0:215! Wanted ID: BB02, got 2. Buffer discarded.

It seems to me that the error is not with the write to the new volume, but with
the read from the existing volume hardware-0007.

I've seen similar errors before, when I found bugs in bacula that trashed the
data on my disk volumes.

One thing to try is a restore from hardware-0007. I predict that you will
get the same error.


--
Achieve unprecedented app performance and reliability
What every C/C++ and Fortran developer should know.
Learn how Intel has extended the reach of its next-generation tools
to help boost performance applications - including clusters.
http://p.sf.net/sfu/intel-dev2devmay
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Run out of disk space

2011-05-11 Thread Graham Keeling
On Wed, May 11, 2011 at 11:05:47AM -0500, Duncan McQueen wrote:
 The second command gives me this:
 
 Error updating Volume records: ERR=sql_update.c:443 Update failed: 
 affected_rows=0 for UPDATE Media SET ActionOnPurge=0, 
 Recycle=1,VolRetention=31536000,VolUseDuration=0,MaxVolJobs=0,MaxVolFiles=0,MaxVolBytes=0,RecyclePoolId=0
  WHERE PoolId=3

Is it because your database files are on the same disk that has filled up?

If so and it were me, I would truncate a volume that I knew that I didn't want,
restart the database and try again.
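By truncate, I mean cutting the volume file down to zero bytes so that the filesystem space comes straight back. A demonstration on a scratch file (substitute the real volume file path under your Archive Device directory in practice):

```shell
# Stand-in for a full disk volume; use the real volume file in practice.
vol=$(mktemp)
dd if=/dev/zero of="$vol" bs=1024 count=100 2>/dev/null

# Cut the file to zero length; the space is freed immediately.
truncate -s 0 "$vol"

size=$(stat -c %s "$vol")
echo "$size"
```

Bear in mind that doing this behind bacula's back leaves the catalog thinking the volume still contains data, so you would also want to delete or purge that volume in bconsole afterwards.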

 -Original Message-
 From: John Drescher [mailto:dresche...@gmail.com] 
 Sent: Wednesday, May 11, 2011 10:33 AM
 To: Duncan McQueen
 Cc: Bacula-users@lists.sourceforge.net
 Subject: Re: [Bacula-users] Run out of disk space
 
  I set my retention periods too long and now have run out of disk space.  If
  I want to start over (after I have edited the conf files for a good
  retention period) and blow away the old volumes, what do I need to do?  Just
  delete the files or do I need to do something in the database?
 
 
 You would need to delete each volume in the database. I would just
 change the retention period on the pool and then use bconsole to apply
 that to the existing volumes.
 
 I think the commands are
 
 update Pool from resource
 
 then
 
 update  all volumes in pool
 
 or similar.
 
 John
 


--
Achieve unprecedented app performance and reliability
What every C/C++ and Fortran developer should know.
Learn how Intel has extended the reach of its next-generation tools
to help boost performance applications - including clusters.
http://p.sf.net/sfu/intel-dev2devmay
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Questions regarding migration job failure

2011-05-11 Thread Graham Keeling
On Wed, May 11, 2011 at 09:19:49AM -0700, Jerry Lowry wrote:
 I have not tried to restore from that particular job as yet, but the  
 next question would be, if it fails on the restore that would mean that  
 anything backed up in that job would not be valid, correct?

I think that depends upon what you mean by valid.

For example, it might be possible to skip over the bad area of the volume and
restore some files past that bad area.

If it were me, I have to say that I would indeed be treating the whole job as
suspicious. And the others too, probably.

But let's not get ahead of ourselves. Perhaps the volume is actually fine and
the problem is something else.

Rather than doing a restore, maybe it would be worth running commands like
'bls' on the volume first. It would probably give a quicker diagnosis, if
there is a problem.
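Something along these lines (the flags are from memory, so check bls's usage output first, and the device argument must match your storage setup):

```
# List the jobs on the suspect volume without attempting a restore:
bls -j -V hardware-0007 /path/to/your/archive/device

# Add -p to keep going past bad blocks and see how much is readable:
bls -p -j -V hardware-0007 /path/to/your/archive/device
```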


--
Achieve unprecedented app performance and reliability
What every C/C++ and Fortran developer should know.
Learn how Intel has extended the reach of its next-generation tools
to help boost performance applications - including clusters.
http://p.sf.net/sfu/intel-dev2devmay
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] About retention, and pruning.

2011-05-05 Thread Graham Keeling
On Thu, May 05, 2011 at 12:12:10AM -0400, Dan Langille wrote:
 On May 4, 2011, at 3:26 AM, Graham Keeling wrote:
 
  On Fri, Apr 29, 2011 at 11:11:24AM +0200, Hugo Letemplier wrote:
   I think that a new feature that add dependency between various job
   levels for the next versions of bacula could be cool.
   The idea is to allow pruning only for volume/jobs that aren't needed
   by other ones whatever are the retention time.
...
   What do you think about such a feature ?
  
  A while ago, I made a patch that does it. Nobody seemed to want it though.
  http://www.adsm.org/lists/html/Bacula-users/2011-01/msg00308.html
 
 
 Just because you didn't find anyone that wanted it does not make it a bad
 idea.
 
 Ideas are sometimes difficult to comprehend.  I didn't follow the above in a
 30 second scanning
 
 If you think it's a good idea. Pursue it.  Give examples.  Describe the
 issues, in brief, and then in general.  Build a case that others can
 comprehend with minimal effort.

Thank you for the pep talk. :)

But I think that I have explained very well, in plain terms here:
http://www.adsm.org/lists/html/Bacula-users/2011-01/msg00308.html
Bacula doesn't prevent backups that other backups depend on from being
purged.

 If you think it's a good idea, something will come of it.  But nothing will
 come of it if you don't persist.

Bacula is clearly not built to think that one backup depends directly on
another one. For example, there is no database field that says 'this backup
depends on this other backup'. This would have been one of the first fields
that I would have designed into it. Instead, it heavily relies on timestamps
and sort of scoops up everything that has a timestamp that matches.

I think that this may be down to its main authors having a 'tape mentality'
that I don't fully understand, since I have never worked with tapes.

Therefore, although I think that it would be a very good idea for bacula to
not purge a backup that another backup depends upon, I do not expect that this
patch will be adopted. And I do not expect bacula to gain this ability anytime
soon, since the concept seems so alien to it.
Making bacula do my concept of the 'right thing' would require a very radical
redesign. And perhaps my 'right thing' is different to many other people's
'right thing'.

Making the patch (which I have now attached to this email) felt as if I was
subverting some basic assumptions that I did not fully understand.

Furthermore, the patch makes its own assumptions:
a) you are using one Job per Volume
b) you are not using Job / File retentions (ie, you have set them very high)
and instead rely purely on Volume retentions. 
c) you are using mysql (it might work fine on postgres, but I haven't tried)

So, the patch is provided here on an 'as is' basis. If anybody likes it, or
wants to do something with it, or persist with it to get it into bacula, then
they can. :)

A note on the patch:
The main part starts in src/dird/autoprune.c with the call to
db_volume_has_dependents().


Final note:
I have actually gone very far down the path of 'very radical redesign' - see
http://burp.grke.net/ if you are interested.
Index: WORK/src/cats/protos.h
===
RCS file: /cvs/netpilot/GPL/bacula-5.0.3/WORK/src/cats/protos.h,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -r1.1 -r1.2
--- WORK/src/cats/protos.h
+++ WORK/src/cats/protos.h
@@ -90,6 +90,8 @@
 int db_delete_media_record(JCR *jcr, B_DB *mdb, MEDIA_DBR *mr);
 
 /* sql_find.c */
+int db_volume_has_dependents(JCR *jcr, B_DB *mdb, int mediaid);
+void db_volume_list_dependents(JCR *jcr, B_DB *mdb, DB_LIST_HANDLER sendit, void *ctx);
 bool db_find_last_job_start_time(JCR *jcr, B_DB *mdb, JOB_DBR *jr, POOLMEM **stime, int JobLevel);
 bool db_find_job_start_time(JCR *jcr, B_DB *mdb, JOB_DBR *jr, POOLMEM **stime);
 bool db_find_last_jobid(JCR *jcr, B_DB *mdb, const char *Name, JOB_DBR *jr);
@@ -131,6 +133,7 @@
 void db_list_files_for_job(JCR *jcr, B_DB *db, uint32_t jobid, DB_LIST_HANDLER sendit, void *ctx);
 void db_list_media_records(JCR *jcr, B_DB *mdb, MEDIA_DBR *mdbr, DB_LIST_HANDLER *sendit, void *ctx, e_list_type type);
 void db_list_jobmedia_records(JCR *jcr, B_DB *mdb, JobId_t JobId, DB_LIST_HANDLER *sendit, void *ctx, e_list_type type);
+void db_list_jobandmedia_records(JCR *jcr, B_DB *mdb, JobId_t JobId, DB_LIST_HANDLER *sendit, void *ctx, e_list_type type);
 void db_list_joblog_records(JCR *jcr, B_DB *mdb, JobId_t JobId, DB_LIST_HANDLER *sendit, void *ctx, e_list_type type);
 int  db_list_sql_query(JCR *jcr, B_DB *mdb, const char *query, DB_LIST_HANDLER *sendit, void *ctx, int verbose, e_list_type type);
 void db_list_client_records(JCR *jcr, B_DB *mdb, DB_LIST_HANDLER *sendit, void *ctx, e_list_type type);
Index: WORK/src/cats/sql_cmds.c
===
RCS file: /cvs/netpilot/GPL/bacula-5.0.3/WORK/src/cats/sql_cmds.c,v
retrieving

Re: [Bacula-users] About retention, and pruning.

2011-05-04 Thread Graham Keeling
On Fri, Apr 29, 2011 at 11:11:24AM +0200, Hugo Letemplier wrote:
 2011/4/29 Jérôme Blion jerome.bl...@free.fr:
  On Thu, 28 Apr 2011 17:33:48 +0200, Hugo Letemplier
  hugo.let...@gmail.com
  wrote:
  After the job ran many times: I have the following volume = job
  matching
  Vol name   Level      Time
  Test1         Full        15:50
  324            Inc         16:00
  325            Inc         16:10
  326            Inc         16:20
  324            Inc         16:30
  Test2         Full        16:40
  325            Inc         16:50
  326            Inc         17:00
 
  This is problematic because Vol324 is recycled instead of creating a new
  one
  I am not sure to understand the various retention periods : File, job,
  volume
  I think that I can increase the retention times but the problem will
  always be the same.
  ex : if I keep my incremental one hour then my first ones will always
  be purged first
  In a good strategy you purge the full sequence of incremental at the
  same time because you need to recycle you volume and don't want to
  keep a recent volume (incremental) without the previous ones.
 
  You would waste your tape/disk space.
 
  To do that I imagine that I need to create one pool per day and
  progressively reduce the retention periods. It doesn't make sense!
  I have turned the problem over on all sides but I can't find a good
  solution. Maybe the other retention periods are the solution, but I
  didn't succeed.
  Thanks in advance
 
  That means that your upper backup levels should have greater retentions to
  be sure that at any time, you can use the full + diff + inc if needed.
  Keeping incremental without full backup can be useful to restore only
  specific files.
 Yes, but this problem is the same between incremental backups:
 Lots of people recommended that I use one pool per level:
 It works for Full and Differentials, but not for the inc pool.
 Maybe one inc-pool per incremental run of a scheduling cycle would
 be good? But it's not simple.
 I think that a new feature that add dependency between various job
 levels for the next versions of bacula could be cool.
 The idea is to allow pruning only for volume/jobs that aren't needed
 by other ones whatever are the retention time.
 As a consequence : you can prune a full only (((if the differential is
 pruned) if the XXX incrementals are pruned) if the last incremental is
 pruned )
 So you can say that the maximum retention time for a full is at
 least equal to the retention time of the last inc plus the delay between
 the full and this last inc, so you have something like this:
 full  : 
 inc  :   =
 inc  : =
 inc  :   =
 inc  : =
 inc  :   =
 inc  : =
 diff  :   
 inc  : =
 inc  :   =
 inc  : =
 inc  :   =
 inc  : =
 inc  :   =
 diff  : 
 inc  :   =
 inc  : =
 inc  :   =
 inc  : =
 inc  :   =
 inc  : =
 
 and not like that :
 diff  : ==
 inc  :   ===
 inc  : ===
 inc  :   ===
 
 What do you think about such a feature ?

A while ago, I made a patch that does it. Nobody seemed to want it though.
http://www.adsm.org/lists/html/Bacula-users/2011-01/msg00308.html


--
WhatsUp Gold - Download Free Network Management Software
The most intuitive, comprehensive, and cost-effective network 
management toolset available today.  Delivers lowest initial 
acquisition cost and overall TCO of any competing solution.
http://p.sf.net/sfu/whatsupgold-sd
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] It's been 16 days..

2011-04-20 Thread Graham Keeling
On Wed, Apr 20, 2011 at 12:31:44PM -0400, Dan Langille wrote:
 Hi, 
 
 My name is Dan, and it's been 16 days since I last touched my bacula-dir.conf 
 file. 

Haha! This made me laugh like a drain. :)


--
Benefiting from Server Virtualization: Beyond Initial Workload 
Consolidation -- Increasing the use of server virtualization is a top
priority. Virtualization can reduce costs, simplify management, and improve 
application availability and disaster protection. Learn more about boosting 
the value of server virtualization. http://p.sf.net/sfu/vmware-sfdev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] incremental backup

2011-04-14 Thread Graham Keeling
On Thu, Apr 14, 2011 at 11:33:27AM +0400, ruslan usifov wrote:
 Hello
 
 I'm new in bacula world so have a question:
 
 If I do an incremental backup and, for example, only a few bytes change in a
 very big file, does bacula send the whole file, or only the changed part?

Bacula will send the whole file.




--
Benefiting from Server Virtualization: Beyond Initial Workload 
Consolidation -- Increasing the use of server virtualization is a top
priority. Virtualization can reduce costs, simplify management, and improve 
application availability and disaster protection. Learn more about boosting 
the value of server virtualization. http://p.sf.net/sfu/vmware-sfdev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] incremental backup

2011-04-14 Thread Graham Keeling
On Thu, Apr 14, 2011 at 08:48:37AM +0100, Gavin McCullagh wrote:
 Hi,
 
 On Thu, 14 Apr 2011, ruslan usifov wrote:
 
  I'm new in bacula world so have a question:
  
  If I do an incremental backup and, for example, only a few bytes change in a
  very big file, does bacula send the whole file, or only the changed part?
 
 It backs up the whole file each time a single byte or more changes.

To be more accurate:
It backs up the whole file when its timestamp changes, though I don't recall
whether it is the mtime or the ctime. It might be configurable.
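If it is configurable, it will be via the fileset Options resource; I believe there is an mtimeonly directive, something like the sketch below (check the directive name and default in the manual for your version):

```
FileSet {
  Name = "Example"
  Include {
    Options {
      signature = MD5
      mtimeonly = yes   # compare st_mtime only, ignoring st_ctime
    }
    File = /home
  }
}
```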


--
Benefiting from Server Virtualization: Beyond Initial Workload 
Consolidation -- Increasing the use of server virtualization is a top
priority. Virtualization can reduce costs, simplify management, and improve 
application availability and disaster protection. Learn more about boosting 
the value of server virtualization. http://p.sf.net/sfu/vmware-sfdev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Overwriting expired File Volume

2011-02-08 Thread Graham Keeling
On Mon, Feb 07, 2011 at 01:13:58PM -0500, Josh Fisher wrote:
 On 2/7/2011 10:43 AM, Graham Keeling wrote:
 It occurs to me that I might prefer to run the 'truncate all' command via
 a cron job, at some point in the day that bacula doesn't know about.

 Is there a reason, or reasons, that I should not do this?

 The documentation states "Be sure that your storage device is idle when
 you decide to run this command."

Does anybody know what happens if it is not?

If it just means that the right volume doesn't get truncated, that is not so
bad because I can run the command again later.

If it can cause a volume to be truncated that shouldn't be truncated, then
I wouldn't want to risk it at all.


--
The ultimate all-in-one performance toolkit: Intel(R) Parallel Studio XE:
Pinpoint memory and threading errors before they happen.
Find and fix more than 250 security defects in the development cycle.
Locate bottlenecks in serial and parallel code that limit performance.
http://p.sf.net/sfu/intel-dev2devfeb
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Overwriting expired File Volume

2011-02-07 Thread Graham Keeling
On Mon, Feb 07, 2011 at 06:43:17AM -0500, Phil Stracchino wrote:
 On 02/07/11 04:11, xunil321 wrote:
  
  I have a lack of understanding concerning the handling of a disk based file
  volume to
  avoid a disk overflow. Let's say there is a 100MB file volume Volume-1.
  What will be 
  happen after the Volume Retention Period when the Recycle and AutoPrune are
  set to 
  Yes in our Pool setup?
  Will Bacula 5.0.2 clean the 100MB contents of the file Volume-1 so that it
  will start from
  scratch or do i have to delete/relabel the volume manually for further
  usage?
 
 Starting with 5.0.x, there is a feature by which purged disk volumes can
 be automatically truncated.  This feature was BROKEN AND UNSAFE in
 5.0.1.  It is my recollection that it is fixed, and works properly, in
 5.0.3...  but as with all things, test first before you put it into
 production, just to be sure.

I can save you some testing and tell you that this feature doesn't work because
it doesn't truncate automatically.

See the last comment here:

http://sourceforge.net/apps/wordpress/bacula/2010/02/01/new-actiononpurge-feature/


--
The modern datacenter depends on network connectivity to access resources
and provide services. The best practices for maximizing a physical server's
connectivity to a physical network are well understood - see how these
rules translate into the virtual world? 
http://p.sf.net/sfu/oracle-sfdevnlfb
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Overwriting expired File Volume

2011-02-07 Thread Graham Keeling
On Mon, Feb 07, 2011 at 06:53:50AM -0500, Dan Langille wrote:
 On 2/7/2011 4:11 AM, xunil321 wrote:
 
  I have a lack of understanding concerning the handling of a disk based file
  volume to
  avoid a disk overflow.
 
 A simple strategy: Max Num Volumes * Max Vol Space = amount of space you 
 want to use.  Why not use that?
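For anyone following along, that capping strategy looks something like this in the Pool resource (a sketch only; here 10 volumes of 5G each cap the pool at roughly 50G):

```
Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Maximum Volumes = 10          # cap the number of volumes...
  Maximum Volume Bytes = 5G     # ...and the size of each one
  Volume Retention = 30 days
}
```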

This thread gives some good reasons:
http://marc.info/?t=12961209911&r=1&w=2

(start with this post):
http://marc.info/?l=bacula-users&m=129612717507818&w=2

(end with this post):
http://marc.info/?l=bacula-users&m=129621174413003&w=2


--
The modern datacenter depends on network connectivity to access resources
and provide services. The best practices for maximizing a physical server's
connectivity to a physical network are well understood - see how these
rules translate into the virtual world? 
http://p.sf.net/sfu/oracle-sfdevnlfb
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Overwriting expired File Volume

2011-02-07 Thread Graham Keeling
On Mon, Feb 07, 2011 at 10:09:12AM -0500, Josh Fisher wrote:
 On 2/7/2011 6:54 AM, Graham Keeling wrote:
 On Mon, Feb 07, 2011 at 06:43:17AM -0500, Phil Stracchino wrote:
 On 02/07/11 04:11, xunil321 wrote:
 I have a lack of understanding concerning the handling of a disk based file
 volume to
 avoid a disk overflow. Let's say there is a 100MB file volume Volume-1.
 What will be
 happen after the Volume Retention Period when the Recycle and AutoPrune are
 set to
 Yes in our Pool setup?
 Will Bacula 5.0.2 clean the 100MB contents of the file Volume-1 so that 
 it
 will start from
 scratch or do i have to delete/relabel the volume manually for further
 usage?
 Starting with 5.0.x, there is a feature by which purged disk volumes can
 be automatically truncated.  This feature was BROKEN AND UNSAFE in
 5.0.1.  It is my recollection that it is fixed, and works properly, in
 5.0.3...  but as with all things, test first before you put it into
 production, just to be sure.
 I can save you some testing and tell you that this feature doesn't work 
 because
 it doesn't truncate automatically.

 See the last comment here:

 http://sourceforge.net/apps/wordpress/bacula/2010/02/01/new-actiononpurge-feature/

 It does work with 5.0.3. A simple console command, 'purge volume  
 action=all pool=pool_name', can easily be scripted with the RunScript  
 directive. I run it as a run after script in my catalog backup job,  
 but it can be run before a particular job, etc. just as easily.

I stand corrected! I shall try this out, one day.
So I need a RunScript on every job, which runs the truncate commands for each
pool that the job uses.
Thank you.
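Presumably something of this shape in each Job (untested on my side; I believe the RunScript resource accepts a Console directive in 5.x, but check the syntax, and the pool name here is just an example):

```
Job {
  Name = "BackupClient1"
  JobDefs = "DefaultJob"
  RunScript {
    RunsWhen = After
    RunsOnClient = No                               # runs on the director
    Console = "purge volume action=all pool=File"   # pool used by this job
  }
}
```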


--
The modern datacenter depends on network connectivity to access resources
and provide services. The best practices for maximizing a physical server's
connectivity to a physical network are well understood - see how these
rules translate into the virtual world? 
http://p.sf.net/sfu/oracle-sfdevnlfb
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Overwriting expired File Volume

2011-02-07 Thread Graham Keeling
On Mon, Feb 07, 2011 at 03:31:20PM +, Graham Keeling wrote:
 On Mon, Feb 07, 2011 at 10:09:12AM -0500, Josh Fisher wrote:
  On 2/7/2011 6:54 AM, Graham Keeling wrote:
  On Mon, Feb 07, 2011 at 06:43:17AM -0500, Phil Stracchino wrote:
  On 02/07/11 04:11, xunil321 wrote:
  I have a lack of understanding concerning the handling of a disk based 
  file
  volume to
  avoid a disk overflow. Let's say there is a 100MB file volume Volume-1.
  What will be
  happen after the Volume Retention Period when the Recycle and AutoPrune 
  are
  set to
  Yes in our Pool setup?
  Will Bacula 5.0.2 clean the 100MB contents of the file Volume-1 so 
  that it
  will start from
  scratch or do i have to delete/relabel the volume manually for further
  usage?
  Starting with 5.0.x, there is a feature by which purged disk volumes can
  be automatically truncated.  This feature was BROKEN AND UNSAFE in
  5.0.1.  It is my recollection that it is fixed, and works properly, in
  5.0.3...  but as with all things, test first before you put it into
  production, just to be sure.
  I can save you some testing and tell you that this feature doesn't work 
  because
  it doesn't truncate automatically.
 
  See the last comment here:
 
  http://sourceforge.net/apps/wordpress/bacula/2010/02/01/new-actiononpurge-feature/
 
  It does work with 5.0.3. A simple console command, 'purge volume  
  action=all pool=pool_name', can easily be scripted with the RunScript  
  directive. I run it as a run after script in my catalog backup job,  
  but it can be run before a particular job, etc. just as easily.
 
 I stand corrected! I shall try this out, one day.
 So I need a RunScript on every job, which runs the truncate commands for each
 pool that the job uses.
 Thank you.

It occurs to me that I might prefer to run the 'truncate all' command via
a cron job, at some point in the day that bacula doesn't know about.

Is there a reason, or reasons, that I should not do this?


--
The modern datacenter depends on network connectivity to access resources
and provide services. The best practices for maximizing a physical server's
connectivity to a physical network are well understood - see how these
rules translate into the virtual world? 
http://p.sf.net/sfu/oracle-sfdevnlfb
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Why won't my volume truncate?

2011-02-07 Thread Graham Keeling
Hello,

I have just been trying to use the bacula truncate command on disk volumes.

I set 'ActionOnPurge = truncate' on all my Pools.
I ran a bconsole update command to update the field in the database.
This didn't update the field for volumes that were already purged, so I used
mysql to update the 'ActionOnPurge' field for them.
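For the record, the manual statement was along these lines (written against the 5.0.x MySQL schema; back up the catalog before poking at it directly):

```sql
-- Mark volumes that were already purged so that a later
-- 'purge volume action=truncate' will pick them up.
UPDATE Media SET ActionOnPurge=1 WHERE VolStatus='Purged';
```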

Then I ran this command:

purge volume action=truncate storage=Tower 1 allpools

This truncated lots of purged volumes, but on closer inspection, I find that I
have one volume that hasn't been truncated. Re-running the command doesn't
help.


I have also attempted to go through the bconsole menus to pick the specific
pool, like this:

*purge volume action=truncate

The defined Storage resources are:
 1: Tower 1
 2: Tower 2
Select Storage resource (1-4): 1
The defined Pool resources are:
 1: Tower 1
 2: Tower 2
Select Pool resource (1-19): 1
No volume founds to perform truncate action(s)

bconsole 'list volumes' gives this:
(lots of volumes that did truncate, then last in the list for the pool:)
| 124 | backup-0124 | Purged|   1 |  227 |0 | 0 |   1 |0 | 0 | Tower 1   | 2010-12-25 23:49:11 |

mysql> select MediaId,VolumeName,ActionOnPurge,VolStatus,VolBytes,StorageId
from Media where MediaId=124;
+-+-+---+---+--+---+
| MediaId | VolumeName  | ActionOnPurge | VolStatus | VolBytes | StorageId |
+-+-+---+---+--+---+
| 124 | backup-0124 | 1 | Purged|  227 | 1 | 
+-+-+---+---+--+---+

volumes # ls -l backup-0124 
-rw-r- 1 admin-bacula-sd admin-bacula-sd 9160873 Feb  4 00:03 backup-0124


So, any suggestions?




Re: [Bacula-users] Restoring old file - Filling DB with FileRecords using bscan

2011-02-03 Thread Graham Keeling
On Thu, Feb 03, 2011 at 03:21:35PM +0100, Richard Marnau wrote:
   On Wed, 2 Feb 2011 16:13:49 +0100, Richard Marnau said:
  
   2. = Bscan the related archives ===
  
   bscan -p -s -v -P  -h localhost Backupserver -V
  Vol0039\|Vol0045\|Vol0094
  
   Records would have been added or updated in the catalog:
 3 Media
 3 Pool
54 Job
352620 File
   root@mond:~# bconsole
   Connecting to Director localhost:9101
  
   === But still I cannot browse the files on the volume...
  
   For one or more of the JobIds selected, no files were found,
   so file selection is not possible. Most likely your retention policy
  pruned the files.
  
  It could be this bug:
  
  http://www.mail-archive.com/bacula-
  us...@lists.sourceforge.net/msg44944.html
  
 
 Martin, thanks for the hint. I will check that ..
 
   == Could it be related to the following output?
  
   bscan: bscan.c:637 End of all Volumes. VolFiles=10 VolBlocks=0
  VolBytes=44,246,214,007
   02-Feb 15:27 bscan JobId 0: Error: 47 block read errors not printed.
  
  That might cause a few missing files, but the majority should be
  available.
  Are these tape or disk volumes?
 
 Disk Volumes on an smb share (software raid1).
 I already checked the hard drives, but they seem to be fine.
 
 Something is very odd with my database. I walk through yesterday backups but 
 not even 1 week old volumes.
 
 ===
 Select the Client (1-25): 18
 Automatically selected FileSet: FullWindows
 +---+---+--++-+-+
 | JobId | Level | JobFiles | JobBytes   | StartTime   | 
 VolumeName  |
 +---+---+--++-+-+
 | 1,631 | F |  232,077 | 63,158,041,709 | 2011-01-03 00:46:38 | 
 Yearly0887  |
 | 1,631 | F |  232,077 | 63,158,041,709 | 2011-01-03 00:46:38 | 
 Yearly0898  |
 | 1,915 | D |   12,237 |  5,763,031,019 | 2011-01-31 00:11:23 | 
 Monthly0901 |
 +---+---+--++-+-+
 You have selected the following JobIds: 1631,1915
 
 Building directory tree for JobId(s) 1631,1915 ...  ++
 --
 
 For one or more of the JobIds selected, no files were found,
 so file selection is not possible.
 Most likely your retention policy pruned the files.
 
 ===
 I did now setup a new bacula server and try rebuild the database using bscan.
 It's a good exercise anyway, but I really need to fix and understand the main 
 problem.
 There is nothing worse than a unreliable backup system.

What have you got set for 'File retention'?




Re: [Bacula-users] Restoring old file - Filling DB with FileRecords using bscan

2011-02-03 Thread Graham Keeling
On Thu, Feb 03, 2011 at 03:52:15PM +0100, Richard Marnau wrote:
  On Thu, Feb 03, 2011 at 03:21:35PM +0100, Richard Marnau wrote:
 On Wed, 2 Feb 2011 16:13:49 +0100, Richard Marnau said:

 2. = Bscan the related archives ===

 bscan -p -s -v -P  -h localhost Backupserver -V
Vol0039\|Vol0045\|Vol0094

 Records would have been added or updated in the catalog:
   3 Media
   3 Pool
  54 Job
  352620 File
 root@mond:~# bconsole
 Connecting to Director localhost:9101

 === But still I cannot browse the files on the volume...

 For one or more of the JobIds selected, no files were found,
 so file selection is not possible. Most likely your retention
  policy
pruned the files.
   
It could be this bug:
   
http://www.mail-archive.com/bacula-
us...@lists.sourceforge.net/msg44944.html
   
  
   Martin, thanks for the hint. I will check that ..
  
 == Could it be related to the following output?

 bscan: bscan.c:637 End of all Volumes. VolFiles=10 VolBlocks=0
VolBytes=44,246,214,007
 02-Feb 15:27 bscan JobId 0: Error: 47 block read errors not
  printed.
   
That might cause a few missing files, but the majority should be
available.
Are these tape or disk volumes?
  
   Disk Volumes on an smb share (software raid1).
   I already checked the hard drives, but they seem to be fine.
  
   Something is very odd with my database. I walk through yesterday
  backups but not even 1 week old volumes.
  
   ===
   Select the Client (1-25): 18
   Automatically selected FileSet: FullWindows
   +---+---+--++-+--
  ---+
   | JobId | Level | JobFiles | JobBytes   | StartTime   |
  VolumeName  |
   +---+---+--++-+--
  ---+
   | 1,631 | F |  232,077 | 63,158,041,709 | 2011-01-03 00:46:38 |
  Yearly0887  |
   | 1,631 | F |  232,077 | 63,158,041,709 | 2011-01-03 00:46:38 |
  Yearly0898  |
   | 1,915 | D |   12,237 |  5,763,031,019 | 2011-01-31 00:11:23 |
  Monthly0901 |
   +---+---+--++-+--
  ---+
   You have selected the following JobIds: 1631,1915
  
   Building directory tree for JobId(s) 1631,1915 ...  ++
   --
  
   For one or more of the JobIds selected, no files were found,
   so file selection is not possible.
   Most likely your retention policy pruned the files.
  
   ===
   I did now setup a new bacula server and try rebuild the database
  using bscan.
   It's a good exercise anyway, but I really need to fix and understand
  the main problem.
   There is nothing worse than a unreliable backup system.
  
  What have you got set for 'File retention'?
 
 From the pool:
 
   Volume Retention = 370 days # one year
   File Retention = 370 days   # one year
   Job Retention = 370 days# one year
 
 From bconsole:
 
 Pool: YearlyRotation
 +-++---+-++--+--+-+--+---+---+-+
 | MediaId | VolumeName | VolStatus | Enabled | VolBytes   | VolFiles | 
 VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten |
 +-++---+-++--+--+-+--+---+---+-+
 | 865 | Vol0865| Used  |   1 |509,904,466 |0 |   
 11,232,000 |   0 |0 | 0 | File  | 2010-11-03 23:50:40 |
 | 887 | Yearly0887 | Full  |   1 | 53,687,038,081 |   12 |   
 31,968,000 |   1 |0 | 0 | File  | 2011-01-03 03:18:35 |
 | 894 | Yearly0894 | Full  |   1 | 53,687,043,318 |   12 |   
 31,968,000 |   1 |0 | 0 | File  | 2010-12-06 02:06:16 |
 | 895 | Yearly0895 | Full  |   1 | 53,687,032,874 |   12 |   
 31,968,000 |   1 |0 | 0 | File  | 2011-01-03 00:39:27 |
 | 898 | Yearly0898 | Append|   1 | 23,144,111,901 |5 |   
 31,968,000 |   1 |0 | 0 | File  | 2011-01-29 02:50:53 |
 +-++---+-++--+--+-+--+---+---+-+
 
 From the Job:
 
 03-Jan 03:53 mond-dir JobId 1631: Begin pruning Jobs older than 1 year 5 days 
 .
 03-Jan 03:53 mond-dir JobId 1631: No Jobs found to prune.
 03-Jan 03:53 mond-dir JobId 1631: Begin pruning Jobs.
 03-Jan 03:53 mond-dir JobId 1631: No Files found to prune.
 03-Jan 03:53 mond-dir JobId 1631: End auto prune.
 
 
 ... that's what I don't get. I probably miss something, but where has the file 
 index gone ?!?!

Do the file retentions look correct in the mysql database?
What do you get in the Retention columns 

Re: [Bacula-users] Restoring old file - Filling DB with FileRecords using bscan

2011-02-03 Thread Graham Keeling
On Thu, Feb 03, 2011 at 04:40:58PM +0100, Richard Marnau wrote:
  On Thu, Feb 03, 2011 at 03:52:15PM +0100, Richard Marnau wrote:
On Thu, Feb 03, 2011 at 03:21:35PM +0100, Richard Marnau wrote:
   On Wed, 2 Feb 2011 16:13:49 +0100, Richard Marnau said:
  
   2. = Bscan the related archives ===
  
   bscan -p -s -v -P  -h localhost Backupserver -V
  Vol0039\|Vol0045\|Vol0094
  
   Records would have been added or updated in the catalog:
 3 Media
 3 Pool
54 Job
352620 File
   root@mond:~# bconsole
   Connecting to Director localhost:9101
  
   === But still I cannot browse the files on the
  volume...
  
   For one or more of the JobIds selected, no files were found,
   so file selection is not possible. Most likely your retention
policy
  pruned the files.
 
  It could be this bug:
 
  http://www.mail-archive.com/bacula-
  us...@lists.sourceforge.net/msg44944.html
 

 Martin, thanks for the hint. I will check that ..

   == Could it be related to the following output?
  
   bscan: bscan.c:637 End of all Volumes. VolFiles=10
  VolBlocks=0
  VolBytes=44,246,214,007
   02-Feb 15:27 bscan JobId 0: Error: 47 block read errors not
printed.
 
  That might cause a few missing files, but the majority should
  be
  available.
  Are these tape or disk volumes?

 Disk Volumes on an smb share (software raid1).
 I already checked the hard drives, but they seem to be fine.

 Something is very odd with my database. I walk through yesterday
backups but not even 1 week old volumes.

 ===
 Select the Client (1-25): 18
 Automatically selected FileSet: FullWindows
 +---+---+--++
  -+--
---+
 | JobId | Level | JobFiles | JobBytes   | StartTime
  |
VolumeName  |
 +---+---+--++
  -+--
---+
 | 1,631 | F |  232,077 | 63,158,041,709 | 2011-01-03 00:46:38
  |
Yearly0887  |
 | 1,631 | F |  232,077 | 63,158,041,709 | 2011-01-03 00:46:38
  |
Yearly0898  |
 | 1,915 | D |   12,237 |  5,763,031,019 | 2011-01-31 00:11:23
  |
Monthly0901 |
 +---+---+--++
  -+--
---+
 You have selected the following JobIds: 1631,1915

 Building directory tree for JobId(s) 1631,1915 ...  ++
 --

 For one or more of the JobIds selected, no files were found,
 so file selection is not possible.
 Most likely your retention policy pruned the files.

 ===
 I did now setup a new bacula server and try rebuild the database
using bscan.
 It's a good exercise anyway, but I really need to fix and
  understand
the main problem.
 There is nothing worse than a unreliable backup system.
   
What have you got set for 'File retention'?
  
   From the pool:
  
 Volume Retention = 370 days # one year
 File Retention = 370 days   # one year
 Job Retention = 370 days# one year
  
   From bconsole:
  
   Pool: YearlyRotation
   +-++---+-++--
  +--+-+--+---+---+--
  ---+
   | MediaId | VolumeName | VolStatus | Enabled | VolBytes   |
  VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType |
  LastWritten |
   +-++---+-++--
  +--+-+--+---+---+--
  ---+
   | 865 | Vol0865| Used  |   1 |509,904,466 |
  0 |   11,232,000 |   0 |0 | 0 | File  | 2010-11-03
  23:50:40 |
   | 887 | Yearly0887 | Full  |   1 | 53,687,038,081 |
  12 |   31,968,000 |   1 |0 | 0 | File  | 2011-01-03
  03:18:35 |
   | 894 | Yearly0894 | Full  |   1 | 53,687,043,318 |
  12 |   31,968,000 |   1 |0 | 0 | File  | 2010-12-06
  02:06:16 |
   | 895 | Yearly0895 | Full  |   1 | 53,687,032,874 |
  12 |   31,968,000 |   1 |0 | 0 | File  | 2011-01-03
  00:39:27 |
   | 898 | Yearly0898 | Append|   1 | 23,144,111,901 |
  5 |   31,968,000 |   1 |0 | 0 | File  | 2011-01-29
  02:50:53 |
   +-++---+-++--
  +--+-+--+---+---+--
  ---+
  
   From the Job:
  
   03-Jan 03:53 mond-dir JobId 1631: Begin pruning Jobs older than 1
  year 5 days .
   03-Jan 03:53 mond-dir JobId 1631: No Jobs found to prune.
   03-Jan 03:53 mond-dir JobId 1631: Begin pruning Jobs.
   03-Jan 03:53 mond-dir 

Re: [Bacula-users] dump my configuration?

2011-01-31 Thread Graham Keeling
On Sun, Jan 30, 2011 at 09:48:38PM -0500, John Drescher wrote:
 On Sun, Jan 30, 2011 at 12:59 PM, hymie! hy...@lactose.homelinux.net wrote:
 
  Short question -- can a running Bacula Console dump out what it thinks
  my configuration is?
 
  Long question -- my Bacula installation is acting a little weird.
  I'm using File Storage.  A couple of backups didn't have the right name,
  and one of those didn't auto-label the backup file.
 
  Most of my backups worked fine; just a couple acted weird.
 
  I don't see anything wrong with my setup, but clearly, there's a
  mistake.  So I'm wondering if I can get Bacula to tell me what it
  thinks my configuration is, so I can compare it to my config file
  and find the discrepancy.
 
 
 I am confused at the question. I mean you should have edited the
 configuration. And it is a simple text file. If you do not know what
 it is how on earth did you setup bacula?

Bacula writes some settings into database objects, so if you later change your
config files, those changes may well not take effect on existing objects - even
after a reload/restart.
Perhaps that is what the problem is.
But I don't have an answer to the question, except to suggest looking directly
at either the database fields that you think might be causing your problem, or
at the config files.
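For example, something like this run against the catalog database shows what
the director actually has stored for each pool, which you can then compare with
bacula-dir.conf (column names are from memory, so check them against your
schema first):

```sql
-- Values here survive a reload/restart even if bacula-dir.conf changes,
-- so mismatches with the config file are what you are looking for.
SELECT PoolId, Name, LabelFormat, VolRetention, MaxVolBytes, MaxVols
FROM Pool;
```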




Re: [Bacula-users] dump my configuration?

2011-01-31 Thread Graham Keeling
On Mon, Jan 31, 2011 at 10:32:32AM -0500, hymie! wrote:
 What it is doing that is weird:
 
 There are 6 other backup jobs in addition to the GreatPlains-Backup job
 listed above.  The other backups all work correctly.  However, the
 GreatPlains-Backup job:
 (*) did not name the Volume correctly
 GreatPlains-Backup.2011-01-29_23.05.00_08
 instead of
 GreatPlains-Backup.2011-01-29_23.05.00_08-Full

I have a suggestion for this first problem.

As I said earlier, if you change your configuration file and reload/restart,
bacula doesn't update the mysql database without further intervention.
This may be the case for your labels.

So, my suggestion is to try looking at the output of this sql command:

select PoolId, Name, LabelFormat from Pool;

If the output doesn't match what is in your configuration, you might need
to run 'update somethingorother' from bconsole.




Re: [Bacula-users] How to organize File-Bases Volumes...

2011-01-27 Thread Graham Keeling
On Thu, Jan 27, 2011 at 05:42:10AM -0500, Phil Stracchino wrote:
 On 01/27/11 04:29, ml ml wrote:
  Hello List,
  
  I have this Pool definition with autolabeling.
  
  Pool {
Name = File
Pool Type = Backup
Recycle = yes   # Bacula can automatically recycle 
  Volumes
AutoPrune = yes # Prune expired volumes
Volume Retention = 21 days # one year
Maximum Volume Bytes = 1M  # Limit Volume size to something 
  reasonable
Maximum Volumes = 500   # Limit number of Volumes in Pool
Label Format = Volume-
  }
 
 Um, 1Mb volume size limit is reasonable on which planet?  Perhaps if
 you're running a computer that boots from floppy disks.
 
  The big disadvantage is that i will run out of Volumes soon since each
  Backup Job will create a Volume, even if its only 1MB big.
  Can i set Maximum Volumes = 64000. I guess not since autolabeling
  starts with -0001.
  
  I also failed to reuse the old Volumes since they are on one Pool
  but have diffrent Archive Device paths.
  
  
  Any idea how to solve this?
 
 Well, I'd start by setting a more realistic Volume size limit.  At one
 megabyte, you're going to need more than one volume to hold many single
 files.  For instance, a typical Linux kernel will occupy two to four
 volumes.  That's like backing up to 3.5 floppies.  Purely apart from
 the insane number of Volumes it'll create, the overhead of constantly
 creating, opening and closing volumes will kill your throughput.
 
 With your settings, setting volume size to something saner like, say,
 10GB or more would solve most of your problems.  But frankly, setting
 size limits is far from the best way to handle disk volumes in the first
 place.  You're much better off to specify a Maximum Volume Jobs
 sufficient to hold a day's or a weeks' worth of backups, or setting a
 Volume Use Duration just short of 24 hours or 7 days (or however long
 you want to use a single Volume before moving on to the next one).

I was thinking about something related to this the other day.

If you wanted to manage the space on your disk, you might decide to
limit the number of equal-sized volumes that bacula can have.

But how big do you make your equal-sized volumes?
Assuming one job per volume...
If you make them too big (like 10GB or more each), you will inevitably
waste disk space because if your backup was small (an incremental might well
be expected to be a few MB), a whole 10GB will be used up.
If you make them too small (like 1MB each, which makes about 1 million volumes
on a terabyte disk), it seems quite likely that you will start hitting some
sort of disk access overhead problem, or that the database will be
unmanageable, or that bacula will slow down massively on particular
database queries.
So, I don't know what a reasonable size would be.

Perhaps 10GB becomes more reasonable if you allow more than one job per volume,
but then you still have the problem of disk space being wasted because you
cannot purge the volume until the last small 1MB job passes its retention time.

I think this last problem is what Phil is trying to solve by setting either
Maximum Volume Jobs or Volume Use Duration. But these solutions seem
unsatisfactory for disks (I can't comment on tapes because I don't know enough
about them).
You are wasting space if the volume is not full up by the time Volume Use
Duration expires.
And you are wasting space if the volume is not full up with Maximum Volume
Jobs.




Re: [Bacula-users] How to organize File-Bases Volumes...

2011-01-27 Thread Graham Keeling
On Thu, Jan 27, 2011 at 06:26:04AM -0500, Phil Stracchino wrote:
 On 01/27/11 06:12, Graham Keeling wrote:
  Perhaps 10GB becomes more reasonable if you allow more than one job per 
  volume,
  but then you still have the problem of disk space being wasted because you
  cannot purge the volume until the last small 1MB job passes its retention 
  time.
  
  I think this last problem is what Phil is trying to solve by setting either
  Maximum Volume Jobs or Volume Use Duration. But these solutions seem
  unsatisfactory for disks (I can't comment on tapes because I don't know 
  enough
  about them).
  You are wasting space if the volume is not full up by the time Volume Use
  Duration expires.
 
 Define full.  Disk volumes aren't like packing crates.  They're more
 like balloons.  They grow as you add data to them.
 
  And you are wasting space if the volume is not full up with Maximum Volume
  Jobs.
 
 Again, how is it wasting space?  A disk volume doesn't preallocate
 space to hold the maximum size of the number of jobs you might write
 into it.  A disk volume consumes only as much space as the data you
 wrote into it.

The idea is to split the disk up into equal-sized chunks. 
In this scenario, you have specified a number of volumes, and a maximum size
for each volume. For example, on a terabyte disk, you might define 100 volumes,
10GB each.

If you don't use the full 10GB in each volume, the space that you don't use
is wasted.

You can't use the unused space for some other application, because the volume
might get purged, recycled, and then the next job that writes to it wants to
use all 10GB - you will then find that it can't.

You can't just add more equal-sized volumes, because then you'll have more
potential for running out of space. The maximum space that your
volumes can balloon to will be bigger than your disk.




Re: [Bacula-users] How to organize File-Bases Volumes...

2011-01-27 Thread Graham Keeling
On Thu, Jan 27, 2011 at 12:59:37PM +0100, ml ml wrote:
 On Thu, Jan 27, 2011 at 12:26 PM, Phil Stracchino ala...@metrocast.net 
 wrote:
  On 01/27/11 06:12, Graham Keeling wrote:
  Perhaps 10GB becomes more reasonable if you allow more than one job per 
  volume,
  but then you still have the problem of disk space being wasted because you
  cannot purge the volume until the last small 1MB job passes its retention 
  time.
 
  I think this last problem is what Phil is trying to solve by setting either
  Maximum Volume Jobs or Volume Use Duration. But these solutions seem
  unsatisfactory for disks (I can't comment on tapes because I don't know 
  enough
  about them).
  You are wasting space if the volume is not full up by the time Volume Use
  Duration expires.
 
  Define full.  Disk volumes aren't like packing crates.  They're more
  like balloons.  They grow as you add data to them.
 
  And you are wasting space if the volume is not full up with Maximum Volume
  Jobs.
 
  Again, how is it wasting space?  A disk volume doesn't preallocate
  space to hold the maximum size of the number of jobs you might write
  into it.  A disk volume consumes only as much space as the data you
  wrote into it.
 
 If i set Maximum Volume Bytes = 10GB then it uses the full 10GB at the
 very first run as full-backup. But the increments later get always
 written to new Volume Files. E.g.
 Volume-0001: 10GB (from full run)
 Volume-0002: 10GB (from full run)
 Volume-0003: 10GB (from full run)
 Volume-0004: 126MB (from full run)
 Volume-0004: 7MB (incremental run later on...)
 
 I would have expected, that it would ALWAYS append to my volume until
 it reaches its Maximum Volume Bytes (here 10GB) limit.
 
 Again, here is my Pool Definition:
 
 Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle 
 Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 14 days
  Maximum Volume Bytes = 10GB
  Maximum Volumes = 500
  Label Format = Volume-
 
 }

I first thought that you needed to set 'Maximum Volume Jobs'. But the
bacula-5.0.3 source seems to indicate that the default is 0, which means
no limit.

Perhaps bacula will use all the empty volumes first, then start writing
additional jobs to them.


So, my two suggestions are:
a) Define 'Maximum Volume Jobs' and see what happens.
b) Set 'Maximum Volumes' to be small and see what happens.
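As a concrete sketch of suggestion (a), based on the pool definition quoted
above (the value of 10 is an arbitrary example, not a recommendation):

```
Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 14 days
  Maximum Volume Bytes = 10GB
  Maximum Volume Jobs = 10    # arbitrary example: close the volume after
                              # 10 jobs so bacula moves on to the next one
  Maximum Volumes = 500
  Label Format = Volume-
}
```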




Re: [Bacula-users] How to organize File-Bases Volumes...

2011-01-27 Thread Graham Keeling
On Thu, Jan 27, 2011 at 08:56:54AM -0500, Phil Stracchino wrote:
 On 01/27/11 07:33, Graham Keeling wrote:
  On Thu, Jan 27, 2011 at 06:26:04AM -0500, Phil Stracchino wrote:
  On 01/27/11 06:12, Graham Keeling wrote:
  I think this last problem is what Phil is trying to solve by setting 
  either
  Maximum Volume Jobs or Volume Use Duration. But these solutions seem
  unsatisfactory for disks (I can't comment on tapes because I don't know 
  enough
  about them).
  You are wasting space if the volume is not full up by the time Volume Use
  Duration expires.
 
  Define full.  Disk volumes aren't like packing crates.  They're more
  like balloons.  They grow as you add data to them.
  
  The idea is to split the disk up into equal-sized chunks. 
  In this scenario, you have specified a number of volumes, and a maximum size
  for each volume. For example, on a terabyte disk, you might define 100 
  volumes,
  10GB each.
  
  If you don't use the full 10GB in each volume, the space that you don't 
  use
  is wasted.
 
 No, it isn't.  Because it isn't used.  If you set maximum volume size to
 10GB, and you create a new volume and write a 5KB job to it, you have
 5KB of data in a 5KB volume, not 5KB of data in a 10GB volume.
 
 Now, if that volume fills, and gets purged, and you *keep the purged
 volume around consuming 10GB of disk space* until it gets reused, well,
 then you're wasting disk space, yes.  But that has absolutely nothing to
 do with what's governing the size of the volume.  It is always going to
 be the case with any purged disk volume.
 
 5.0.3 has a feature to truncate purged disk volumes which gets around
 this.  But the problem can equally easily be addressed by only using any
 given volume once - by whatever means you decide it's full - deleting
 used volumes as you purge them.

The truncate on purge feature does not work automatically - you have to do it
by hand - which makes it almost useless. See here:
http://sourceforge.net/apps/wordpress/bacula/2010/02/01/new-actiononpurge-feature/

  You can't use the unused space for some other application, because the 
  volume
  might get purged, recycled, and then the next job that writes to it wants to
  use all 10GB - you will then find that it can't.
 
 This simply neither reflects reality nor makes any sense.  I can't even
 understand what you're trying to say here.


I shall summarise what I thought I had already said, because it is quite clear
to me.


You have a terabyte disk that you want to use for backups.

You split it into 100 Volumes, set 10GB max volume size each, and 1 job
per volume.

All your backup jobs are 5KB.
You can then only use 500KB of disk space before you run out of volumes.


You have three obvious options:
a) Make more volumes, reduce the max sizes.
b) Make more volumes, keep the max sizes the same.
c) Increase the number of jobs per volume.


Problems:
(a): If you make the volumes too small, you get overhead/maintenance problems.

(b): You can easily run out of disk space because you have allocated more than
the size of the disk.

(c): It becomes very difficult to work out which volumes you can purge/recycle.




Re: [Bacula-users] How to organize File-Bases Volumes...

2011-01-27 Thread Graham Keeling
On Thu, Jan 27, 2011 at 09:45:29AM -0500, Phil Stracchino wrote:
 On 01/27/11 09:28, Graham Keeling wrote:
  I shall summarise what I thought I had already said, because it is quite 
  clear
  to me.
  
  You have a terabyte disk that you want to use for backups.
  
  You split it into 100 Volumes, set 10GB max volume size each, and 1 job
  per volume.
  
  All your backup jobs are 5KB.
  You can then only use 500KB of disk space before you run out of volumes.
 
 Assuming that you've also set max volumes = 100.
 
 However, I believe in this case the problem is summarized by PEBCAK.  If
 all of your backup jobs are 5KB, this would be a completely
 brain-damaged way to set up your storage.

Please don't be deliberately obtuse.
The point of the example is to use an extreme to easily demonstrate real
problems.
I used the specific figure of 5KB to provide continuity with your own example
in which you used the same figure.


The same problems exist in more realistic situations.

Assuming that I somehow know that all my backups will range from 100MB to 10GB,
then what should I set?

a) 10,000 volumes, 100MB max size?
b)   100 volumes,  10GB max size?

a) gives me wasted space when a backup is not a multiple of 100MB in size,
and possible overhead problems due to the number of volumes.
b) gives me wasted space when a backup is not a multiple of 10GB in size.
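To put rough numbers on (a) and (b), here is a hypothetical back-of-the-envelope
calculation (not bacula behaviour - it assumes one job per volume, with the job
occupying whole fixed-size volumes):

```python
# Hypothetical sketch: how much space the last fixed-size volume wastes,
# assuming one backup per run of volumes and nothing else sharing the
# remainder.
GB = 1024 ** 3
MB = 1024 ** 2

def wasted_bytes(backup_size, volume_size):
    """Unusable space left in the final volume a backup occupies."""
    remainder = backup_size % volume_size
    return 0 if remainder == 0 else volume_size - remainder

# A single 5.5GB backup under the two options:
print(wasted_bytes(int(5.5 * GB), 100 * MB) / MB)  # option (a), 100MB volumes -> 68.0
print(wasted_bytes(int(5.5 * GB), 10 * GB) / GB)   # option (b), 10GB volumes  -> 4.5
```

So the waste per backup is bounded by the volume size, which is why neither
extreme is comfortable.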




Re: [Bacula-users] How to organize File-Bases Volumes...

2011-01-27 Thread Graham Keeling
On Thu, Jan 27, 2011 at 11:15:59AM -0500, Phil Stracchino wrote:
 On 01/27/11 10:48, Graham Keeling wrote:
  The same problems exist in more realistic situations.
  
  Assuming that I somehow know that all my backups will range from 100MB to 
  10GB,
  then what should I set?
  
  a) 10,000 volumes, 100MB max size?
  b)   100 volumes,  10GB max size?
  
  a) gives me wasted space when a backup is not a multiple of 100MB in size,
  and possible overhead problems due to the number of volumes.
  b) gives me wasted space when a backup is not a multiple of 10GB in size.
 
 If you insist on limiting your number of volumes to the number of
 maximum-size volumes that would fill your disk and restricting the
 number of jobs allowed on a volume, yes, one could argue that it does.
 But any time your Bacula pool design answer involves Max Volume Jobs =
 1, the probability is extremely high that you started out by asking the
 wrong question.
 
 The problem here is not that the directives don't work, or that any
 particular method of governing volume size wastes space.  It's that ANY
 possible set of volume management options is capable of being misused to
 create pathological configurations that waste disk space.  If you only
 allow yourself 100 volumes on a 1TB disk, and then write 1GB to each one
 and complain that the remaining 900GB of disk space is wasted because
 you can't create any more volumes and can't append to any of the ones
 you have, you only have yourself to blame; you consciously, with malice
 aforethought, took careful aim and shot yourself in the foot with an
 elephant gun.

Well done.

Now follow the thinking through. Luckily, I have already done that for you.

Remember, at the start you had three obvious options:
a) Make more volumes, reduce the max sizes.
b) Make more volumes, keep the max sizes the same.
c) Increase the number of jobs per volume.

You have realised that (a) and (b) are not very good.

So you try (c) and increase the number of jobs per volume.

Your suggestion was to use either 'Maximum Volume Jobs' or 'Volume Use
Duration'.

'Maximum Volume Jobs' initially appears to work because each volume can now get
filled up to its allocated size. But now, it becomes very difficult to work
out which volumes you are able to purge / recycle.

'Volume Use Duration' prevents the allocated space being used up, so it also
wastes space.


So, at this point, I am stuck.
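
To put rough numbers on the waste trade-off in options (a) and (b), here is a toy calculation (mine, not from the thread): with one job written to a set of fixed-size volumes, the unused tail of the last volume is what gets wasted.

```python
def wasted_mb(backup_mb: int, vol_mb: int) -> int:
    """Unused space in the final fixed-size volume holding a backup,
    assuming the volumes are dedicated to this one job (no appending)."""
    return (-backup_mb) % vol_mb

# A 250MB backup under each scheme from the thread:
print(wasted_mb(250, 100))    # option (a), 100MB volumes -> 50MB wasted
print(wasted_mb(250, 10240))  # option (b), one 10GB volume -> 9990MB wasted
```

The larger the volume relative to the typical backup, the worse the average waste, which is the trade-off the thread is circling around.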


--
Special Offer-- Download ArcSight Logger for FREE (a $49 USD value)!
Finally, a world-class log management solution at an even better price-free!
Download using promo code Free_Logger_4_Dev2Dev. Offer expires 
February 28th, so secure your free ArcSight Logger TODAY! 
http://p.sf.net/sfu/arcsight-sfd2d
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] FW: Bacula help

2011-01-25 Thread Graham Keeling
On Tue, Jan 25, 2011 at 05:01:53PM -, Laxansh K. Adesara wrote:
 
 In addition, I did not set any password, so how come a password was assigned
 to the bacula director? Is there a default password for it?
 How can I configure it with a Windows client? And how can I make sure that
 both are connected?

I don't think that the director password in bacula-dir.conf is encrypted.
You can probably copy and paste it into the bacula-fd.conf on the client
(at least, this is what I have always been able to do on my setup).

At a guess, whether there is a default for the password probably depends upon
your distribution. But since you know where it is now, you might as well
change it if you don't like it.

Also, remember to restart your daemons when you make configuration changes.
If you are using a Windows client, you need to find the Services screen.

 Regards,
 Laxansh 
 -Original Message-
 From: Laxansh K. Adesara 
 Sent: 25 January 2011 16:58
 To: 'John Drescher'; 'j...@schaubroeck.be'
 Cc: 'bacula-users@lists.sourceforge.net'
 Subject: RE: [Bacula-users] Bacula help
 
 
 Hi,
 
 I have installed the bacula director on ubuntu by simply running the command
 'sudo apt-get install bacula', and then I installed the bacula client on a
 Windows machine, and when it asked me about director information I just
 filled out the ubuntu info.
 
 (
  Dir name : ubuntu-dir
  Dir address : IP address of ubuntu machine
  Dir Password: ? What am I supposed to enter here? Because by default it's
 an encrypted password in the bacula-dir.conf (bacula-director) file on my
 ubuntu machine.
 )
 
 Then I tried to start the bacula console from the Windows machine but I could
 not see the director, and it was not showing any error either, so I went to
 the event viewer and came to know about the following error:
 
 Bacula ERROR: , /home/kern/bacula/k/bacula/src/lib/bsock.c:135 Unable to 
 connect to Director daemon on 192.168.10.68:9101. ERR=The operation completed 
 successfully.
 
 If I did anything wrong then please let me know the exact procedure.
 
 I am trying to follow the procedure at
 http://www.devtoolshed.com/content/how-back-windows-bacula but before that I
 need to sort out the bacula-dir problem.
 
 
 Regards,
 Laxansh
 -Original Message-
 From: John Drescher [mailto:dresche...@gmail.com] 
 Sent: 17 January 2011 16:35
 To: Dan Langille
 Cc: Laxansh K. Adesara; j...@schaubroeck.be; 
 bacula-users@lists.sourceforge.net
 Subject: Re: [Bacula-users] Bacula help
 
  You will want that tape drive on the same system as your bacula-sd, which in
  your case is Ubuntu.  Start with bacula-dir, bacula-sd, and the database all
  on one server.  You will also want bacula-fd on that server so you can
  backup that server.
 
 Also for ubuntu make sure that none of the ip addresses in the
 configuration files are 127.0.0.1 or localhost. Instead use the
 external ip address. Debian derivatives tend to set bacula up this way
 by default but that prevents bacula from being used as a network
 backup since the only machine that can connect is the server itself.
 
 John
 
 --
 Special Offer-- Download ArcSight Logger for FREE (a $49 USD value)!
 Finally, a world-class log management solution at an even better price-free!
 Download using promo code Free_Logger_4_Dev2Dev. Offer expires 
 February 28th, so secure your free ArcSight Logger TODAY! 
 http://p.sf.net/sfu/arcsight-sfd2d
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users


--
Special Offer-- Download ArcSight Logger for FREE (a $49 USD value)!
Finally, a world-class log management solution at an even better price-free!
Download using promo code Free_Logger_4_Dev2Dev. Offer expires 
February 28th, so secure your free ArcSight Logger TODAY! 
http://p.sf.net/sfu/arcsight-sfd2d
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Best Practice to Backup Exchange Only?

2011-01-14 Thread Graham Keeling
On Fri, Jan 14, 2011 at 12:31:53PM +, Mister IT Guru wrote:
 Is there a best practice on how to back up an Exchange server? I have a
 number of them, and I would like to be able to back them up in backups  
 that I know are only exchange data. I know there is the plugin from  
 Equiinet -- I am currently looking into that, but I'm pretty sure that  
 someone has already done this, and I don't really want to reinvent the  
 wheel. If it hasn't, I'm sure there are a few people on here who  
 wouldn't mind throwing in some suggestions. So ... let open source 
 commence!

The plugin was funded by Equiinet.
Bacula Systems wrote it.

As far as I know (I work for Equiinet and have tested the plugin), the plugin
doesn't work.

The major problem is that you cannot restore from multiple incremental backups.
See unacknowledged bacula bug number 0001647, or google the mailing list
archives for something like 'bacula exchange keeling'.

Bacula Systems are working on a replacement plugin for their Enterprise
version.


Therefore, I would say that currently, the best way to back up an Exchange
server is via the usual VSS stuff. Put the path to the Program Files / Exchange
directory in your fileset.

Make sure that you test the restores before you start relying on it.
(Make sure you shut down Exchange before you restore the files).


--
Protect Your Site and Customers from Malware Attacks
Learn about various malware tactics and how to avoid them. Understand 
malware threats, the impact they can have on your business, and how you 
can protect your company and customers by using code signing.
http://p.sf.net/sfu/oracle-sfdevnl
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Best Practice to Backup Exchange Only?

2011-01-14 Thread Graham Keeling
On Fri, Jan 14, 2011 at 01:20:33PM +, Mister IT Guru wrote:
 On 14/01/2011 12:57, Graham Keeling wrote:
 On Fri, Jan 14, 2011 at 12:31:53PM +, Mister IT Guru wrote:
 Is there a best practice on how to back up an Exchange server? I have a
 number of them, and I would like to be able to back them up in backups
 that I know are only exchange data. I know there is the plugin from
 Equiinet -- I am currently looking into that, but I'm pretty sure that
 someone has already done this, and I don't really want to reinvent the
 wheel. If it hasn't, I'm sure there are a few people on here who
 wouldn't mind throwing in some suggestions. So ... let open source
 commence!
 The plugin was funded by Equiinet.
 Bacula Systems wrote it.

 As far as I know (I work for Equiinet and have tested the plugin), the plugin
 doesn't work.
 What? Oh no! Well, not everything is perfect, and thank you for letting  
 me know. I guess that explains why I couldn't find any examples around  
 the net.
 The major problem is that you cannot restore from multiple incremental 
 backups.
 See unacknowledged bacula bug number 0001647, or google the mailing list
 archives for something like 'bacula exchange keeling'.
 Yes, I would agree this is a major problem! I'll look up the bug when I  
 get a chance.

 Bacula Systems are working on a replacement plugin for their Enterprise
 version.

 Therefore, I would say that currently, the best way to back up an Exchange
 server is via the usual VSS stuff. Put the path to the Program Files /
 Exchange directory in your fileset.
 I have an email archiver - I was hoping to state in my network
 documentation that bacula backing up exchange was providing some
 redundancy. But the more I think about it, I'm thinking that I should
 concentrate on the data in the archiver, rather than the data in
 exchange.
 Make sure that you test the restores before you start relying on it.
 (Make sure you shut down Exchange before you restore the files).

 I'm getting some info from  
 http://wiki.bacula.org/doku.php?id=application_specific_backups#applications 
 but given that the exchange plugin doesn't work, how come that
 information hasn't made it into the wiki? (okay, rhetorical question)
 If the author is on the list, it could do with an update; same goes for
 myself, and also the community!

That page makes no mention of the plugin, so it is technically correct.

It talks about VSS, which is the stuff that does work.


--
Protect Your Site and Customers from Malware Attacks
Learn about various malware tactics and how to avoid them. Understand 
malware threats, the impact they can have on your business, and how you 
can protect your company and customers by using code signing.
http://p.sf.net/sfu/oracle-sfdevnl
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Question about shedule

2011-01-12 Thread Graham Keeling
On Wed, Jan 12, 2011 at 12:19:31PM +0100, Valerio Pachera wrote:
 Could you please help me understand how schedules work?
 
 Analyze this example from the wiki:
 
 Schedule {
 Name = ScheduleTest
 Run = Full 1st sun at 23:05
 Run = Differential 2nd-5th sun at 23:05
 Run = Incremental mon-sat at 23:05
 }

I can't seem to find schedule documentation either. I'm sure I've seen it
at one time.

Anyway, here is my interpretation of ScheduleTest.

Run a Full on the first Sunday of each month at 23:05.
Run a Differential on the 2nd, 3rd, 4th and 5th Sundays of each month at 23:05.
Run an Incremental every Mon,Tue,Wed,Thu,Fri,Sat at 23:05.

So, a backup happens every day.
On Sundays, the backup is Differential, unless it is the first Sunday of the
month, in which case it is a Full.
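
That interpretation can be expressed as a small sketch (my own illustration, not Bacula's scheduler; note that Python's `weekday()` returns 6 for Sunday, and "the 1st Sunday" is the Sunday whose day-of-month is 1-7):

```python
from datetime import date

def backup_level(d: date) -> str:
    """Level the ScheduleTest resource would pick for a 23:05 run on day d,
    per the interpretation above."""
    if d.weekday() == 6:  # Sunday
        # the first Sunday of a month always falls on day 1-7
        return "Full" if d.day <= 7 else "Differential"
    return "Incremental"  # mon-sat

print(backup_level(date(2011, 1, 2)))   # first Sunday of the month -> Full
print(backup_level(date(2011, 1, 9)))   # 2nd Sunday -> Differential
print(backup_level(date(2011, 1, 10)))  # Monday -> Incremental
```

This also answers one of the questions below: the rules do not overlap, because each day of the week matches exactly one Run line.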

 
 QUESTION 1: Does '1st' refer to the backup number (the first backup),
 the day of the week, or the day of the month?
 
 I read the first line as:
 The first backup has to be full (so the day it begins doesn't matter).
 Each Sunday, do a full backup.
 
 The second line...:
 The second to fifth backups have to be Differential. What about 'sun'?
 
 The third line:
 Make an incremental backup from Monday to Saturday.
 
 I think that the second and third lines overlap:
 if on Sunday we make a full backup, Monday (the 2nd) should be
 differential (second line) but also incremental (third line).
 QUESTION 2: I guess the first match is used and the others ignored (like
 in iptables rules).
 
 QUESTION 3: What does 'sun' mean? If it's Sunday, why is it used in the
 second line if it's already specified in the first line?
 
 QUESTION 4: Why is it useful to make differential backups instead of
 all incremental backups?
 
 QUESTION 5: I read the bacula user manual but found nothing about schedule
 syntax. If you have any link, please paste it here.
 
 Thank you.
 
 --
 Protect Your Site and Customers from Malware Attacks
 Learn about various malware tactics and how to avoid them. Understand 
 malware threats, the impact they can have on your business, and how you 
 can protect your company and customers by using code signing.
 http://p.sf.net/sfu/oracle-sfdevnl
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users


--
Protect Your Site and Customers from Malware Attacks
Learn about various malware tactics and how to avoid them. Understand 
malware threats, the impact they can have on your business, and how you 
can protect your company and customers by using code signing.
http://p.sf.net/sfu/oracle-sfdevnl
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Help on retention period

2011-01-12 Thread Graham Keeling
On Wed, Jan 12, 2011 at 10:54:13AM -0600, Mark wrote:
 On Wed, Jan 12, 2011 at 9:35 AM, Valerio Pachera siri...@gmail.com wrote:
 
 
 SCOPE: We want the possibility of restoring any file from up to 2 weeks ago.
 
 
 ...
 
 
  on Sunday of the third week, the first full backup gets overwritten.
 
  _ _ _ _ _ _ | _ _ _ _ _ _ |
 
  This means that, of the first week, I can only restore files present in
  the incremental backup.
  In other words I do not have a cycle of 2 weeks but of 1.
 
 
 When your first week's full backup gets overwritten, what are those
 incremental backups incremental to?  What you're describing sounds like
 what I expect fulls and incrementals to be.  When you overwrite the full,
 you've essentially orphaned the incrementals that were created based on that
 full backup.

Bacula doesn't prevent backups that other backups depend on from being
purged.

If there is no previous Full before the Incrementals, you cannot easily restore
the files in the Incrementals. You have to extract the files from the
individual volumes with the 'bextract' command.

It also doesn't have anything that indicates that a particular backup was
based on another particular backup. It calculates everything based on dates.

So, if an Incremental from the middle of a sequence got purged (somehow),
bacula won't notice and will happily restore from the latest Incremental.
It is good that you can restore something, but bad because the files you get
back may well not be the same as the ones that were on your client machine on
the day that the Incremental was made.


Anyway, this all means that you need to set your retention times very
carefully.

If you set them so that they cover the periods that you're worried
about - like Valerio's example of wanting to restore from two weeks back...
F I I I I I I F I I I I I I F
...you might decide to set the retention of Fulls to 3 weeks.
But be careful! If, for some reason, a Full backup fails, time will march on
and you will end up having a Full that other backups depend on getting purged
(imagine the 2nd 'F' in the sequence above being missed out).


I actually wrote a patch that enforces that bacula only purges backups
that other backups don't depend on. But it makes some assumptions about your
setup. It assumes that you are using one job per volume and it assumes that
you are not using Job / File retentions (ie, you have set them very high) and
instead rely purely on Volume retentions.

So, if your retention is set to one week, the purge sequence will be like this
(with new backups being made on the right). 

F I I I I I I F I I I I I I F
F I I I I I   F I I I I I I F I
F I I I I     F I I I I I I F I I
F I I I       F I I I I I I F I I I
F I I         F I I I I I I F I I I I
F I           F I I I I I I F I I I I I
F             F I I I I I I F I I I I I I
              F I I I I I I F I I I I I I F
              F I I I I I   F I I I I I I F I

This also suffers if a Full backup is missed, because you end up having to
keep more old backups.
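
As a sanity check of the sequence above, here is a little simulation I knocked together (illustrative only, not the patch itself): one job per volume, weekly Fulls, and a volume recycled only once it is past retention and nothing unpurged depends on it. Under those rules, the only purgeable volume in an expired chain is always that chain's newest surviving backup.

```python
RETENTION = 7  # days, matching the one-week example above

def level(day: int) -> str:
    """Weekly Fulls, Incrementals in between."""
    return "F" if day % 7 == 0 else "I"

def simulate(seed_days: int, extra_days: int) -> list:
    """Return the surviving-volume sequence after each new daily backup."""
    backups = [(d, level(d)) for d in range(seed_days)]
    rows = []
    for day in range(seed_days, seed_days + extra_days):
        fulls = [i for i, (_, lvl) in enumerate(backups) if lvl == "F"]
        if len(fulls) > 1:
            # The newest surviving backup of the oldest chain is the one
            # volume nothing depends on; recycle it once it has expired.
            victim = fulls[1] - 1
            if day - backups[victim][0] >= RETENTION:
                del backups[victim]
        backups.append((day, level(day)))
        rows.append("".join(lvl for _, lvl in backups))
    return rows

for row in simulate(15, 8):
    print(row)
```

Starting from the two-and-a-bit chains in the diagram, this reproduces the purge order shown: the old chain's Incrementals go newest-first, and its Full goes last.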

I can upload this patch if it is interesting to people.



P.S. One last thing - try not to worry about what bacula does if your clock
somehow goes wrong and decides that it is 2037. Or 1970. Or last week. :)


--
Protect Your Site and Customers from Malware Attacks
Learn about various malware tactics and how to avoid them. Understand 
malware threats, the impact they can have on your business, and how you 
can protect your company and customers by using code signing.
http://p.sf.net/sfu/oracle-sfdevnl
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual Backups - Do we actually need full backups anymore?

2011-01-07 Thread Graham Keeling
On Thu, Jan 06, 2011 at 12:52:22PM -0500, Phil Stracchino wrote:
 On 01/06/11 12:35, Graham Keeling wrote:
  On Thu, Jan 06, 2011 at 05:24:07PM +, Mister IT Guru wrote:
  On 06/01/2011 17:16, Graham Keeling wrote:
  So, I would be very pleased if a VirtualFull also grabbed new files from 
  the
  client.
 
  Thank you for pointing this out! So it doesn't grab new files from the  
  client first? Well, that's not the smartest! Hmm, I wonder - How would  
  you get a job to run run after another job, rather than have bacula  
  decide via priorities?
  
  I don't know, but I think that your idea of combining them both into one job
  is a far better solution.
 
 No, it isn't, because that's not the purpose of a VirtualFull job.
 Consider, for a single example, what would happen if your backup
 strategy includes grabbing a fast incremental of all laptops about an
 hour or two before everyone leaves for the day, then running a
 VirtualFull later that evening after everyone's gone home and taken
 their laptops with them.  All of the VirtualFulls would fail.

Alright, what I should have said is:

For me, a better solution would be to add a new kind of job level. Call it
something different, like a KernFull or something. This doesn't stop you from
attempting to schedule Incrementals and VirtualFulls so that they don't
overlap.
(By the way, the strategy you outline uses up twice the amount of space for
the files included in the Incremental, because they will also be copied into
the VirtualFull)


--
Gaining the trust of online customers is vital for the success of any company
that requires sensitive data to be transmitted over the Web.   Learn how to 
best implement a security strategy that keeps consumers' information secure 
and instills the confidence they need to proceed with transactions.
http://p.sf.net/sfu/oracle-sfdevnl 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual Backups - Do we actually need full backups anymore?

2011-01-07 Thread Graham Keeling
On Thu, Jan 06, 2011 at 12:48:18PM -0500, Phil Stracchino wrote:
 On 01/06/11 12:16, Graham Keeling wrote:
  With my configuration, a VirtualFull sometimes prevents an Incremental from
  running, because the VirtualFull took too long (or vice versa). I have not 
  been
  able to solve this, because every idea that I've come up with either doesn't
  work or makes something else happen that is worse.
 
 Suggestion:
 
 Schedule the day's Incremental, then schedule the VirtualFull, say, 30
 minutes later.
 
 Put a RunBeforeJob script on the incremental that creates a lockfile (in
 a properly race-safe manner, of course) for the client.
 
 Put a RunAfterJob script on the incremental that removes the lockfile.
 
 Put a RunBeforeJob script on the VirtualFull job that checks for
 presence of the client's lockfile, and, if it finds it still present,
 sleeps for five minutes before checking again, and does not return until
 the lockfile has been gone for two consecutive checks (thus making
 certain there is a minimum of five minutes for attribute metadata from
 the job to be flushed).

Thanks for the suggestion.

It seems like a lot of hard work though (3 scripts for each different operating
system that needs to be backed up) when it feels as if this is something
that bacula should be able to deal with globally.

I find Mr IT Guru's idea of chaining a VirtualFull onto the end of a
particular Incremental more appealing. Can you think of a way of doing that?
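
For anyone who does go the script route despite the above, Phil's RunBeforeJob wait could be sketched roughly like this (my illustration; the lockfile path and timings are hypothetical, and the Incremental's own RunBeforeJob/RunAfterJob scripts would create and remove the file):

```python
import os
import sys
import time

def wait_for_lock_gone(lockfile: str, poll_secs: float = 300,
                       clear_checks: int = 2) -> int:
    """Return 0 once `lockfile` has been absent for `clear_checks`
    consecutive checks, with poll_secs between checks, so attribute
    metadata from the Incremental has time to flush."""
    absent = 0
    while absent < clear_checks:
        # reset the count any time the lockfile reappears
        absent = absent + 1 if not os.path.exists(lockfile) else 0
        if absent < clear_checks:
            time.sleep(poll_secs)
    return 0

if __name__ == "__main__":
    # hypothetical path agreed between the Incremental and VirtualFull jobs
    sys.exit(wait_for_lock_gone("/var/run/bacula-incr.lock"))
```

The VirtualFull's RunBeforeJob would simply run this and rely on its exit status.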


--
Gaining the trust of online customers is vital for the success of any company
that requires sensitive data to be transmitted over the Web.   Learn how to 
best implement a security strategy that keeps consumers' information secure 
and instills the confidence they need to proceed with transactions.
http://p.sf.net/sfu/oracle-sfdevnl 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual Backups - Do we actually need full backups anymore?

2011-01-07 Thread Graham Keeling
On Fri, Jan 07, 2011 at 08:43:09AM +, Graham Keeling wrote:
 On Thu, Jan 06, 2011 at 12:48:18PM -0500, Phil Stracchino wrote:
  On 01/06/11 12:16, Graham Keeling wrote:
   With my configuration, a VirtualFull sometimes prevents an Incremental 
   from
   running, because the VirtualFull took too long (or vice versa). I have 
   not been
   able to solve this, because every idea that I've come up with either 
   doesn't
   work or makes something else happen that is worse.
  
  Suggestion:
  
  Schedule the day's Incremental, then schedule the VirtualFull, say, 30
  minutes later.
  
  Put a RunBeforeJob script on the incremental that creates a lockfile (in
  a properly race-safe manner, of course) for the client.
  
  Put a RunAfterJob script on the incremental that removes the lockfile.
  
  Put a RunBeforeJob script on the VirtualFull job that checks for
  presence of the client's lockfile, and, if it finds it still present,
  sleeps for five minutes before checking again, and does not return until
  the lockfile has been gone for two consecutive checks (thus making
  certain there is a minimum of five minutes for attribute metadata from
  the job to be flushed).
 
 Thanks for the suggestion.
 
 It seems like a lot of hard work though (3 scripts for each different 
 operating
 system that needs to be backed up) when it feels as if this is something
 that bacula should be able to deal with globally.
 
 I find Mr IT Guru's idea of chaining a VirtualFull onto the end of a
 particular Incremental more appealing. Can you think of a way of doing that?

(Sorry, I can see that this is sort of what you've described - what I meant is
to do it using standard configuration directives on the director, without
needing to add custom scripts for each different client operating system)


--
Gaining the trust of online customers is vital for the success of any company
that requires sensitive data to be transmitted over the Web.   Learn how to 
best implement a security strategy that keeps consumers' information secure 
and instills the confidence they need to proceed with transactions.
http://p.sf.net/sfu/oracle-sfdevnl 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual Backups - Do we actually need full backups anymore?

2011-01-07 Thread Graham Keeling
On Fri, Jan 07, 2011 at 09:53:27AM +, Mister IT Guru wrote:
 On 06/01/2011 18:47, Phil Stracchino wrote:
  On 01/06/11 12:54, Mister IT Guru wrote:
  Okay, I see the point of Virtual Full Backup - this is to be done
  without talking to the client at all (I did know that! I've been doing
  my homework!) Well, now that I'm looking at the virtual backup in the
  capacity in which it was intended, it seems that a virtual full backup,
  is an amalgamation of the current files stored within bacula. So
  effectively it's a point in time snapshot from when the last
  differential, or incremental finished for that client?
  Yes, that's a very good way to look at it.
 
  I would still prefer to have the latest files from the client packed
  into this job, but I do understand, that even the very best backups
  really are just a point in time snapshot. Well, I'm a little upset to
  come to this realisation with regards to the theory of it - In practical
  terms, will a virtual full cause a new volume to be created?
  No, it should create no new media, as no new data is copied, only new DB
  records.
 
 
 So a VirtualFull is not just the last full plus all the latest files 
 within bacula, it also doesn't actually exist as media?
 
 So I cannot copy a volume? This means that if I want to take a physical 
 copy of the latest full backup, I will literally have to run a full 
 backup across my entire server farm?

He was confused. A VirtualFull will create a new backup, which may involve
creating a new volume.


--
Gaining the trust of online customers is vital for the success of any company
that requires sensitive data to be transmitted over the Web.   Learn how to 
best implement a security strategy that keeps consumers' information secure 
and instills the confidence they need to proceed with transactions.
http://p.sf.net/sfu/oracle-sfdevnl 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual Backups - Do we actually need full backups anymore?

2011-01-07 Thread Graham Keeling
On Fri, Jan 07, 2011 at 10:25:05AM +, Mister IT Guru wrote:
 On 07/01/2011 09:59, Graham Keeling wrote:
  On Fri, Jan 07, 2011 at 09:53:27AM +, Mister IT Guru wrote:
  On 06/01/2011 18:47, Phil Stracchino wrote:
  On 01/06/11 12:54, Mister IT Guru wrote:
  Okay, I see the point of Virtual Full Backup - this is to be done
  without talking to the client at all (I did know that! I've been doing
  my homework!) Well, now that I'm looking at the virtual backup in the
  capacity in which it was intended, it seems that a virtual full backup,
  is an amalgamation of the current files stored within bacula. So
  effectively it's a point in time snapshot from when the last
  differential, or incremental finished for that client?
  Yes, that's a very good way to look at it.
 
  I would still prefer to have the latest files from the client packed
  into this job, but I do understand, that even the very best backups
  really are just a point in time snapshot. Well, I'm a little upset to
  come to this realisation with regards to the theory of it - In practical
  terms, will a virtual full cause a new volume to be created?
  No, it should create no new media, as no new data is copied, only new DB
  records.
 
 
  So a VirtualFull is not just the last full plus all the latest files
  within bacula, it also doesn't actually exist as media?
 
  So I cannot copy a volume? This means that if I want to take a physical
  copy of the latest full backup, I will literally have to run a full
  backup across my entire server farm?
  He was confused. A VirtualFull will create a new backup, which may involve
  creating a new volume.
 
 Thank you for the clarification! That's what I thought beforehand.
 *phew* Otherwise my backup plans would be fubar'd! Why do you say it may
 involve creating a new volume?

Because, depending on what bacula feels like doing, it may append to or recycle
an existing volume.

 I'm thinking of having a separate pool for Virtual Backups, that I will 
 rsync offsite. Will these backups be viable? If my virtual backups are 
 mixed in with my regular backups, for me, that becomes way too much to 
 be syncing offsite constantly.

The VirtualFulls will be viable backups.
Once they have finished, they are almost indistinguishable from normal Fulls.
Even the 'Level' field in the database says 'F', rather than 'V' or something
like that.

 At least if my VirtualFulls are only really Diffs, I know that I'm 
 syncing the minimum amount of data off site so that I can recover in the 
 event of a full system failure. If I'm walking into a false sense of 
 security, please, someone stop me!

Unless somebody knows better, I don't think rsync can help you, because all
the files will be written fresh to a new location (a new volume, appended to
an existing volume, or overwriting an existing volume), and rsync will then
have to send the whole lot.


--
Gaining the trust of online customers is vital for the success of any company
that requires sensitive data to be transmitted over the Web.   Learn how to 
best implement a security strategy that keeps consumers' information secure 
and instills the confidence they need to proceed with transactions.
http://p.sf.net/sfu/oracle-sfdevnl 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual Backups

2011-01-07 Thread Graham Keeling
On Fri, Jan 07, 2011 at 04:21:57PM +0100, Gandalf Corvotempesta wrote:
 Hi all,
 I'm trying to configure bacula 5.0.2 to do virtual backups.
 I'm able to do the first virtual backup successfully, but after
 that, I'm unable to do a new one.
 
 This is the error message that I'm receiving for the second virtualfull:
 
 07-Jan 15:07 b01-dir JobId 1335: Using Device
 myclient-FileStorageVirtualFull
 07-Jan 15:07 b01-sd JobId 1335: acquire.c:117 Changing read device. Want
 Media Type=myclient-FileVirtualFull have=myclient-File
   device=myclient-FileStorage (/var/bacula/backups/clients/myclient)
 
 
 I'm using a different Pool for every kind of backups:
 
 Job {
...
Pool = myclient-FullPool
Full Backup Pool = myclient-FullPool
Incremental Backup Pool = myclient-IncrementalPool
Differential Backup Pool = myclient-DifferentialPool
Storage = myclient-Storage
...
 }
 
 Pool {
...
Name = myclient-VirtualFullPool
Label Format = ${Year}${Month:p/2/0/r}${Day:p/2/0/r}_
 ${Hour:p/2/0/r}${Minute:p/2/0/r}-myclient-virtualfull
Pool Type = Backup
Maximum Volume Jobs = 1
Storage = myclient-StorageVirtualFull
NextPool = myclient-FullPool
...
 }
 
 Pool {
...
Name = myclient-FullPool
Label Format = ${Year}${Month:p/2/0/r}${Day:p/2/0/r}_
 ${Hour:p/2/0/r}${Minute:p/2/0/r}-myclient-full
Pool Type = Backup
Maximum Volume Jobs = 1
...
 }
 
 Obviously I also have a pool for Incremental and Differential.
 
 
 
 This is the storage daemon configuration:
 
 Device {
Name = myclient-FileStorage
Device Type = File;
Media Type = myclient-File
Archive Device = /var/bacula/backups/clients/myclient
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
Requires Mount=no;
RemovableMedia = no;
AlwaysOpen = no;
 }
 
 Device {
Name = myclient-FileStorageVirtualFull
Device Type = File;
Media Type = myclient-FileVirtualFull
Archive Device = /var/bacula/backups/clients/myclient
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
Requires Mount=no;
RemovableMedia = no;
AlwaysOpen = no;
 }
 
 
 What I understood from the documentation is that Bacula will read
 Full, Differential and Incremental volumes from the Pool specified
 in the Pool directive of the Job resource, and then it will write
 the VirtualFull backup to the NextPool of the original Pool.
 
 This works the first time, but the second time it tries
 to read the VirtualFull from the original Pool regardless of the
 real pool and storage specified with NextPool.
 
 Any hints?

Hello,
In my configuration, I have (paraphrasing it for simplicity):

ScratchPoolA
Type=Scratch

FullPoolA
Type=Backup
RecyclePool=ScratchPoolA
ScratchPool=ScratchPoolA

IncrPoolA
Type=Backup
RecyclePool=ScratchPoolA
ScratchPool=ScratchPoolA
NextPool = FullPoolA

Job
Full Backup Pool = FullPoolA
Incremental Backup Pool = IncrPoolA

All my Virtuals then end up in FullPoolA.
There are no separate pools for Virtuals.


--
Gaining the trust of online customers is vital for the success of any company
that requires sensitive data to be transmitted over the Web.   Learn how to 
best implement a security strategy that keeps consumers' information secure 
and instills the confidence they need to proceed with transactions.
http://p.sf.net/sfu/oracle-sfdevnl 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual Backups

2011-01-07 Thread Graham Keeling
On Fri, Jan 07, 2011 at 04:05:06PM +, Graham Keeling wrote:
 On Fri, Jan 07, 2011 at 04:21:57PM +0100, Gandalf Corvotempesta wrote:
  Hi all,
  I'm trying to configure bacula 5.0.2 to do virtual backups.
  I'm able to do the first virtual backup successfully, but after
  that, I'm unable to do a new one.
  
  This is the error message that I'm receiving for the second virtualfull:
  
  07-Jan 15:07 b01-dir JobId 1335: Using Device
  myclient-FileStorageVirtualFull
  07-Jan 15:07 b01-sd JobId 1335: acquire.c:117 Changing read device. Want
  Media Type=myclient-FileVirtualFull have=myclient-File
device=myclient-FileStorage (/var/bacula/backups/clients/myclient)
  
  
  I'm using a different Pool for every kind of backups:
  
  Job {
 ...
 Pool = myclient-FullPool
 Full Backup Pool = myclient-FullPool
 Incremental Backup Pool = myclient-IncrementalPool
 Differential Backup Pool = myclient-DifferentialPool
 Storage = myclient-Storage
 ...
  }
  
  Pool {
 ...
 Name = myclient-VirtualFullPool
 Label Format = ${Year}${Month:p/2/0/r}${Day:p/2/0/r}_
  ${Hour:p/2/0/r}${Minute:p/2/0/r}-myclient-virtualfull
 Pool Type = Backup
 Maximum Volume Jobs = 1
 Storage = myclient-StorageVirtualFull
 NextPool = myclient-FullPool
 ...
  }
  
  Pool {
 ...
 Name = myclient-FullPool
 Label Format = ${Year}${Month:p/2/0/r}${Day:p/2/0/r}_
  ${Hour:p/2/0/r}${Minute:p/2/0/r}-myclient-full
 Pool Type = Backup
 Maximum Volume Jobs = 1
 ...
  }
  
  Obviously I also have a pool for Incremental and Differential.
  
  
  
  This is the storage daemon configuration:
  
  Device {
 Name = myclient-FileStorage
 Device Type = File;
 Media Type = myclient-File
 Archive Device = /var/bacula/backups/clients/myclient
 LabelMedia = yes;
 Random Access = yes;
 AutomaticMount = yes;
 Requires Mount=no;
 RemovableMedia = no;
 AlwaysOpen = no;
  }
  
  Device {
 Name = myclient-FileStorageVirtualFull
 Device Type = File;
 Media Type = myclient-FileVirtualFull
 Archive Device = /var/bacula/backups/clients/myclient
 LabelMedia = yes;
 Random Access = yes;
 AutomaticMount = yes;
 Requires Mount=no;
 RemovableMedia = no;
 AlwaysOpen = no;
  }
  
  
  What I understood from the documentation is that Bacula will read
  Full, Differential and Incremental volumes from the Pool specified
  in the Pool directive of the Job resource, and then it will write
  the VirtualFull backup to the NextPool of the original Pool.
  
  This works the first time but, on the second run, it tries
  to read the VirtualFull from the original Pool regardless of the
  real pool and storage specified with NextPool.
  
  Any hints?
 
 Hello,
 In my configuration, I have (paraphrasing it for simplicity):
 
 ScratchPoolA
   Type=Scratch
 
 FullPoolA
   Type=Backup
   RecyclePool=ScratchPoolA
   ScratchPool=ScratchPoolA
   
 IncrPoolA
   Type=Backup
   RecyclePool=ScratchPoolA
   ScratchPool=ScratchPoolA
   NextPool = FullPoolA
 
 Job
   Full Backup Pool = FullPoolA
   Incremental Backup Pool = IncrPoolA
 
 All my Virtuals then end up in FullPoolA.
 There are no separate pools for Virtuals.

If this doesn't help, I'll get around to digging out an actual bacula-dir.conf
that I know to work.




Re: [Bacula-users] Virtual Backups - Do we actually need full backups anymore?

2011-01-06 Thread Graham Keeling
On Thu, Jan 06, 2011 at 05:02:47PM +, Mister IT Guru wrote:
 I've been trying to get my head around virtual full backups.
 
 Now, from my understanding (I'm 80% through my work day, shut down 20
 tickets, and had to deal with too many user incidents for my liking, so
 please bear with me if I say something stupid!), virtual fulls can be
 run on the same pool as a real 'recent' full has been run on, and it
 will create a new full based on all the latest files still in the pool.
 It then takes these files, plus only the latest changed files from
 the client, to create a new usable full backup, which should take
 about the same time as an incremental or a differential.
 
 If this is the case, then I can slash my backup times, from 5 hours per 
 host, to around 20 minutes, which is something I think would be pretty 
 frikkin' awesome! Feel free to comment, and suggest :)

No, it doesn't take the latest files from the client.

It would solve a couple of problems that I have if that is what it did though.

A VirtualFull combines previous backups into a single backup that is
equivalent to a Full.

So, if you have a schedule like this:

Monday:Incremental
Tuesday:   Incremental
Wednesday: Incremental
Thursday:  Incremental
Friday:Incremental
Saturday:  Incremental
Sunday:Incremental

You can't, say, just do this:

Monday:Incremental
Tuesday:   Incremental
Wednesday: Incremental
Thursday:  VirtualFull
Friday:Incremental
Saturday:  Incremental
Sunday:Incremental

You actually have to do this, otherwise you don't get a backup for that day:

Monday:Incremental
Tuesday:   Incremental
Wednesday: Incremental
Thursday:  VirtualFull plus separate Incremental
Friday:Incremental
Saturday:  Incremental
Sunday:Incremental

And that means that you get into problems with the VirtualFull and Incremental
overlapping and getting in each other's way.

With my configuration, a VirtualFull sometimes prevents an Incremental from
running, because the VirtualFull took too long (or vice versa). I have not been
able to solve this, because every idea that I've come up with either doesn't
work or makes something else happen that is worse.

So, I would be very pleased if a VirtualFull also grabbed new files from the
client.
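The workaround above can be written as a single Bacula Schedule resource; this is a sketch (schedule name and times are illustrative), with a daily Incremental plus the Thursday VirtualFull:

```
Schedule {
  Name = "WeeklyCycleSketch"
  # The VirtualFull only consolidates prior backups; it does not contact
  # the client, so Thursday still needs its own Incremental to capture
  # that day's changes.
  Run = Level=VirtualFull thu at 21:05
  Run = Level=Incremental sun-sat at 23:05
}
```

As the text notes, the two Thursday jobs can still overlap and interfere with each other; spacing the start times apart only reduces the chance of that, it does not remove it.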




Re: [Bacula-users] Virtual Backups - Do we actually need full backups anymore?

2011-01-06 Thread Graham Keeling
On Thu, Jan 06, 2011 at 05:24:07PM +, Mister IT Guru wrote:
 On 06/01/2011 17:16, Graham Keeling wrote:
 On Thu, Jan 06, 2011 at 05:02:47PM +, Mister IT Guru wrote:
 I've been trying to get my head around virtual full backups.

 Now, from my understanding (I'm 80% through my work day, shut down 20
 tickets, and had to deal with too many user incidents for my liking, so
 please bear with me if I say something stupid!), virtual fulls can be
 run on the same pool as a real 'recent' full has been run on, and it
 will create a new full based on all the latest files still in the pool.
 It then takes these files, plus only the latest changed files from
 the client, to create a new usable full backup, which should take
 about the same time as an incremental or a differential.

 If this is the case, then I can slash my backup times, from 5 hours per
 host, to around 20 minutes, which is something I think would be pretty
 frikkin' awesome! Feel free to comment, and suggest :)
 No, it doesn't take the latest files from the client.

 It would solve a couple of problems that I have if that is what it did 
 though.

 A VirtualFull combines previous backups into a single backup that is
 equivalent to a Full.

 So, if you have a schedule like this:

 Monday:Incremental
 Tuesday:   Incremental
 Wednesday: Incremental
 Thursday:  Incremental
 Friday:Incremental
 Saturday:  Incremental
 Sunday:Incremental

 You can't, say, just do this:

 Monday:Incremental
 Tuesday:   Incremental
 Wednesday: Incremental
 Thursday:  VirtualFull
 Friday:Incremental
 Saturday:  Incremental
 Sunday:Incremental

 You actually have to do this, otherwise you don't get a backup for that day:

 Monday:Incremental
 Tuesday:   Incremental
 Wednesday: Incremental
 Thursday:  VirtualFull plus separate Incremental
 Friday:Incremental
 Saturday:  Incremental
 Sunday:Incremental

 And that means that you get into problems with the VirtualFull and 
 Incremental
 overlapping and getting in each other's way.

 With my configuration, a VirtualFull sometimes prevents an Incremental from
 running, because the VirtualFull took too long (or vice versa). I have not 
 been
 able to solve this, because every idea that I've come up with either doesn't
 work or makes something else happen that is worse.

 So, I would be very pleased if a VirtualFull also grabbed new files from the
 client.

 Thank you for pointing this out! So it doesn't grab new files from the  
 client first? Well, that's not the smartest! Hmm, I wonder - How would  
 you get a job to run after another job, rather than have bacula
 decide via priorities?


I don't know, but I think that your idea of combining them both into one job
is a far better solution.




Re: [Bacula-users] Virtual Backups - Do we actually need full backups anymore?

2011-01-06 Thread Graham Keeling
On Thu, Jan 06, 2011 at 11:36:10AM -0600, Sean Clark wrote:
 On 01/06/2011 11:24 AM, Mister IT Guru wrote:
  On 06/01/2011 17:16, Graham Keeling wrote:
  On Thu, Jan 06, 2011 at 05:02:47PM +, Mister IT Guru wrote:
  I've been trying to get my head around virtual full backups.
 
  [...]
  So, I would be very pleased if a VirtualFull also grabbed new files from 
  the
  client.
  Thank you for pointing this out! So it doesn't grab new files from the 
  client first? Well, that's not the smartest! Hmm, I wonder - How would 
  you get a job to run after another job, rather than have bacula
  decide via priorities?
 To be fair - if it's grabbing actual files directly from the client,
 it's no longer a virtual backup.  I got the impression
 that the point was to generate a full backup without having to talk to
 the client at all.
 
 I think if you give the virtual full a lower priority than the
 incremental, you can schedule both for the same day and have it always
 do the incremental then the virtual full in the correct order (haven't
 actually TRIED to do this myself, so I'm guessing).

You cannot do that with priorities.
You can only set a priority on a Job. And the VirtualFulls and Incrementals
all have to come under the same Job.
I've TRIED! :)

And ordering isn't the problem. I don't mind what order they happen in.
The problem is that one overlaps with the other and causes the other's
MaxWaitTime to be exceeded.




Re: [Bacula-users] How would I 'nuke' my bacula instance - Start afresh so to speak.

2011-01-05 Thread Graham Keeling
On Wed, Jan 05, 2011 at 09:38:14AM +, Mister IT Guru wrote:
 I've run lots of test jobs, and I have a lot of backup data, that I 
 don't really need, around 2TB or so! (we have a few servers!) I would 
 like to know if it's possible to remove all of those jobs out of the 
 bacula database. Personally, I would have cut this configure out, and 
 drop it on a previous backup I have, but then I don't learn about how 
 bacula works.
 
 My main fear, is that I rsync my disk backend offsite, and I've 
 currently suspended that because of all these test jobs that I'm 
 running. Also, I've reset the bacula-dir and sd, during backups, and 
 I've a feeling that some of them are not viable.
 
 I guess what I'm asking is, is it possible to wipe the slate clean, but 
 keep my working configuration from within bacula?

It sounds like you just want to wipe your sql database and keep your bacula
configuration files.

When I want to do this, I stop bacula and stop mysql (I use mysql):

/etc/rc.d/init.d/bacula stop
/etc/rc.d/init.d/mysql stop

I then 'rm -r' the bacula mysql database files - something like this:

cd /var/lib/mysql/data
rm -r bacula

I then start mysql and start bacula:

/etc/rc.d/init.d/mysql start
/etc/rc.d/init.d/bacula start




Re: [Bacula-users] How would I 'nuke' my bacula instance - Start afresh so to speak.

2011-01-05 Thread Graham Keeling
On Wed, Jan 05, 2011 at 09:55:17AM +, Mister IT Guru wrote:
 On 05/01/2011 09:51, Graham Keeling wrote:
  On Wed, Jan 05, 2011 at 09:38:14AM +, Mister IT Guru wrote:
  I've run lots of test jobs, and I have a lot of backup data, that I
  don't really need, around 2TB or so! (we have a few servers!) I would
  like to know if it's possible to remove all of those jobs out of the
  bacula database. Personally, I would have cut this configure out, and
  drop it on a previous backup I have, but then I don't learn about how
  bacula works.
 
  My main fear, is that I rsync my disk backend offsite, and I've
  currently suspended that because of all these test jobs that I'm
  running. Also, I've reset the bacula-dir and sd, during backups, and
  I've a feeling that some of them are not viable.
 
  I guess what I'm asking is, is it possible to wipe the slate clean, but
  keep my working configuration from within bacula?
  It sounds like you just want to wipe your sql database and keep your bacula
  configuration files.
 
  When I want to do this, I stop bacula and stop mysql (I use mysql):
 
  /etc/rc.d/init.d/bacula stop
  /etc/rc.d/init.d/mysql stop
 
  I then 'rm -r' the bacula mysql database files - something like this:
 
  cd /var/lib/mysql/data
  rm -r bacula
 
  I then start mysql and start bacula:
 
  /etc/rc.d/init.d/mysql start
  /etc/rc.d/init.d/bacula start
 
 This would mean I would have to manually delete my disk-based backups?
 Otherwise, if I don't, wouldn't the first attempt to run a backup
 fail because that disk volume exists? DiskBackupPool-001 already
 exists as a file, so will bacula just overwrite it?

Sorry, I don't know what bacula will do if the actual volume already exists.

But, yes, if I wanted to start fresh but keep the same configuration, I would
wipe the database as already described, and delete the actual volumes manually.




Re: [Bacula-users] err=text file busy

2010-12-24 Thread Graham Keeling
On Thu, Dec 23, 2010 at 10:34:33AM -0800, CountBacula wrote:
 Does anyone know how to back up open files?
 
 Brightstore had an open file manager which would do the job.

On what operating system?

If you are trying to back up some version of Windows with bacula, you probably
need to check that you have told bacula to enable VSS.
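For reference, VSS is switched on in the FileSet resource on the director; a minimal sketch (FileSet name and paths are illustrative):

```
FileSet {
  Name = "WindowsVSS"
  Enable VSS = yes     # lets the Windows file daemon snapshot open files
  Include {
    Options {
      signature = MD5
    }
    File = "C:/"
  }
}
```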




Re: [Bacula-users] 5.0.3 - scratch pools

2010-12-13 Thread Graham Keeling
On Mon, Dec 13, 2010 at 04:49:22PM +, Alan Brown wrote:
 
 I'm regularly seeing bacula grab multiple volumes from the scratch pool 
 and add them to another pool, instead of only taking one volume.
 
 There don't seem to be any error messages associated with the event.
 
 This is starving other pools which use the same scratch-pool of resources.
 
 Has anyone else seen this behaviour? Is there a fix?

I have seen this behaviour (and the sort-of-similar behaviour where bacula
randomly creates twenty new volumes in the database instead of creating one).

Sorry, I don't know about a fix.




Re: [Bacula-users] Understanding purge

2010-11-17 Thread Graham Keeling
On Wed, Nov 17, 2010 at 10:48:36AM +, Dermot Beirne wrote:
 Hi Phil,
 
 Here is the pool definitions I'm using.
 
 Is there some way I can get the entire disk pool volumes purged when
 they expire, so they are all truncated and all that space is released.

I don't think that you're going to have much luck with this.
If you do, I would be interested in how you did it.

When the ActionOnPurge feature originally came along, I found a dangerous
bug in it. The bacula people said that they would try to fix it in the
next version.
But as far as I understand it, the fix is that you shouldn't try to run it
automatically.

http://sourceforge.net/apps/wordpress/bacula/2010/02/01/new-actiononpurge-feature/
http://sourceforge.net/apps/wordpress/bacula/2010/01/28/action-on-purge-feature-broken-in-5-0-0/
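For reference, the two halves of the feature as documented for 5.0.x are a Pool directive plus a manual bconsole command, roughly as follows (pool and storage names are illustrative; as noted above, running the truncation automatically was the part reported as unsafe):

```
# bacula-dir.conf
Pool {
  Name = DiskPool
  Pool Type = Backup
  Action On Purge = Truncate   # marks volumes as truncatable once purged
}

# Then, manually in bconsole, once volumes have actually been purged:
#   purge volume action=truncate storage=File allpools
```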




Re: [Bacula-users] Understanding purge

2010-11-17 Thread Graham Keeling
On Wed, Nov 17, 2010 at 11:32:44AM +, Dermot Beirne wrote:
 Hi Graham,
 I think this is a key feature, and am surprised it's not easily
 possible.  The user should have the choice.  I saw the blog entries
 you refer to, and that bug appears to have been fixed, but I don't see
 what use it is in the current system.  If it's not possible to get
 bacula to purge a volume until it has absolutely no option, (which it
 then truncates and relabels anyway) then under what circumstances is
 the actiononpurge=truncate feature useful?
 
 Hence I am wondering if I am misunderstanding how Bacula works in
 regard to purging volumes.
 
 There must have been a good reason to implement this new feature, and
 I think it's what I need, but I can't see how to use it properly.
 
 Dermot.

I agree, and I am sorry because I can't offer you any more help.
I was just stating how things appear to stand at the moment.

 On 17 November 2010 11:03, Graham Keeling gra...@equiinet.com wrote:
  On Wed, Nov 17, 2010 at 10:48:36AM +, Dermot Beirne wrote:
  Hi Phil,
 
  Here is the pool definitions I'm using.
 
  Is there some way I can get the entire disk pool volumes purged when
  they expire, so they are all truncated and all that space is released.
 
  I don't think that you're going to have much luck with this.
  If you do, I would be interested in how you did it.
 
  When the ActionOnPurge feature originally came along, I found a dangerous
  bug in it. The bacula people said that they would try to fix it in the
  next version.
  But as far as I understand it, the fix is that you shouldn't try to run it
  automatically.
 
  http://sourceforge.net/apps/wordpress/bacula/2010/02/01/new-actiononpurge-feature/
  http://sourceforge.net/apps/wordpress/bacula/2010/01/28/action-on-purge-feature-broken-in-5-0-0/
 
 
 




Re: [Bacula-users] unwanted fullbackup

2010-11-10 Thread Graham Keeling
On Wed, Nov 10, 2010 at 10:32:16AM +0100, Alex Huth wrote:
 Hello!
 
 We have a timetable for the backups, where we do full backups on the 1st
 Sunday, differentials on the other Sundays, and incrementals on every other
 day.
 
 On 07.11 (the 1st Sunday) bacula did a full backup as expected, and the next
 Monday an incremental as expected. On the following Tuesday bacula made
 a full backup instead of the incremental, because it didn't find a prior
 full backup in the catalog.
 
 The full backup on the 7th was the second full. The first one was the
 initial full backup after setting up and starting bacula.
 
 Where do i have to search for the problem?

Bacula relies heavily on your clock being correct. 
If your clock went backwards since it did the first full, it won't be able to
find the first full.

Also, if your retention for your first full has expired, it will purge it
even though another backup depends on it existing.
So perhaps your retention for your first full was too low.
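As a sketch, the directives involved look like this (values are illustrative; the key point is that the retention on the pool holding the Full must outlive every incremental chain built on it):

```
Pool {
  Name = FullPool
  Pool Type = Backup
  Volume Retention = 2 months  # must cover the whole cycle that depends on this full
  AutoPrune = yes              # prune expired volumes automatically
  Recycle = yes
}
```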


 Thx in advance
 
 Alex
 
 




Re: [Bacula-users] unwanted fullbackup

2010-11-10 Thread Graham Keeling
On Wed, Nov 10, 2010 at 08:12:51PM +1100, James Harper wrote:
   On 07.11 (the 1st Sunday) bacula did a full backup as expected, and the next
   Monday an incremental as expected. On the following Tuesday bacula made
   a full backup instead of the incremental, because it didn't find a prior
  full backup in the catalog.
  
   The full backup on the 7th was the second full. The first one was the
   initial full backup after setting up and starting bacula.
  
   Where do i have to search for the problem?
  
  Bacula relies heavily on your clock being correct.
  If your clock went backwards since it did the first full, it won't be able 
  to
  find the first full.
  
  Also, if your retention for your first full has expired, it will purge it
  even though another backup depends on it existing.
  So perhaps your retention for your first full was too low.
  
 
 Hmmm... it never occurred to me before but mucking around with the clock on 
 your director could seriously mess with your day! Advancing the clock by one 
  year by mistake would probably purge most people's catalogues.


I've thought about it before, and posted my thoughts on the devel list, on
a thread called 'bconsole restore bug - option 12', starting here...
http://marc.info/?l=bacula-develm=125302334920002w=2
...and continue to worry about it.

I still think that getting rid of bacula's dependence on clocks, and having
a 'this job depends on jobid x' field would fix a host of problems.

I can see that maintaining continuity with, or migrating from, previous bacula
versions would be difficult. But I think the move would be best in the long
term.




Re: [Bacula-users] Exchange plugin: Unable to restore

2010-11-05 Thread Graham Keeling
On Thu, Nov 04, 2010 at 08:52:18PM +0100, Michael Heydenbluth wrote:
 Graham Keeling schrieb:
 
  On Thu, Nov 04, 2010 at 03:34:35PM +0100, Michael Heydenbluth wrote:
   Hello,
   
   I'm trying to backup/restore an Exchange database.
   
   Following configuration:
   W2K3 R2 Server (32) with Exchange 2003 SP2, bacula-fd 5.0.3 from the
   download area,
   bacula-sd and bacula-dir (5.0.3 - mysql) compiled from source to
   run on SLES 9.
   
  [short description of commands I entered]
 
  Can you copy and paste the exact commands that you enter, and the
  output of them?
 
 Here we go. Sorry it's bit long, but you asked for it :-):
 
 (just to be sure, there's nothing executing right now):
 *status director
 
 sqlbsrv1-dir Version: 5.0.3 (04 August 2010) i686-pc-linux-gnu suse 9
 Daemon started 03-Nov-10 15:08, 18 Jobs run since started.
  Heap: heap=946,176 smbytes=167,123 max_bytes=2,761,470 bufs=564
 max_bufs=5,473
 
 Scheduled Jobs:
 Level  Type Pri  Scheduled  Name Volume
 ===
 [10 jobs or so scheduled to run at 11:00pm]
 
 Running Jobs:
 Console connected at 04-Nov-10 20:16
 No Jobs running.
 
 
 Terminated Jobs:
  JobId  LevelFiles  Bytes   Status   FinishedName
 
 12050 0   Cancel   04-Nov-10 14:10 RestoreFiles
 12060 0   Cancel   04-Nov-10 14:17 RestoreFiles
 12070 0   Cancel   04-Nov-10 14:52 RestoreFiles 
 12080 0   Cancel   04-Nov-10 15:50 RestoreFiles
 
 
 
 *status storage=SuperLoader3
 Connecting to Storage daemon SuperLoader3 at sqlbsrv1:9103
 
 sqlbsrv1-sd Version: 5.0.3 (04 August 2010) i686-pc-linux-gnu suse 9
 Daemon started 03-Nov-10 15:08. Jobs: run=15, running=0.
  Heap: heap=421,888 smbytes=161,927 max_bytes=342,749 bufs=114
 max_bufs=202
 Sizes: boffset_t=8 size_t=4 int32_t=4 int64_t=8
 
 Running Jobs:
 No Jobs running.
 
 
 Jobs waiting to reserve a drive:
 
 
 Terminated Jobs:
  JobId  LevelFiles  Bytes   Status   FinishedName
 ===
 [...]
 12050 0   Cancel   04-Nov-10 14:10 RestoreFiles
 12060 0   Cancel   04-Nov-10 14:17 RestoreFiles
 12070 0   Cancel   04-Nov-10 14:52 RestoreFiles
 12080 0   Cancel   04-Nov-10 15:50 RestoreFiles
 
 
 Device status:
 Autochanger Autochanger with devices:
LTO-4 (/dev/nst1)
 Device FileStorage (/bacula) is not open.
 Device LTO-4 (/dev/nst1) is mounted with:
 Volume:  KYE718L4
 Pool:WinServer
 Media type:  LTO-4
 Slot 6 is loaded in drive 0.
 Total Bytes Read=151,151,616 Blocks Read=2,343 Bytes/block=64,512
 Positioned at File=212 Block=2,343
 
 Used Volume status:
 KYE718L4 on device LTO-4 (/dev/nst1)
 Reader=0 writers=0 devres=0 volinuse=0
 
 
 Data spooling: 0 active jobs, 0 bytes; 1 total jobs, 2,304,272,492 max
 bytes/job. Attr spooling: 0 active jobs, 2,436,193 bytes; 1 total jobs,
 2,436,193 max bytes.
 
 
 *restore
 Automatically selected Catalog: MyCatalog
 Using Catalog MyCatalog
 
 First you select one or more JobIds that contain files
 to be restored. You will be presented several methods
 of specifying the JobIds. Then you will be allowed to
 select which files from those JobIds are to be restored.
 
 To select the JobIds, you have the following choices:
  1: List last 20 Jobs run
  2: List Jobs where a given File is saved
  3: Enter list of comma separated JobIds to select
 [...]
 13: Cancel
 Select item:  (1-13): 3
 Enter JobId(s), comma separated, to restore: 1200
 
 You have selected the following JobId: 1200
 
 Building directory tree for JobId(s) 1200 ...
  24,089 files inserted into the tree.
 
 You are now entering file selection mode where you add (mark) and
 remove (unmark) files to be restored. No files are initially added,
 unless you used the all keyword on the command line.
 Enter done to leave this mode.
 
 cwd is: /
 $ cd /@EXCHANGE/Microsoft Information Store/Erste Speichergruppe/
 cwd is: /@EXCHANGE/Microsoft Information Store/Erste Speichergruppe/
 $ ls
 C:\Programme\Exchsrvr\mdbdata\E0002449.log
 C:\Programme\Exchsrvr\mdbdata\E000244A.log
 C:\Programme\Exchsrvr\mdbdata\E000244B.log
 C:\Programme\Exchsrvr\mdbdata\E000244C.log
 C:\Programme\Exchsrvr\mdbdata\E000244D.log
 C:\Programme\Exchsrvr\mdbdata\E000244E.log
 C:\Programme\Exchsrvr\mdbdata\E000244F.log
 C:\Programme\Exchsrvr\mdbdata\E0002450.log
 C:\Programme\Exchsrvr\mdbdata\E0002451.log
 C:\Programme\Exchsrvr\mdbdata\E0002452.log
 C:\Programme\Exchsrvr\mdbdata\E0002453.log
 C:\Programme\Exchsrvr\mdbdata\E0002454.log
 C:\Programme\Exchsrvr\mdbdata\E0002455.log
 C:\Programme

Re: [Bacula-users] Exchange plugin: Unable to restore

2010-11-04 Thread Graham Keeling
On Thu, Nov 04, 2010 at 03:34:35PM +0100, Michael Heydenbluth wrote:
 Hello,
 
 I'm trying to backup/restore an Exchange database.
 
 Following configuration:
 W2K3 R2 Server (32) with Exchange 2003 SP2, bacula-fd 5.0.3 from the
 download area,
 bacula-sd and bacula-dir (5.0.3 - mysql) compiled from source to run on
 SLES 9.
 
 Backing up the Information Store works and I can browse the files in
 bconsole by typing:
 
 *restore, then 3 (specify by jobno), then number of backup-job
 
 I select all files from the Postfachspeicher (servername)-directory
 and run the restore job. I always get the error message
 Fatal error: Invalid restore path specified, must start with
 '/@EXCHANGE/', then the job seems to just sit there and does nothing,
 even not ending.

Can you copy and paste the exact commands that you enter, and the output of
them?
By the way, I have found out that it is very easy to crash the file daemon
when running the Exchange plugin, and it is not always obvious.
Whenever anything goes wrong, it is a good idea to check that it is still
running. Restarting the director helps too.

Also, I have found that you can only restore reliably from a full, or from a
full and a single incremental. If you have a chain of incrementals, it doesn't
work, which rather defeats the point of the plugin.
There has been an unacknowledged bug entry for this for a month now
(bugid 1647).

 *status director
 
 Running Jobs:
 Console connected at 04-Nov-10 13:49
  JobId Level   Name   Status
 ==
   1208 RestoreFiles.2010-11-04_14.52.32_06 is waiting on
 Storage SuperLoader3
 
 *status SuperLoader3
 
 Device status:
 Autochanger Autochanger with devices:
LTO-4 (/dev/nst1)
 Device LTO-4 (/dev/nst1) is mounted with:
 Volume:  KYE718L4
 Pool:WinServer
 Media type:  LTO-4
 Slot 6 is loaded in drive 0.
 Total Bytes Read=151,151,616 Blocks Read=2,343 Bytes/block=64,512
 Positioned at File=212 Block=2,343
 
 These values don't change over the time.
 
 Am I missing something?
 
 Greetings
 Michael
 
 just in case, it might be useful: Here's the relevant part of what I
 get when typing show fileset:
 
 O Mie
 WD [A-Z]:/Dokumente und Einstellungen/*/Lokale
 Einstellungen/History 
 WD [A-Z]:/Dokumente und Einstellungen/*/Lokale
 Einstellungen/Temp
 WD [A-Z]:/Dokumente und Einstellungen/*/Lokale
 Einstellungen/Temporary Internet Files
 WD [A-Z]:/Dokumente und
 Einstellungen/*/Lokale Einstellungen/Cookies
 WD [A-Z]:/dokumente und
 einstellungen/*/lokale einstellungen/verlauf
 WD [A-Z]:/Winnt/system32/config
 WD [A-Z]:/Windows/system32/config
 WF [A-Z]:/pagefile.sys
 WF [A-Z]:/hibernate.sys
 N
 I c:/
 I d:/
 N
 P exchange:/@EXCHANGE/Microsoft Information Store
 N
 E d:/MDBDATA
 E d:/EASY_DATA
 E d:/MP3
 N
 
 


--
The Next 800 Companies to Lead America's Growth: New Video Whitepaper
David G. Thomson, author of the best-selling book Blueprint to a 
Billion shares his insights and actions to help propel your 
business during the next growth cycle. Listen Now!
http://p.sf.net/sfu/SAP-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] What happen if I delete a single incremental between the full and another incremental

2010-10-18 Thread Graham Keeling
On Sat, Oct 16, 2010 at 09:33:13AM +0200, Hugo Letemplier wrote:
 Hi, thanks a lot for your answers.
 
 I have retried with a new test scenario; it's clear now that deleting an 
 incremental is really dangerous.
 But I think that a function that enables the administrator to join two jobs 
 would be cool.
 Imagine that one day lots of data manipulation is done on the machine that I 
 want to back up, so there is a great difference between two incrementals. The 
 jobs are done, and deleting one job is dangerous for the jobs that follow.
 In this case, it would be great to merge two jobs.
 It's quite complicated to explain, I know.
 Take a look at this little scenario, a classical Full with its incremental 
 jobs; the client is typically a big file server:
 
 1 - The full
 2 - an incremental
 3 - someone makes a mistake while exploring the file server: he makes 
 lots of copies of files on the server (for example: a bad drag and drop).
 4 - a nightly scheduled incremental 
 5 - the administrator sees that the last incremental got a lot of new files 
 and that job bytes got a huge value.
 5 - the user sees his error and deletes the duplicates
 6 - a new incremental is run
 7 - after checking everything, I want to reduce the size of my backups by 
 merging the last two incrementals. The idea is to add the new files of step 4 
 to step 6, but without the files deleted at step 6
 
 In a mathematical view, it can be seen like this: Inc6.1 = Inc4 - (files of 
 Inc4 deleted at Inc6.0) + (new files of Inc6.0) + (files modified after 
 Inc4, overwritten with their version from Inc6.0)
 
 I hope this can be understood more easily than the previous post!

Perhaps a VirtualFull backup is what you are looking for?

 Thanks a lot 
 
 Hugo
 
 Le 13 oct. 2010 à 17:53, Jari Fredriksson a écrit :
 
  On 13.10.2010 18:21, Hugo Letemplier wrote:
  Hi,
  I have an important question that will help me validating some specs
  about bacula 5.0.2
  Imagine the following scenario:
  1 - a full
  2 - an incremental
  3 - an incremental
  4 - another incremental
  
  if I delete the incremental of step 3, does it move the files that
  have been added during step 3 onto the incremental of step 4
  
  I have tried this scenario but my result is not clear. Can you tell me
  your experience ?
  
  In other words: can I delete one Incremental without deleting more
  recents incrementals or if I delete the full does it upgrade the first
  incremental into full ?
  
  
  I *think* Bacula uses timestamps when doing incrementals. If you delete
  one incremental, you lose the files modified/created for that day.
  
  But if you delete the full, Bacula upgrades the next incremental to
  Full, as it finds no suitable Full to do the incremental for.
  
  
  --
  Beautiful is writing same markup. Internet Explorer 9 supports
  standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
  Spend less time writing and  rewriting code and more time creating great
  experiences on the web. Be a part of the beta today.
  http://p.sf.net/sfu/beautyoftheweb
  ___
  Bacula-users mailing list
  Bacula-users@lists.sourceforge.net
  https://lists.sourceforge.net/lists/listinfo/bacula-users
 
 


--
Download new Adobe(R) Flash(R) Builder(TM) 4
The new Adobe(R) Flex(R) 4 and Flash(R) Builder(TM) 4 (formerly 
Flex(R) Builder(TM)) enable the development of rich applications that run
across multiple browsers and platforms. Download your free trials today!
http://p.sf.net/sfu/adobe-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restore from a file listing

2010-10-04 Thread Graham Keeling
On Mon, Oct 04, 2010 at 12:00:12PM +0100, Rory Campbell-Lange wrote:
 We provide clients with a Bacula backup-to-tape service, which is
 complementary to our offsite backup services.
 
 As part of the backup-to-tape service we wish to audit each tape by
 checking that we can retrieve several files from each tape in the backup
 set.
 
 Our audit programme (a python script) produces a listing of files
 suitable for the audit in the format below. The listing is presently
 made to show the tape name, filename and base64 md5sum. Many of the file
 names have odd characters in them.
 
 At present I am doing a restore job by using option 3 and entering the
 job id as all our backups are full backups. Then I go into the restore
 console.
 
 For each file I want to retrieve I find I have to walk the directory
 tree to mark the file. 
 mark /survey/GTC/02_SURVEY/02 Graphics/A Exisiting Site 
 Images/GTC_40_ContactSheet-002.pdf 
 doesn't seem to work.

I don't know if this is helpful to you, but the way I do this (with a script)
is:
path=/your/path/to/file.abc
dir=${path%/*}
file=${path##*/}
(then in bconsole)
cd $dir
mark $file

Bear in mind, last time I checked, bconsole has a very eccentric way of
quoting things. And the 'cd' quoting is different to the 'mark' quoting.

For 'cd', you need to quote '\' and '' with '\'.

For 'mark':
'\' needs '\\\'.
'' needs '\'.
'*', '?' and '[' need '\\'.
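To make the approach above concrete, here is a minimal sketch (assuming a POSIX shell; the restore-session dialogue and the escaping rules described above are deliberately left out) that turns a list of full paths into cd/mark commands:

```shell
#!/bin/sh
# Sketch: read full paths (one per line) on stdin and emit the
# corresponding bconsole 'cd'/'mark' command pairs on stdout.

emit_marks() {
    while IFS= read -r path; do
        dir=${path%/*}      # everything before the last slash
        file=${path##*/}    # everything after the last slash
        printf 'cd %s\n' "$dir"
        printf 'mark %s\n' "$file"
    done
}

# Hypothetical usage: emit_marks < files.txt | bconsole
```

Feeding the output into a real bconsole restore session (choosing the job, 'done', 'yes', and so on) still has to be wrapped around this, and paths containing bconsole's special characters need the escaping described above.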



 Is there a simple and accurate way of providing a list of files of this
 sort to Bacula in order to mark them and proceed with a restore job?
 
 Advice gratefully received.
 
 Regards
 Rory
 
 
 ZA-09 : /survey/GTC/02_SURVEY/02 Graphics/A Exisiting Site 
 Images/GTC_40_ContactSheet-002.pdf  : BxtAuFFc/f1ad9KAu6QcTA
 ZA-09 : /survey/GTC/02_SURVEY/01 Draw/05 
 3d/_Mark/03_renders/elements/viewno02_VRay_RenderID.tif   : 
 +uXYLMX+dzF3tagX1HLxGA
 ZA-09 : /survey/GTC/02_SURVEY/01 Draw/05 
 3d/_Mark/03_renders/elements/viewno03_VRay_SampleRate.tif : 
 8mV7L15K2oD8Myl3RHGH1g
 ZA-09 : /survey/GTC/02_SURVEY/02 Graphics/A Exisiting Site Images/080605_site 
 visit/IMG_0153.JPG   : Bffxysn05Jf835pjn8EhWg
 ZA-09 : /survey/GTC/02_SURVEY/01 Draw/09 Details/A_DE_L.dgn   
  : 8n6FPZ8LUOvd2j0yhpe5Jw
 ZA-09 : /survey/GTC/02_SURVEY/02 Graphics/A Exisit Images/00_site 
 visit/IMG_0207.JPG   : Lc1l+Npa3fAWR7dG7UNtbw
 ZA-10 : /survey/USA2/Design/Graphics/99_SCM/Material/Elevation.pdf
  : cO4XMEdlCtdI7g9wL5B/Dg
 ZA-10 : /survey/USA2/Design/Graphics/99_SCM/Material/added to binder/ 
 this.pdf: KSNJEaQHmW0+xvrqjFFmog
 ZA-10 : /survey/USA2/Design/Graphics/99_A 
 Visit_mplan/Material/uerplan_A1_130208.indd  : kKBqVl5Dh9elOe1HeXqMDw
 ZA-10 : 
 /survey/USA2/Design/Graphics/99_B/material/Coloured/WR_MultiBay_with_Stair.pdf
  : Rk43I9cB+hkWAd+BHNrzkQ
 ZA-10 : /survey/USA2/Design/Graphics/99_C/Material/corner sketch.pdf  
  : 723kZeU7rY5l7620SWwS0w
 ZA-10 : /survey/USA2/Design/Graphics/99_phased/Material/Finished 
 JPEGs/phase 5.jpg : 5R+Mvs0JsQW+RtWUo6SIXg
 ZA-10 : /survey/USA2/Design/Graphics/99_SCM/Material/for 
 binder/elevations 4dec08.pdf  : G2AOwxxYs8PnuJuxpdJY3A
 ZA-10 : 
 /survey/USA2/Design/Graphics/99_SCM/Material/east_flatroof_sketch.pdf 
  : 8cOmdPhe+Hgkkxb60l+0og
 ZA-10 : /survey/USA2/Design/Graphics/99_SCM/Material/A_nnn_ 
 East-West_OR.pdf   : XrGzU07JjrWevaxWDUjMNQ
 ZA-10 : /survey/USA2/Design/Graphics/99_meeting/A___Typical_bay.pdf   
  : vM0DRH5xevICHlVxKg3elg
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN_Scheme updated/PDF/_LongSect250.tiff 
  : TySX9plPTq4uIy05iwY5fA
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN_DWG/IN/1527.dwg  
  : ew1rXI6UJoLmizY4wvIUrw
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0/DGN/0005.dgn
  : n040KyL8cX9crgpY9asLfg
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0/ff=/0004.dgn
  : V2jTeupRp3Yv1qmpteDnWw
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_Construction issue/15L9.hpgl
  : 8fOJdGO5SwaOMkxNi43kxg
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_ZAN/A_ZAN.pdf   
  : v+oAaYnWv+1Bjq5ezWgmlg
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_arch/1000_.000  
  : ODbF++krYYWMLsBmI/GgRQ
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_Engage  Plan/copy/3002.dwg 
  : O+haqF++WUp519AnX+q1uQ
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_DWGs/1515.bak   
  : Ca7aA7v95BNv+rK6HoyvYA
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_Construction issue southblock/Binder1.pdf   
  : K/iqrW28Z+Lemk4osfrnCQ
 
 -- 
 Rory Campbell-Lange
 r...@campbell-lange.net
 
 Campbell-Lange Workshop
 

Re: [Bacula-users] Bacula with two independent storage and pools

2010-09-24 Thread Graham Keeling
On Fri, Sep 24, 2010 at 04:02:43PM +0200, Richard Marnau wrote:
 Hi all,
 
 our backup server is in the same room as all the other servers, and just to 
 be safe I want to back up some servers onto
 separate usb disks. So far, so easy. The setup:
 
 [storage1] - [Pool1] - [JOB1,2,3,4,5,6]
 [storage2] - [Pool2] - [D-Job1,2,3,4,5,6]
 
 I did define a separate storage with a separate pool and created separate 
 jobs for each server. Everything is working so far
 but bacula is mixing up the archives when I want to do a restore. 
 
 
 === SNIP ===
 
 Full Recovery:
 Automatically selected FileSet: FullLinux
 +---+---+--++-++
 | JobId | Level | JobFiles | JobBytes   | StartTime   | 
 VolumeName |
 +---+---+--++-++
 |   463 | F |   91,974 | 19,934,664,291 | 2010-09-12 00:55:10 | Archiv1   
  | --- USB-Full-Archiv, wrong SD and Pool
 |   550 | D |   15,186 | 12,726,800,679 | 2010-09-19 23:21:42 | Vol0413   
  |
 |   558 | I |3,382 |572,511,925 | 2010-09-20 23:20:24 | Vol0421   
  |
 |   569 | I |7,297 |186,410,950 | 2010-09-21 23:20:51 | Vol0429   
  |
 |   577 | I |3,878 |594,435,445 | 2010-09-22 23:20:28 | Vol0437   
  |
 |   585 | I |2,634 |187,140,431 | 2010-09-23 23:20:37 | Vol0445   
  |
 +---+---+--++-++
 
 === SNAPP ===
 
 Archiv1 was the last full backup on the USB archive disk, not the last full 
 backup on the normal backup server.
 Couple of reasons why this is bad, but most importantly I'm way too lazy to swap 
 usb-drives each time a backup occurs ;-).
 
 Thanks for any hints...

I had lots of problems when I didn't set different MediaTypes for each Storage
that I was using. Perhaps that would help.
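For what it's worth, a minimal sketch of distinct Media Types in bacula-dir.conf (the names, address, and device names here are made up, not taken from the poster's setup); each Media Type must also match the one set on the corresponding storage daemon Device resource:

```
Storage {
  Name = InternalDisk
  Address = backup.example.com
  SDPort = 9103
  Password = "..."
  Device = InternalFileDevice
  Media Type = File-Internal
}

Storage {
  Name = UsbDisk
  Address = backup.example.com
  SDPort = 9103
  Password = "..."
  Device = UsbFileDevice
  Media Type = File-USB
}
```

With distinct Media Types, the director will only try to read a volume on a storage whose Media Type matches the one recorded for that volume.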


--
Nokia and ATT present the 2010 Calling All Innovators-North America contest
Create new apps  games for the Nokia N8 for consumers in  U.S. and Canada
$10 million total in prizes - $4M cash, 500 devices, nearly $6M in marketing
Develop with Nokia Qt SDK, Web Runtime, or Java and Publish to Ovi Store 
http://p.sf.net/sfu/nokia-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Exchange plugin with VirtualFulls ('truncating' logs)

2010-09-21 Thread Graham Keeling
On Tue, Sep 21, 2010 at 08:19:21AM +1000, James Harper wrote:
  Hello,
  
  I am testing backups of an Exchange 2003 server, with the plugin. I
 have a
  schedule of incrementals with occasional virtualfulls.
  
  The idea being that, after the first real full, doing a full backup is
 no
  longer required (unless I restore the whole Exchange database, because
  Exchange then demands a full backup).
 
 Or if you apply a microsoft update / hotfix. That's caught me out a few
 times in Backup Exec which crashes the exchange services when this
 happens.
 
  But, I have realised that the Exchange logs are never 'truncated'
 because
  there is never a real full backup.
  Looking at the plugin code and the documentation, the logs get
 'truncated' on
  a
  real full unless you give an option to the plugin for it not to
 happen.
 
 And a differential I think.
 
  
  As I understand it, this means that the logs will just build up and up
 on the
  Exchange server.
  
  Is there a way of getting the Exchange logs to 'truncate' when doing a
  virtualfull?
  Or does somebody have another suggestion?
  
 
  If you know a bit of C, you could do the following:
  
  Add an alwaystrunc_option member just after notrunconfull_option to
  exchange_fd_context_t in exchange-fd.h
 
 Init it to false in newPlugin
 
  Add something like:
  if (context->alwaystrunc_option) {
     context->truncate_logs = true;
  }
  to exchange-fd.c after switch(context->job_level), after
  case bEventStartBackupJob:, to override the context->truncate_logs flag
  despite whatever has been set prior.
 
  Add something like:
  else if (stricmp(option, "alwaystrunc") == 0){
     context->alwaystrunc_option = true;
  }
  in bEventBackupCommand for the option parsing.
 
 Once you do that, you should be able to append :alwaystrunc to the
 backup command to tell it to always truncate the logs.
 
 The reason for this complexity is that Exchange maintains its own
 'backup state'. It knows when the last full backup was done and
 determines what needs to be backed up itself, while Bacula expects to
 set those parameters itself too (via the 'since' parameter). By keeping
 the logs around on incremental we can still do a differential too
 because the logs aren't thrown away.
 
  The alwaystrunc option should probably be split out into 'always trunc
  on incremental' and 'always trunc on differential' flags for maximum
  flexibility, but the above should be a good starting point.
 
 If the above suggestions are not something you feel able to do I might
 be able to put a patch together, but not this week.

OK, that sounds easy enough.

Attached is my patch to bacula-5.0.3 (completely untested at the moment).

I've made two new options:
truncondiff
trunconincr
I will report on whether it works in a while.
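Assuming the patch works as described, each new option would be appended to the plugin string in the FileSet, following the option-parsing convention above; something like this (hypothetical resource names, untested):

```
FileSet {
  Name = ExchangeSet
  Include {
    Options { signature = MD5 }
    Plugin = "exchange:/@EXCHANGE/Microsoft Information Store:trunconincr"
  }
}
```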


--
Start uncovering the many advantages of virtual appliances
and start using them to simplify application deployment and
accelerate your shift to cloud computing.
http://p.sf.net/sfu/novell-sfdev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Exchange plugin with VirtualFulls ('truncating' logs)

2010-09-20 Thread Graham Keeling
Hello,

I am testing backups of an Exchange 2003 server, with the plugin. I have a
schedule of incrementals with occasional virtualfulls.

The idea being that, after the first real full, doing a full backup is no
longer required (unless I restore the whole Exchange database, because Exchange
then demands a full backup).

But, I have realised that the Exchange logs are never 'truncated' because
there is never a real full backup.
Looking at the plugin code and the documentation, the logs get 'truncated' on a
real full unless you give an option to the plugin for it not to happen.

As I understand it, this means that the logs will just build up and up on the
Exchange server.

Is there a way of getting the Exchange logs to 'truncate' when doing a
virtualfull?
Or does somebody have another suggestion?

Thanks.


--
Start uncovering the many advantages of virtual appliances
and start using them to simplify application deployment and
accelerate your shift to cloud computing.
http://p.sf.net/sfu/novell-sfdev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Reschedule a VirtualFull?

2010-07-06 Thread Graham Keeling
On Mon, Jul 05, 2010 at 12:55:24PM -0400, Dan Langille wrote:
 resending with cc to list

 On 7/5/2010 12:26 PM, Graham Keeling wrote:
 On Mon, Jul 05, 2010 at 11:58:31AM -0400, Dan Langille wrote:
 On 7/5/2010 11:37 AM, Graham Keeling wrote:
 Hello,
 I have a schedule like this in my bacula-dir.conf:

 Schedule {
 Name = ScheduleX
 Run = Level=Incremental mon at 18:00
 Run = Level=Incremental tue at 18:00
 Run = Level=Incremental wed at 18:00
 Run = Level=Incremental thu at 18:00
 Run = Level=Incremental fri at 18:00
 Run = Level=Incremental sat at 18:00
 Run = Level=Incremental sun at 18:00
 Run = Level=VirtualFull 1st sun at 21:00
 }

 FYI: with this schedule you'll get both a VirtualFull and an Incremental
 on the 1st Sun.  Just saying...  To avoid that, you could do this:

  Run = Level=Incremental 2nd-5th sun at 18:00

 If I did that, I would risk losing data because I wouldn't be doing any real
 backup on the 1st Sunday.

 ahh yes.

 Why are you worried about not rescheduling inc?

If one is missed, I will get one the next day.
If a virtual is missed, I will not get another until next month.

Also, if bacula sees the virtuals and the incrementals as identical except for
the level, I'm guessing that it would be possible for incremental reschedules
to conflict with the virtual reschedules. There are options for deleting
'duplicates', for example, that won't distinguish between them. I notice
that you cannot set a priority on a schedule entry - you can only do it for
the job.

Ideally, I'd like to say something like this:

Job {
...
VirtualTime = 1 month
...
}

Which would make bacula insert a virtual full after a month, at a convenient
time of its own choosing.
It would also solve the problem where bacula misses the virtual because it
wasn't running on that day.


--
This SF.net email is sponsored by Sprint
What will you do first with EVO, the first 4G phone?
Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Reschedule a VirtualFull?

2010-07-05 Thread Graham Keeling
Hello,
I have a schedule like this in my bacula-dir.conf:

Schedule {
  Name = ScheduleX
  Run = Level=Incremental mon at 18:00
  Run = Level=Incremental tue at 18:00
  Run = Level=Incremental wed at 18:00
  Run = Level=Incremental thu at 18:00
  Run = Level=Incremental fri at 18:00
  Run = Level=Incremental sat at 18:00
  Run = Level=Incremental sun at 18:00
  Run = Level=VirtualFull 1st sun at 21:00
}

And a Job that looks like this:

Job {
  Name = JobX
  Type = Backup
  Schedule = ScheduleX
  Pool = JobXPool
  Priority = 10
  Client = ClientX-fd
  FileSet = FileSetX
  Full Backup Pool = FullPoolX
  Incremental Backup Pool = IncrPoolX
  Maximum Concurrent Jobs = 1
  Storage = StorageX
  Accurate = yes
}

I want to make sure that I get a VirtualFull every month, so I think I want
to 'Reschedule' it if it gets an error.
But I don't want to 'Reschedule' any Incrementals.

Is this possible?

Thanks,
Graham.


--
This SF.net email is sponsored by Sprint
What will you do first with EVO, the first 4G phone?
Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Reschedule a VirtualFull?

2010-07-05 Thread Graham Keeling
On Mon, Jul 05, 2010 at 11:58:31AM -0400, Dan Langille wrote:
 On 7/5/2010 11:37 AM, Graham Keeling wrote:
 Hello,
 I have a schedule like this in my bacula-dir.conf:

 Schedule {
Name = ScheduleX
Run = Level=Incremental mon at 18:00
Run = Level=Incremental tue at 18:00
Run = Level=Incremental wed at 18:00
Run = Level=Incremental thu at 18:00
Run = Level=Incremental fri at 18:00
Run = Level=Incremental sat at 18:00
Run = Level=Incremental sun at 18:00
Run = Level=VirtualFull 1st sun at 21:00
 }

 FYI: with this schedule you'll get both a VirtualFull and an Incremental  
 on the 1st Sun.  Just saying...  To avoid that, you could do this:

 Run = Level=Incremental 2nd-5th sun at 18:00

If I did that, I would risk losing data because I wouldn't be doing any real
backup on the 1st Sunday.

 And a Job that looks like this:

 Job {
Name = JobX
Type = Backup
Schedule = ScheduleX
Pool = JobXPool
Priority = 10
Client = ClientX-fd
FileSet = FileSetX
Full Backup Pool = FullPoolX
Incremental Backup Pool = IncrPoolX
Maximum Concurrent Jobs = 1
Storage = StorageX
Accurate = yes
 }

 I want to make sure that I get a VirtualFull every month, so I think I want
 to 'Reschedule' it if it gets an error.
 But I don't want to 'Reschedule' any Incrementals.

 Is this possible?

 The only way I can think of is to have one job for Inc and another for  
 the Full.

Thanks for the suggestion.
I am a bit dubious as to whether it can really work like that.

If I came to do a restore (or the virtualfull), bacula would have to know to
group the source data, perhaps by client/fileset, rather than by job name.
Perhaps I just need to try it.


A second question:
What can I do in the case where there is a power cut on the first sunday? I
don't think that bacula is going to notice that a virtualfull has been missed.


--
This SF.net email is sponsored by Sprint
What will you do first with EVO, the first 4G phone?
Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual Full backups

2010-06-09 Thread Graham Keeling
On Wed, Jun 09, 2010 at 12:05:34PM +0400, Alexander Pyhalov wrote:
 Hello.
 I have some questions about Virtual Full backups. As I understand it, VF 
 backups should use a separate volume pool. We are going to use bacula for 
 making backups of a lot of data (2 TB, about 10 servers). In these 
 circumstances I'd like to avoid making Full backups. I'd like to make one 
 initial Full backup and after that only Incremental and Virtual Full.
 But how can I achieve this? Virtual Full uses a separate pool, and as far as 
 I know we can't use one pool for reading one VF backup and writing another 
 one. So I assume I must make a VF and, before making the next VF backup, the 
 old one should be moved to another pool (on separate storage).
 What is the best way to do it? We use file storage. Should I modify my 
 Job to run an after-job script which moves the VF volume to the other pool? 
 But how will I recognize that it is a VF backup and not an incremental backup 
 for the same job?

Hello,
I have been using Virtual Full for a while now. Doing something like the
following seems to work for me (I don't recall having to do anything else
special):

Schedule {
  Name = Daily
...
  Run = Level=Incremental mon at 18:50
  Run = Level=Incremental tue at 18:50
...
  Run = Level=VirtualFull sun at 18:50
}

Pool {
  Name = FullPool
  Maximum Volume Jobs = 1
...
}

Pool {
  Name = IncrPool
...
  Maximum Volume Jobs = 1
  NextPool = FullPool
}

Job {
...
  Schedule = Daily
  Full Backup Pool = FullPool
  Incremental Backup Pool = IncrPool
...
}


--
ThinkGeek and WIRED's GeekDad team up for the Ultimate 
GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the 
lucky parental unit.  See the prize list and enter to win: 
http://p.sf.net/sfu/thinkgeek-promo
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How to find all file .avi from full tape

2010-05-28 Thread Graham Keeling
On Fri, May 28, 2010 at 10:47:00AM +0200, Simone Martina wrote:
 Hi all,
 some of my colleagues tend to save non-work files (like large avi 
 files) in a shared directory, so my bacula backup Job takes a lot of time 
 saving this useless rubbish... I would like to find the full path of 
 any file whose name contains avi or AVI. Does bconsole have a command 
 for this type of query, or should I query mysql directly?

I had to do something similar yesterday. I don't know how to do it with
bconsole.

I ended up running something like this...

SELECT DISTINCT p.Path, fn.Name FROM Path p, File f, Filename fn
WHERE p.PathId=f.PathId AND fn.FilenameId=f.FilenameId AND
(p.Path LIKE '%avi%' OR fn.Name LIKE '%avi%');

...or maybe this...

SELECT DISTINCT p.Path, fn.Name FROM Path p, File f, Filename fn
WHERE p.PathId=f.PathId AND fn.FilenameId=f.FilenameId AND
fn.Name LIKE '%.avi';

You have to add more table joins to get the job and client out.
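As a sketch (untested) against the standard Bacula catalog schema, joining in the Job and Client tables would look something like:

```
SELECT DISTINCT c.Name AS Client, j.Name AS Job, p.Path, fn.Name
FROM Path p, File f, Filename fn, Job j, Client c
WHERE p.PathId = f.PathId
  AND fn.FilenameId = f.FilenameId
  AND j.JobId = f.JobId
  AND c.ClientId = j.ClientId
  AND fn.Name LIKE '%.avi';
```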


--

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How to find all file .avi from full tape

2010-05-28 Thread Graham Keeling
On Fri, May 28, 2010 at 10:12:51AM +0100, Graham Keeling wrote:
 On Fri, May 28, 2010 at 10:47:00AM +0200, Simone Martina wrote:
  Hi all,
  some of my colleagues tend to save non-work files (like large avi 
  files) in a shared directory, so my bacula backup Job takes a lot of time 
  saving this useless rubbish... I would like to find the full path of 
  any file whose name contains avi or AVI. Does bconsole have a command 
  for this type of query, or should I query mysql directly?
 
 I had to do something similar yesterday. I don't know how to do it with
 bconsole.
 
 I ended up running something like this...
 
 SELECT DISTINCT p.Path, fn.Name FROM Path p, File f, Filename fn
 WHERE p.PathId=f.PathId AND fn.FilenameId=f.FilenameId AND
 (p.Path LIKE '%avi%' OR fn.Name LIKE '%avi%');
 
 ...or maybe this...
 
 SELECT DISTINCT p.Path, fn.Name FROM Path p, File f, Filename fn
 WHERE p.PathId=f.PathId AND fn.FilenameId=f.FilenameId AND
 fn.Name LIKE '%.avi';
 
 You have to add more table joins to get the job and client out.

...actually, the JobId is in the File table, so that is easy:

SELECT DISTINCT f.JobId, p.Path, fn.Name FROM Path p, File f, Filename fn
 WHERE p.PathId=f.PathId AND fn.FilenameId=f.FilenameId AND
 fn.Name LIKE '%.avi';


--

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Windows 2008/2008r2 Server backup

2010-05-11 Thread Graham Keeling
On Tue, May 11, 2010 at 09:51:52AM +1000, James Harper wrote:
 With full VSS support, VSS defines the files that make up the system  
 state backup - it's a flag on the writer. Bacula handles junction  
 points perfectly.

So, to get a backup with Windows 2008 that includes the system state, you need
to set a flag on the writer.
Do you know how to set that flag?

 Sent from my iPhone
 
 On 11/05/2010, at 4:13, Kevin Keane subscript...@kkeane.com wrote:
 
  There is no such thing as system state backup any more in Windows  
  2008. It's always the whole C: drive. I'm not sure how well bacula  
  handles it in the end. There also is the issue that Windows 2008  
  relies heavily on junction points, which bacula doesn't handle well.
 
  I'm using Windows backup to an iSCSI drive, and then use bacula to  
  back up a snapshot of that iSCSI volume.
 
  -Original Message-
  From: Michael Da Cova [mailto:mdac...@equiinet.com]
  Sent: Monday, May 10, 2010 9:47 AM
  To: bacula-users@lists.sourceforge.net
  Subject: [Bacula-users] Windows 2008/2008r2 Server backup
 
  Hi
 
  anyone have any tips or recommendations on how to back up and restore the
  windows 2008 system state? Do you need to if using VSS?
 
  Michael
 
 
 
 
  ___
  Bacula-users mailing list
  Bacula-users@lists.sourceforge.net
  https://lists.sourceforge.net/lists/listinfo/bacula-users
 
 


--

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Windows 2008/2008r2 Server backup

2010-05-11 Thread Graham Keeling
On Tue, May 11, 2010 at 01:58:07PM +0200, Koldo Santisteban wrote:
 No,
 As you can see in my last mail, you need to run scripts before bacula
 backup. In Windows 2003 too.

So what is the flag that James Harper was talking about?

With full VSS support, VSS defines the files that make up the system
state backup - it's a flag on the writer.


 On Tue, May 11, 2010 at 1:33 PM, Michael Da Cova mdac...@equiinet.comwrote:
 
   Hi all
 
 
 
  Sorry for the top post, and thanks for replying so far but I just need a
  simple question answered if you don’t mind
 
 
 
  Does Bacula with VSS support backing up the Windows 2008 system state?
 
 
 
  Michael
 
 
--
 
  *From:* Koldo Santisteban [mailto:ksantiste...@gmail.com]
  *Sent:* 11 May 2010 11:50 AM
  *To:* James Harper
  *Cc:* mdac...@equiinet.com; bacula-users@lists.sourceforge.net
 
  *Subject:* Re: [Bacula-users] Windows 2008/2008r2 Server backup
 
 
 
  Hello
 
  I use this little script on a weekly basis before the bacula backup:
 
  wbadmin delete systemstatebackup -backupTarget:e: -keepVersions:0 -quiet
  wbadmin start systemstatebackup -backupTarget:e: -quiet
 
  Wbadmin saves several system states, and in my case the last one is
  enough (Bacula stores several copies).
 
  Regarding the errors/warnings on junction points: Bacula tries to back up
  folders that don't physically exist.
 
  In the Job log I can see errors/warnings like this (with VSS enabled):
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/AppData/Local/Temporary Internet Files: ERR=Access is
  denied.
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/Application Data: ERR=Access is denied.
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/Cookies: ERR=Access is denied.
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/Documents/My Music: ERR=Access is denied.
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/Documents/My Pictures: ERR=Access is denied.
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/Documents/My Videos: ERR=Access is denied.
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/Local Settings: ERR=Access is denied.
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/My Documents: ERR=Access is denied.
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/NetHood: ERR=Access is denied.
 
  servername JobId 643:  Cannot open c:/Documents and
  Settings/username/NTUSER.DAT: ERR=A required privilege is not held by the
  client.
  .
  servername JobId 643:  Cannot open c:/Documents and
  Settings/username/ntuser.dat.LOG1: ERR=A required privilege is not held by
  the client.
  .
  servername JobId 643:  Cannot open c:/Documents and
  Settings/username/NTUSER.DAT{7d5ec63a-c5bc-11dc-a02b-0019bbe6a65a}.TM.blf:
  ERR=A required privilege is not held by the client.
  .
  servername JobId 643:  Cannot open c:/Documents and
  Settings/username/NTUSER.DAT{7d5ec63a-c5bc-11dc-a02b-0019bbe6a65a}.TMContainer0001.regtrans-ms:
  ERR=A required privilege is not held by the client.
  .
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/PrintHood: ERR=Access is denied.
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/Recent: ERR=Access is denied.
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/SendTo: ERR=Access is denied.
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/Start Menu: ERR=Access is denied.
 
  servername JobId 643:  Could not open directory c:/Documents and
  Settings/username/Templates: ERR=Access is denied.
 
 
   I have checked that c:\users is saved correctly (it is the real folder
   name), so I think this is a bug in Bacula (it is noisy, because the job's
   finish status is not OK).
   I have not had time to do a full restore, and I don't know whether a Bacula
   backup including the Windows System State is possible on Windows 2008. I
   would be very grateful if anyone who has tested it could post their
   conclusions.
  Regards
 
  On Tue, May 11, 2010 at 12:15 PM, James Harper 
  james.har...@bendigoit.com.au wrote:
 
    Regarding the junction points, I get hundreds of warnings each time I
    make a backup with Bacula. I don't know how to avoid this and, like
    Michael, I am very interested in how to solve it...
  
 
  What are the warnings? Is it the one about 'different filesystem'?
 
  James
 
 
 


Re: [Bacula-users] VirtualFull mysql query blocks other jobs for a long time

2010-04-13 Thread Graham Keeling
Hello,

I now believe that the 'taking hours' problem that I was having was
down to having additional indexes on my File table, as Eric suggested.

I am using mysql-5.0.45.

I had these indexes:
JobId
JobId, PathId, FilenameId
PathId
FilenameId

Now I have these indexes:
JobId
JobId, PathId, FilenameId

The queries on my 'real' database now take about a second, rather than half a
day.

A suggestion - perhaps the following comment in src/cats/make_mysql_tables.in
could be changed to include a warning:

#
# Possibly add one or more of the following indexes
#  to the above File table if your Verifies are
#  too slow.
#
#  INDEX (PathId),
#  INDEX (FilenameId),
#  INDEX (FilenameId, PathId)
#  INDEX (JobId),
#



However, I also tested the 3.0.3 and 5.0.1 queries using Eric's test script and
the much larger database that it generates.
I found that there is a definite slowdown.

Results from do_bench(10,13, 220). In this case, the slowdown is about
15%.

new|220|220|312
old|220|220|268
graham|220|220|158

Result 'graham' is the time taken by a query that I came up with that looks
similar to the PostgreSQL query, but uses the MySQL GROUP BY trick that is
frowned upon:

SELECT MAX(JobTDate) AS JobTDate, JobId, FileId, FileIndex, PathId,
       FilenameId, LStat, MD5
  FROM (SELECT JobTDate, JobId, FileId, FileIndex, PathId, FilenameId,
               LStat, MD5
          FROM (SELECT FileId, JobId, PathId, FilenameId, FileIndex,
                       LStat, MD5
                  FROM File
                 WHERE JobId IN ($jobid)
                 UNION ALL
                SELECT File.FileId, File.JobId, PathId, FilenameId,
                       File.FileIndex, LStat, MD5
                  FROM BaseFiles JOIN File USING (FileId)
                 WHERE BaseFiles.JobId IN ($jobid)
               ) AS T JOIN Job USING (JobId)
         ORDER BY FilenameId, PathId, JobTDate DESC) AS U
 GROUP BY PathId, FilenameId
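The reason the trick is frowned upon is that SQL leaves the non-aggregated
columns undefined, so the result depends on the engine. A correlated subquery
expresses "latest row per (PathId, FilenameId) group" with well-defined
results. Here is a toy sketch using Python's sqlite3 (the schema only mirrors
a few File columns and is illustrative, not Bacula's real schema):

```python
import sqlite3

# Toy illustration (not Bacula's real schema): for each (PathId, FilenameId)
# group, pick the row from the job with the highest JobTDate. The correlated
# subquery gives a well-defined answer, unlike ORDER BY + GROUP BY.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE File (FileId INTEGER, JobTDate INTEGER,
                   PathId INTEGER, FilenameId INTEGER, LStat TEXT);
INSERT INTO File VALUES (1, 100, 1, 1, 'old'),
                        (2, 200, 1, 1, 'new'),
                        (3, 100, 2, 1, 'only');
""")
rows = conn.execute("""
    SELECT PathId, FilenameId, LStat
      FROM File AS f
     WHERE JobTDate = (SELECT MAX(JobTDate) FROM File
                        WHERE PathId = f.PathId
                          AND FilenameId = f.FilenameId)
     ORDER BY PathId, FilenameId
""").fetchall()
print(rows)  # [(1, 1, 'new'), (2, 1, 'only')]
```

The same shape works in MySQL and PostgreSQL, though whether it is fast
enough on a large File table would need testing against real data.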


--
Download Intel® Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] How to turn off 'ask the operator'?

2010-04-13 Thread Graham Keeling
Hello,

I am using disk based backups and bacula-5.0.1.

When the disk fills up, bacula gets stuck in a state where a job 'is
waiting for a mount request'. Presumably, it wants the system operator to do
something.

However, at this point, the system operator cannot do anything but cancel the
job.

I would like to be able to turn off the 'ask the operator' feature so that
it just cancels the job straight away.

Is there an option for doing this?

Thanks.




Re: [Bacula-users] bacula 5.0.1 and db issues - please, share your experience

2010-04-13 Thread Graham Keeling
On Tue, Apr 13, 2010 at 02:42:15PM +0200, Koldo Santisteban wrote:
 Hello
 I am working with bacula 5.0.1. At first I set up the server with
 bacula 5.0.1 and MySQL but, when I needed to restore, I found that the
 build-tree process takes 10-12 hours (or more). I have read everything
 about this issue and I can see that no magic solution exists. In order to
 solve it, I migrated from MySQL to PostgreSQL, but I see the same symptoms.
 Perhaps it works better but, in my opinion, this is not serious for a
 production environment.
 If possible, I would appreciate it if people shared their experience with
 the latest Bacula version and this kind of issue. A couple of months ago I
 finished deploying bacula in my environment, but now I am considering
 rolling it all back.
 Any comment regarding this case is welcome.
 Regards

Hello,

I had similar problems with virtual and accurate backups until I made sure
that the indexes on my mysql database were the bacula defaults.
In particular, I had these indexes on my File table:

JobId
JobId, PathId, FilenameId
PathId
FilenameId

Once I had removed the PathId and FilenameId indexes, my queries changed from
taking many hours to taking about a second.

To check these on your database:
Log into mysql.
use bacula;
show indexes from File;

If you have any extra indexes, you can drop them like this:

drop index <index name> on File;
e.g: drop index PathId on File;

If you need to add one:

create index <index name> on File (<list of fields>);
e.g: create index JobId on File (JobId);
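For a self-contained way to experiment with this housekeeping, here is a
minimal sketch using Python's sqlite3 (so it runs without a MySQL server;
note the syntax differs: sqlite uses plain `DROP INDEX name` rather than
`DROP INDEX name ON File`). The index names are illustrative:

```python
import sqlite3

# Portable sketch of the index housekeeping described above. On MySQL the
# equivalents are "SHOW INDEXES FROM File" and "DROP INDEX PathId ON File".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE File (JobId INTEGER, PathId INTEGER, "
             "FilenameId INTEGER)")
conn.execute("CREATE INDEX JobId_idx ON File (JobId)")
conn.execute("CREATE INDEX JobId_PathId_FilenameId_idx "
             "ON File (JobId, PathId, FilenameId)")
conn.execute("CREATE INDEX PathId_idx ON File (PathId)")  # an "extra" index

def index_names(c):
    # sqlite_master lists every named index on the table, in creation order
    return [r[0] for r in c.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type='index' AND tbl_name='File'")]

print(index_names(conn))               # all three indexes
conn.execute("DROP INDEX PathId_idx")  # remove the non-default index
print(index_names(conn))               # only the two default-style indexes
```

On a production MySQL catalog, dropping or creating an index on a large File
table can lock it for a while, so this is best done outside the backup window.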




Re: [Bacula-users] bacula 5.0.1 and db issues - please, share your experience

2010-04-13 Thread Graham Keeling
On Tue, Apr 13, 2010 at 02:03:39PM +0100, Graham Keeling wrote:
 On Tue, Apr 13, 2010 at 02:42:15PM +0200, Koldo Santisteban wrote:
  Hello
  I am working with bacula 5.0.1. At first I set up the server with
  bacula 5.0.1 and MySQL but, when I needed to restore, I found that the
  build-tree process takes 10-12 hours (or more). I have read everything
  about this issue and I can see that no magic solution exists. In order to
  solve it, I migrated from MySQL to PostgreSQL, but I see the same symptoms.
  Perhaps it works better but, in my opinion, this is not serious for a
  production environment.
  If possible, I would appreciate it if people shared their experience with
  the latest Bacula version and this kind of issue. A couple of months ago I
  finished deploying bacula in my environment, but now I am considering
  rolling it all back.
  Any comment regarding this case is welcome.
  Regards
 
 Hello,
 
 I had similar problems with virtual and accurate backups until I made sure
 that the indexes on my mysql database were the bacula defaults.
 In particular, I had these indexes on my File table:
 
 JobId
 JobId, PathId, FilenameId
 PathId
 FilenameId
 
 Once I had removed the PathId and FilenameId indexes, my queries changed from
 taking many hours to taking about a second.

For clarity, I now have these indexes on my File table:
JobId
JobId, PathId, FilenameId

 
 To check these on your database:
 Log into mysql.
 use bacula;
 show indexes from File;
 
 If you have any extra indexes, you can drop them like this:
 
 drop index <index name> on File;
 e.g: drop index PathId on File;
 
 If you need to add one:
 
 create index <index name> on File (<list of fields>);
 e.g: create index JobId on File (JobId);
 
 




Re: [Bacula-users] Fatal error: backup.c:892 Network send error to SD. ERR=Connection reset by peer

2010-04-12 Thread Graham Keeling
On Sun, Apr 11, 2010 at 09:32:43AM -0500, Jon Schewe wrote:
 I got it to work again last night. Changing the firewall time outs
 didn't help. What fixed it was turning off Accurate backups.

Ah, so possibly bacula spent long enough stuck doing an accurate query in the
catalog that the firewall connection timed out.
Are you using mysql and bacula-5.0.1?




Re: [Bacula-users] VirtualFull mysql query blocks other jobs for a long time

2010-04-09 Thread Graham Keeling
On Thu, Apr 08, 2010 at 12:29:05PM -0700, ebollengier wrote:
 Graham Keeling wrote:
  
  On Thu, Apr 08, 2010 at 07:44:14AM -0700, ebollengier wrote:
  
  Hello,
  
  
  Graham Keeling wrote:
   
   Hello,
   
   I'm still waiting for my test database to fill up with Eric's data
   (actually,
   it's full now, but generating the right indexes is taking lots of
  time).
   
   
   But, I have another proposed solution, better than the last one I made.
   
    My previous solution was still taking a very very long time for a backup
    of a particular client that I had. Removing mention of BaseFiles did not
    help for this client.
   
   However, the following did, and it doesn't break Base jobs.
   
   Eric, I would appreciate it if you could give this a go on your test
   machine.
   
    It removes the nasty join on JobTDate by replacing it with a join on
    JobId (which makes more sense, and is also an index on Job and File).
   
  
  It's not possible: for example, when you bscan a volume or run a copy,
  old records with an old JobTDate get new FileIds or JobIds. You can't
  trust this field. It will work in many situations, but it will also fail
  in a horrible way in many others...
 
  The first version (3.0.3) was using FileId, and it was quite good, but
  it could produce wrong results in some cases, and for restore you don't
  have a choice: you need the exact one.
  
  I don't understand this at all.
  If you cannot trust the JobIds or FileIds in the File table, then the
  postgres
  query is also broken. The postgres query doesn't even mention JobTDate.
  In fact, the postgres query is using StartTime to do the ordering.
  
 
 And JobTDate is equivalent to StartTime (can be changed in PostgreSQL or in
 MySQL)

I still don't understand your original point.
If you cannot trust the JobIds or FileIds in the File table, then the postgres
query is also broken.

To clarify, can you tell me which fields in the File table you can trust?

   This also means it can get rid of the outer WHERE 'BaseJobId' OR
  'JobId'
   that I
   was complaining about before.
   The correct JobId is chosen with MAX(JobTDate) by ordering by JobTDate
   DESC on
   the innermost select.
   
   
  
  I would love that, it's what DISTINCT ON() on PostgreSQL does. But,
  unfortunately, in SQL you can only select fields that have a group
  function (like MAX, MIN, AVG) and fields present in the GROUP BY. (We can
  also take FileIndex, MD5 and LStat at the same time.)
  
  from http://dev.mysql.com/doc/refman/5.0/en/group-by-hidden-columns.html
   When using this feature, all rows in each group should have the same
   values for the columns that are omitted from the GROUP BY part. The
   server is free to return any value from the group, so the results are
   indeterminate unless all values are the same.
  
  This is not only a MySQL gotcha, it's an SQL property. Unless I'm
  misreading this paragraph, I'm sure that we can't use it :-(
  
  OK, I understand what you are saying here.
  
  I appreciate your help, and I hope that we will find a solution for those
  that have
  this performance problem (that I can't reproduce myself).
  
  I am probably out of ideas now, other than:
  a) reverting to 3.0.3,
  b) reverting to the 3.0.3 queries, or
  c) switching to postgres (with all the horrible migration problems that
  will
  cause).
  
 
 Even if we found a workaround for MySQL, Postgres will stay far faster
 (for the 2M-file query, postgres was about 12s, MySQL 3.0.3 at 60s and
 MySQL 5.0.1 at 90s).
 
 Can you confirm that the BaseJob doesn't change your timing ?

I think that I must have made some sort of mistake when I was testing
that. I do not believe that it changes my timing enough to help with the
problem that I have.

I also think that there must be some kind of mistake with your test program,
because the slowdown between the 3.0.3 query and the 5.0.1 query is very,
very large on my real database.

I will attempt to prove this soon (I still haven't set up your test database
properly).




Re: [Bacula-users] VirtualFull mysql query blocks other jobs for a long time

2010-04-09 Thread Graham Keeling
On Fri, Apr 09, 2010 at 05:27:44AM -0700, ebollengier wrote:
 I'm really thinking that the problem is on the MySQL side (bad version
 perhaps), or on your
 modifications (my tests shows that with a FilenameId, PathId index, results
 are 10 times slower than
 with the default indexes)
 
 What version of MySQL are you using ? (and on which OS)

I've been testing this with the default bacula indexes.

I have:
Kernel 2.4.33.3, mysql 5.0.45.

The reporter of bug 1472 (the same problem), mnalis, has:
Kernel 2.6.32+23, mysql 5.1.43-1~bpo50+1.

What are you using?




Re: [Bacula-users] VirtualFull mysql query blocks other jobs for a long time

2010-04-08 Thread Graham Keeling
On Wed, Apr 07, 2010 at 09:58:51AM -0700, ebollengier wrote:
 
 
 ebollengier wrote:
  
  
  Graham Keeling wrote:
  
  On Wed, Apr 07, 2010 at 08:22:09AM -0700, ebollengier wrote:
  I tweaked my test to compare both queries, and it shows no difference
  with
  and without base job part... If you want to test queries on your side
  with
  your data, you can download my tool (accurate-test.pl) on
  http://bacula.git.sourceforge.net/git/gitweb.cgi?p=bacula/docs;a=tree;f=docs/techlogs;hb=HEAD
  
  If you can tweak my script to reproduce your problem, i would be able to
  fix
  things.
  
  http://old.nabble.com/file/p28166612/diff-with-without-basejob.png 
  
  I'm currently running your script to generate the test database. I think
  that is going to take a long time, so I'll leave it overnight.
  
  
  
  This is your first problem, on my server (just a workstation), it takes
  less
  than 10s to add 200,000 records...
  
  
 
 In fact, it's 4 seconds here...
 
 JobId=15 files=2
 Insert takes 4secs for 20 records
 JobId=16 files=5
 Insert takes 11secs for 50 records
 JobId=20 files=20
 Insert takes 43secs for 200 records

I was running these three simultaneously, as described in the script.
I assume that fill_table.pl is the same as docs_techlogs_accurate-test.pl.

# filename=1 ./fill_table.pl 
# path=1 ./fill_table.pl 
# file=1 ./fill_table.pl 

However, I was also not running it on the machine that I did my original tests
on because it didn't have pwgen and the perl DBD-mysql thing installed.
But I have installed them now, so I will switch back to that machine.
Running one instance looks comparable with yours:

tserv tmp # file=1 docs_techlogs_accurate-test.pl 
JobId=1 files=2
Insert takes 9secs for 20 records
JobId=2 files=2
Insert takes 6secs for 20 records
JobId=3 files=2
Insert takes 8secs for 20 records
JobId=4 files=2

(but I think it is still going to take some time)



