Re: [Bacula-users] New release Bacularis 1.1.0

2022-08-29 Thread John Lockard
They didn't change the name, it's a fork.

On Mon, Aug 29, 2022 at 8:16 AM Elias Pereira  wrote:

> Hello Marcin,
>
> Sorry for the question, but why did you change the name from "baculum" to
> "bacularis"? :D
>
> On Fri, Aug 26, 2022 at 6:14 PM Marcin Haba  wrote:
>
>> Hello Everybody,
>>
>> We are pleased to let you know that a new version of Bacularis has been
>> released.
>>
>> This is a new feature and bug fix release. One of the most significant
>> changes is an improved Bacula configuration experience, particularly in
>> the configuration forms. Another is a new graphical "job over time"
>> report that shows all job executions together in a full-month view. Also
>> new is dark mode support: users can now switch between light and dark
>> modes in both the web interface and the API panel.
>>
>> We also refined and optimized data loading in the web interface,
>> especially in the job tables. This performance improvement will be most
>> noticeable for users with many jobs in their Bacula environment.
>>
>> On the visual side, besides dark mode, we also made several improvements
>> to the buttons.
>>
>> Below you can find a full list of changes.
>>
>> Bacularis Web:
>>  - Add dark mode support
>>  - Add to job reports new job over time report
>>  - Optimize loading job history table
>>  - Speed up loading job tables and job graphs data
>>  - Many configuration improvements
>>  - Preserve selected values between switching show/hide all directives
>> mode
>>  - Load JobDefs values automatically to Job resource if JobDefs selected
>>  - Create a new control to handle JobDefs selection
>>  - Add simple checkbox control
>>  - Improve configuring messages resource
>>  - Make working requirements page
>>  - Fix required fields in Job resource
>>  - Fix setting in Messages resource VolMgmt message type
>>  - Fix saving Job with Runscript resource without command and console
>> directives defined
>>  - Fix overwriting TOTP settings when user account is edited
>>  - Fix icon visibility in dark mode button on small screens
>>  - Fix PHP error when trying load LDAP user list from not working LDAP
>> server
>>  - Fix template error while loading schedule control on job view page
>>  - Update Polish translations
>>  - Add missing license statement
>>  - Update text in LICENSE file
>>
>> Bacularis API:
>>  - Add dark mode support
>>  - Add to job endpoints sched/start/end/realend times in Unix timestamp
>> format
>>  - Update Polish translations
>>  - Make working requirements page
>>  - Create new column properties dynamically
>>  - Fix icon visibility in dark mode button on small screens
>>  - Update text in LICENSE file
>>
>> Bacularis Common:
>>  - Add dark mode support
>>  - Improve general buttons view
>>  - Add to install script parameter to set web server config directory
>>  - Add init API and Web pages to index file
>>  - Add missing license statement
>>  - Update text in LICENSE file
>>
>> Useful links:
>>  Movie with the new job report and dark mode:
>> https://www.youtube.com/watch?v=B4SFKKZbBFQ
>>  Release announcement:
>> https://bacularis.app/news/34/36/New-release-Bacularis-1.1.0/d,Bacularis%20news%20details
>>  Documentation: https://bacularis.app/doc
>>  Online demo: https://demo.bacularis.app
>>
>> Binary packages for 1.1.0 are already available in the repositories for
>> popular Linux distributions. The Docker container images on Docker Hub
>> have also been updated to 1.1.0.
>>
>> We wish you successful installations and upgrades.
>>
>> Best regards,
>> Marcin Haba (gani)
>>
>> --
>> "Greater love hath no man than this, that a man lay down his life for his
>> friends." Jesus Christ
>>
>> "Większej miłości nikt nie ma nad tę, jak gdy kto życie swoje kładzie za
>> przyjaciół swoich." Jezus Chrystus
>> ___
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>
>
>
> --
> Elias Pereira
>


-- 
- Adaptability -- Analytical --- Ideation  Input - Belief -
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  Suite 205 | 309 Maynard Street
  jlock...@umich.edu |Ann Arbor, MI  48104-2211
 www.umich.edu/~jlockard  |
734-615-8776 | 734-763-9677 FAX
---
- The University of Michigan will never ask you for your password -


Re: [Bacula-users] Q: fileset "magic" - still weird

2022-06-15 Thread John Lockard
Fileset {
 Name = "cadat"
 EnableVss = no
 EnableSnapshot = no
 Include {
   Options {
 OneFS = no
 RegexDir = "/mnt/cdat-.*"
   }
   Options {
 OneFS = no
 Exclude = yes
 RegexDir = ".*"
   }
   File = "/mnt"
 }
}

Justin, looking at this: within /mnt, doesn't your exclude (".*") just end
up excluding everything?

Maybe you want something more like:

Include {
  Options {
    OneFS = no
    Exclude = yes
    RegexDir = "^(?!cdat-).*"
  }
  File = /mnt
}


Or, get rid of the "Exclude" and use a script for the File...

Include {
  Options {
    OneFS = no
  }
  File = "|sh -c 'echo /mnt/cdat-*'"
}

Maybe?
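One caveat on the RegexDir idea: Bacula's Regex directives use POSIX regular expressions, which do not support Perl-style negative lookahead such as `(?!...)`, so that pattern may be rejected or may not match as intended. Since Options blocks are evaluated in order (first match wins), the same effect can be expressed with wildcards instead. A sketch, untested:

```
FileSet {
  Name = "cdat-dirs"
  Include {
    Options {
      OneFS = no
      WildDir = "/mnt/cdat-*"    # first match wins: keep the cdat- directories
    }
    Options {
      OneFS = no
      Exclude = yes
      WildDir = "/mnt/*"         # everything else directly under /mnt is excluded
    }
    File = /mnt
  }
}
```

The resource and pattern names are illustrative; check the FileSet chapter of the main manual for the exact matching rules on your Bacula version.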


On Wed, Jun 15, 2022 at 5:48 PM Justin Case  wrote:

> I understand what the example does, it is very much standard.
> Alas, it does not touch any of the points where my use case is special:
> - it enumerates all inclusions and exclusions - that is explicitly what I
> cannot and do not want to do as each FD machine has different subfolders,
> but all start with the same prefix
> - it does not employ wildcard characters for folder names
>
> Thank you for the example. To be honest, I did not learn anything new that
> would allow me to solve my problem.
>
> Thank you again , though, for considering my questions.
> J/C
>
> > On 15. Jun 2022, at 23:23, sru...@gemneye.org wrote:
> >
> > On 2022-06-15 13:47, Justin Case wrote:
> >> I re-read the chapter about filesets and fileset options.
> >> In order to better understand what is happening I simplified the
> >> fileset as follows:
> >>> Fileset {
> >>> Name = "cadat"
> >>> EnableVss = no
> >>> EnableSnapshot = no
> >>> Include {
> >>>   Options {
> >>> OneFS = no
> >>> #RegexDir = "/mnt/cdat-.*"
> >>>   }
> >>> #  Options {
> >>> #OneFS = no
> >>> #   Exclude = yes
> >>> #RegexDir = ".*"
> >>> #  }
> >>>   File = "/mnt"
> >>> }
> >>> }
> >> I was hoping it would then backup everything in /mnt (yes they are all
> >> different filesystems, but OneFS is set to no).
> >> Again, nothing was backed up.
> >> I do not understand this result. I thought I had understood what is in
> >> the manual about filesets, but obviously I did not.
> >>> On 15. Jun 2022, at 19:55, Justin Case  wrote:
> >>> Hi all,
> >>> I am somewhat struggling with the fileset algorithm, noob birth pains
> I guess.
> >>> I have a bunch of VMs that have mounted(!!) docker container appdata
> in /mnt/cdat-.
> >>> So I wish to backup /mnt/cdat-* on each of these VMs, meaning I wish
> that the content of each subdirectory in /mnt where the name starts with
> "cdat-" gets backed up.
> >>> So I looked into the main manual for fileset directive syntax. And I
> found this example:
> >>> FileSet {
> >>>  Name = "Full Set"
> >>>  Include {
> >>>    Options {
> >>>      wilddir = "/home/a*"
> >>>      wilddir = "/home/b*"
> >>>    }
> >>>    Options {
> >>>      RegexDir = ".*"
> >>>      exclude = yes
> >>>    }
> >>>    File = /home
> >>>  }
> >>> }
> >>> So what I did is this (and it does not work, just returns 1 file, and
> that is wrong):
> >>> Fileset {
> >>> Name = "cadat"
> >>> EnableVss = no
> >>> EnableSnapshot = no
> >>> Include {
> >>>   Options {
> >>> OneFS = no
> >>> RegexDir = "/mnt/cdat-.*"
> >>>   }
> >>>   Options {
> >>> OneFS = no
> >>> Exclude = yes
> >>> RegexDir = ".*"
> >>>   }
> >>>   File = "/mnt"
> >>> }
> >>> }
> >>> I know it must seem kinda obvious where the problem is for those who
> have been around with bacula for a while. For me it is kinda “magic”.
> >>> Where is my mistake?
> >>> Thanks for helping out!
> >>> J/C
> > Below is an example Fileset I use which has includes and excludes.
> >
> > Fileset {
> >  Name = "Firewall Full"
> >  Include {
> >File = "/"
> >File = "/boot"
> >File = "/home"
> >File = "/var"
> >Options {
> >  Compression = "Gzip"
> >  Signature = "Md5"
> >  Exclude = "Yes"
> >  WildDir = "/ISO"
> >  WildFile = "/.journal"
> >  WildFile = "/.fsck"
> >}
> >  }
> > }
> >
>
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>


-- 
- Adaptability -- Analytical --- Ideation  Input - Belief -
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  Suite 205 | 309 Maynard Street
  jlock...@umich.edu |Ann Arbor, MI  48104-2211
 www.umich.edu/~jlockard  |
734-615-8776 | 734-763-9677 FAX
---
- The University of Michigan will never ask you for your password -

Re: [Bacula-users] New Bacula web site

2018-05-07 Thread John Lockard
I actually think removing the animated dots would make the site more
readable.  The motion on the screen is mildly nauseating while trying to
read and makes me want to stop reading.

On Mon, May 7, 2018 at 5:37 AM, Sven Hartge  wrote:

> On 07.05.2018 07:07, Kern Sibbald wrote:
>
> > FYI: Bacula Systems sponsored this new web site to help bring the
> > project web site up to current "standards".
> >
> > Looking forward to hearing from you.
>
> Well, takes 8MB and ~8s to fully load (with another 6MB in the next 10
> seconds) over a 1GBit/s connection from DFN in Germany.
>
> A bit on the heavy side, no?
>
> The biggest files being some included videos from googlevideos.com with
> ~1.2MB each.
>
> Plus when the site is in focus, Chrome uses about 130% CPU on my system
> (using a i7-7820HQ, so a 8-HT-Core system).
>
> Do you really really need all those video-animated backgrounds, for
> example in the "Testimonials" area? Removing those does not make the
> site look any worse but dramatically cuts down on a) transferred bytes
> and b) CPU and RAM usage.
>
> Grüße,
> Sven.
>
>
>
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>


-- 
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  Suite 205 | 309 Maynard Street
  jlock...@umich.edu |Ann Arbor, MI  48104-2211
 www.umich.edu/~jlockard  |
734-615-8776 | 734-763-9677 FAX
---
- The University of Michigan will never ask you for your password -


Re: [Bacula-users] print numbers without thousands separators

2017-09-22 Thread John Lockard
"I'm not sure I see the utility of copying and pasting any of the other
formatted numbers here."

I do.  Running calculations within a script: tallying the number of files,
job bytes, averages, etc.

On Fri, Sep 22, 2017 at 9:56 AM, Phil Stracchino 
wrote:

> On 09/22/17 09:04, Christoph Litauer wrote:
> > Dear bacula users,
> >
> > using bacula for many years without problems now. Great software!
> > One little thing would be nice:
> > In bconsole, querying e.g. "List alle backups for a client" results in
> >
> > +---------+-----------+------------------------+-------+---------------------+----------+---------------+------------+
> > | jobid   | client    | fileset                | level | starttime           | jobfiles | jobbytes      | volumename |
> > +---------+-----------+------------------------+-------+---------------------+----------+---------------+------------+
> > | 821,494 | printhost | client files-to-backup | F     | 2016-11-19 00:10:53 |  222,727 | 5,945,291,349 | LTO6-012   |
> >
> > As you can see, all numbers are printed with thousands separators. I
> > think this is independent of the current locale (tried with C and en).
> > These numbers cannot be reused (by copy/paste) within bconsole or other
> > scripts without reformatting. Selecting the same information via psql
> > seems to omit the separators.
> >
> > It would be nice if bconsole didn't print separators within numbers.
>
>
> ...At least, not within JobIDs.  I'm not sure I see the utility of
> copying and pasting any of the other formatted numbers here.
>
>
> --
>   Phil Stracchino
>   Babylon Communications
>   ph...@caerllewys.net
>   p...@co.ordinate.org
>   Landline: +1.603.293.8485
>   Mobile:   +1.603.998.6958
>
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>



-- 
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  Suite 205 | 309 Maynard Street
  jlock...@umich.edu |Ann Arbor, MI  48104-2211
 www.umich.edu/~jlockard  |
734-615-8776 | 734-763-9677 FAX
---
- The University of Michigan will never ask you for your password -


Re: [Bacula-users] backup volume size

2015-10-27 Thread John Lockard
How often are you backing up?  Fulls, Differentials, Incrementals?  How
long do you want to keep each?  How compressible is your data?  How much
does the data change?  How often does the data change?

Too many variables to answer your questions as given.

With only full backups, once a month, kept for a year, you'd need 200GB x
12 (ignoring compression) = 2.4TB.
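That back-of-the-envelope calculation can be scripted for quick what-if runs (a sketch using the figures from this thread; adjust for compression and for any incremental or differential volumes):

```shell
#!/bin/sh
# Capacity needed to retain one full backup per month for a year,
# ignoring compression and any incremental/differential volumes.
full_gb=200    # size of one full backup, in GB
cycles=12      # monthly fulls kept for one year
total=$(( full_gb * cycles ))
echo "${total} GB (~$(( total / 1000 )).$(( total % 1000 / 100 )) TB)"   # 2400 GB (~2.4 TB)
```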


On Tue, Oct 27, 2015 at 4:08 PM, Thing  wrote:

> Hi,
>
> To backup 200gb with 1 year retention roughly how big a disk would be
> required?  2tb? 3tb?
>
>
>
> --
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>


-- 
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  Suite 205 | 309 Maynard Street
  jlock...@umich.edu |Ann Arbor, MI  48104-2211
 www.umich.edu/~jlockard  |
734-615-8776 | 734-763-9677 FAX
---
- The University of Michigan will never ask you for your password -


Re: [Bacula-users] backup volume size

2015-10-27 Thread John Lockard
Not sure if there is such a thing as "a standard bacula configuration", but
as I said, the simple math says that if you only do 1 monthly full backup
and you keep the last year's worth of full backups, you'll need 2.4TB of
storage.  This completely ignores the retention of other backup levels.

On Tue, Oct 27, 2015 at 4:51 PM, Thing <thing.th...@gmail.com> wrote:

> Hi,
>
> It is a standard bacula configuration, data will also not change much.  So
> from your estimate with compression a 3TB drive would seem the minimum.
>
> On 28 October 2015 at 09:14, John Lockard <jlock...@umich.edu> wrote:
>
>> How often are you backing up?  Fulls, Differentials, Incrementals?  How
>> long do you want to keep each?  How compressible is your data?  How much
>> does the data change?  How often does the data change?
>>
>> Too many variables to answer your questions as given.
>>
>> Only full backups, once a month, you'd need 200GB x 12 (ignoring
>> compression) = 2.4TB
>>
>>
>> On Tue, Oct 27, 2015 at 4:08 PM, Thing <thing.th...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> To backup 200gb with 1 year retention roughly how big a disk would be
>>> required?  2tb? 3tb?
>>>
>>>
>>>
>>> --
>>>
>>> ___
>>> Bacula-users mailing list
>>> Bacula-users@lists.sourceforge.net
>>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>>
>>>
>>
>>
>> --
>> ---
>>  John M. Lockard |  U of Michigan - School of Information
>>   Unix Sys Admin |  Suite 205 | 309 Maynard Street
>>   jlock...@umich.edu |Ann Arbor, MI  48104-2211
>>  www.umich.edu/~jlockard <http://www.umich.edu/%7Ejlockard> |
>> 734-615-8776 | 734-763-9677 FAX
>> ---
>> - The University of Michigan will never ask you for your password -
>>
>>
>


-- 
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  Suite 205 | 309 Maynard Street
  jlock...@umich.edu |Ann Arbor, MI  48104-2211
 www.umich.edu/~jlockard <http://www.umich.edu/%7Ejlockard> |
734-615-8776 | 734-763-9677 FAX
---
- The University of Michigan will never ask you for your password -


Re: [Bacula-users] bacula strategy tape and disk storage

2015-09-16 Thread John Lockard
I would use a "Copy" job so that impact to the client is minimal.
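A sketch of how that could look in bacula-dir.conf, assuming disk is backed up first and tape is the copy destination (the resource, pool, and schedule names here are hypothetical):

```
# One backup job writes to disk; a Copy job then duplicates the finished
# jobs to tape, so the client is read only once.
Job {
  Name = "BackupClient1Disk"
  Client = client1-fd
  JobDefs = "DefaultJob"
  Schedule = "WeeklyCycle"
  Storage = disk1-sd
  Pool = DiskPool
}

Job {
  Name = "CopyClient1ToTape"
  Type = Copy
  Client = client1-fd
  JobDefs = "DefaultJob"
  Selection Type = PoolUncopiedJobs
  Pool = DiskPool    # source pool; its "Next Pool" determines the destination
}

# In the Pool resource for DiskPool, point the copies at tape:
#   Next Pool = TapePool
# The copy then uses the Storage directive of TapePool (e.g. tape1-sd).
```

Check the Migration/Copy chapter of the manual for the exact directives supported by your Bacula version.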

On Wed, Sep 16, 2015 at 2:04 PM, Kepler Mihály  wrote:

> Hi!
>
> What is the good saving strategy if I have tape and disk storage too?
>
> How can I create backup  from "client1" to both storage (tape and disk
> storage)?
>
> Is it possible to define two JOBs ?
>
> Job {
>   Name = "BackupClient1Disk"
>   Client = client1-fd
>   JobDefs = "DefaultJob"
>   Schedule = "WeeklyCycle"
>   Storage = disk1-sd
> }
>
> Job {
>   Name = "BackupClient1Tape"
>   Client = client1-fd
>   JobDefs = "DefaultJob"
>   Schedule = "WeeklyCycle"
>   Storage = tape1-sd
> }
>
> Sincere thanks so far.
>
> --
> mkepler
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>



-- 
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  Suite 205 | 309 Maynard Street
  jlock...@umich.edu |Ann Arbor, MI  48104-2211
 www.umich.edu/~jlockard  |
734-936-7255 | 734-763-9677 FAX
---
- The University of Michigan will never ask you for your password -


Re: [Bacula-users] Backup report

2015-08-21 Thread John Lockard
Sorry for jumping in late, but was on vacation.

I use the attached script, which I modified from a script posted by Jonas
Björklund.  You'll need to modify it to add your database values and email
specifics (if you want to lock them into the perl script).  If you see
something wrong, please tell me.  It produces output like this:
 Start of Sample Output 

Total 5 jobs - 4 jobs are OK
Total 12.2 GB / 877 files
  0 jobs are waiting

Status  JobName        Lvl  MBytes/Files   StartTime     Time    KB/s  Pool
===========================================================================
E       SI-Marge       F        0/0        08.20 19:00    6.0m      0  Full
T       BackupCatalog  F     2021/1        08.20 23:19    4.9m  11894  Catalog
T       Chernobog      I      296/133      08.20 21:00   14.6m    667  SIncr
T       SI-Nvivo       I     2679/336      08.20 20:00   10.9m   7685  Incr
T       SI-Homer       I     7454/407      08.20 19:00  134.1m   1895  SBIncr
===========================================================================

Status codes:

  T Terminated normally
  A Canceled by the user
  B Blocked
  C Created but not yet running
  D Verify Differences
  E Terminated in Error
  F Waiting on the File daemon
  M Waiting for a Mount
  R Running
  S Waiting on the Storage daemon
  c Waiting for Client resource
  d Waiting for Maximum jobs
  e Non-fatal error
  f Fatal error
  j Waiting for Job resource
  m Waiting for a new Volume to be mounted
  p Waiting for higher priority job to finish
  s Waiting for Storage resource
  t Waiting for Start Time

 End of Sample Output 




I run this via cron from my /etc/cron.daily directory using the following
script:
#!/bin/sh

# Generate Daily Bacula Reports
SCRIPT='/usr/local/bacula-scripts/report.pl'
VARIABLES='-wT'

test -x ${SCRIPT} || exit 0
${SCRIPT} ${VARIABLES}




On Mon, Jul 27, 2015 at 9:19 AM, Chris Shelton cshel...@shelton-family.net
wrote:

 The simplest way is likely to send the backup reports to a mailing list
 that offers daily digests for delivery, such as mailman:
 http://wiki.list.org/DOC/Mailman%202.1%20Members%20Manual#A8_Digests
 Then subscribe to that list and choose to have the backup reports
 delivered to you in a daily digest.

 chris

 On Mon, Jul 27, 2015 at 8:05 AM, More, Ankush ankush.m...@capgemini.com
 wrote:

 Team,



 I am getting individual client backup report status.

 Is there way to get consolidated(single)  report of all client every day?



 Thank you,

 Ankush


 --

 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users




 --

 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users




-- 
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  Suite 205 | 309 Maynard Street
  jlock...@umich.edu |Ann Arbor, MI  48104-2211
 www.umich.edu/~jlockard http://www.umich.edu/%7Ejlockard |
734-936-7255 | 734-763-9677 FAX
---
- The University of Michigan will never ask you for your password -


bacula-report.pl
Description: Perl program


Re: [Bacula-users] Schedule backup Job only from incremental

2015-06-12 Thread John Lockard
You could almost do things this way.  Unfortunately, you'll have to
occasionally wipe the mirror systems.

If I restored a full to the mirror machine, followed by a differential,
followed by any number of incremental backups, there's almost a 100% chance
that files which were deleted since a previous backup will now be present
on your mirror machine.  Since a restore from backup won't wipe out files
on the system which aren't on the backup, you'll always have some
percentage of cruft on the mirror system which should not be there.  If
these day-to-day work files are not a concern, and won't interfere too
much, and the mirror system works, but isn't perfect, this solution is
better than having nothing, and would have you up and running quicker than
doing a BMR after the incident.

-John

On Fri, Jun 12, 2015 at 11:27 AM, Carlo Filippetto 
carlo.filippe...@gmail.com wrote:

 Hi all,
 I would like to have a DR site where the machines are clones of the
 production ones.

 May I use only the incremental volumes to daily restore them (something
 like sync this machine with the original ones)?

 Thank you




 --

 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users




-- 
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  Suite 205 | 309 Maynard Street
  jlock...@umich.edu |Ann Arbor, MI  48104-2211
 www.umich.edu/~jlockard http://www.umich.edu/%7Ejlockard |
734-936-7255 | 734-763-9677 FAX
---
- The University of Michigan will never ask you for your password -


Re: [Bacula-users] NFS mount back ups?

2015-05-07 Thread John Lockard
Compression?
Backup level (files not backed up because they haven't been changed)?
Exclusions?

Have you gone through a full list of files to be backed up and a full list
of the files which were actually backed up?

On Thu, May 7, 2015 at 1:42 PM, Romer Ventura rvent...@h-st.com wrote:

 Hello,



 I have 2 HP-UX 11.31 and I have ERP data I need to back up on those
 systems. Since there is no client for it, I decided to copy the files to a
 temp location every night, and export that temp folder via NFS. I mount
 these exports into my bacula server and set it up so that it backs up those
 mount point.



 Everything seems to be working, however, the total size of the ERP data is
 about 33GB, but bacula is only copying 3.2GB, I see no errors in the bacula
 app, logs or the bacula server itself. There are no errors on any of the
 HP-UX servers either..



 Any ideas on what to do? Or how to find out why Bacula is stopping at
 3.2GB and marking the job as OK?



 Thanks


 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users




-- 
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  Suite 205 | 309 Maynard Street
  jlock...@umich.edu |Ann Arbor, MI  48104-2211
 www.umich.edu/~jlockard http://www.umich.edu/%7Ejlockard |
734-936-7255 | 734-763-9677 FAX
---
- The University of Michigan will never ask you for your password -


Re: [Bacula-users] Prevent jobs to run or limit bandwidth in certain hours/days?

2014-12-04 Thread John Lockard
Are you starting all your jobs at the same time?  I'm wondering if your
jobs are all competing for bandwidth and slowing each other down.  A
staggered start might help.
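For example, splitting the clients across staggered schedules spreads the load (a sketch; schedule names and times here are hypothetical):

```
# Two schedules ninety minutes apart; assign half the jobs to each.
Schedule {
  Name = "Nightly-Early"
  Run = Full 1st sun at 20:30
  Run = Incremental mon-sat at 20:30
}

Schedule {
  Name = "Nightly-Late"
  Run = Full 1st sun at 22:00
  Run = Incremental mon-sat at 22:00
}
```

Each Job resource then references one of the two schedules via its Schedule directive.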

On Thu, Dec 4, 2014 at 9:27 AM, Rai Blue raib...@gmail.com wrote:

 Hi everyone,
 I'm using Bacula 7.0 to backup a pack of servers. My schedules are all
 starts in evening time. My problem is: some days jobs take longer so some
 of them arent finished by morning and workhours bandwidth is a problem for
 me.

 I searched max bandwidth but it doesnt fit because i don't want to limit
 bandwidth in evenings, just in workhours, but i cant seperate it in job
 definition.

 Then I came across Max Start Delay and Max Run Sched Time and they
 don't fit either because i want to let the jobs run for example in weekends.

 Is there a way to define a jobs behaviour according to its time? Or what
 would you suggest doing in such situation?


 *Thanks,*

 *Begum Tuncer*


 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users




-- 
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  105 South State St. | 4325 North Quad
  jlock...@umich.edu |Ann Arbor, MI  48109-1285
 www.umich.edu/~jlockard http://www.umich.edu/%7Ejlockard |
734-936-7255 | 734-764-2475 FAX
---
- The University of Michigan will never ask you for your password -


Re: [Bacula-users] summarizing messages into a single mail

2014-11-10 Thread John Lockard
There are a number of scripts available which can be run via cron to give
you stats on your backup jobs.  Check the examples/reports directory in
your Bacula source tree.

In there I found report.pl from Jonas Björklund, which I modified to give
me the specific information I was looking for, but it's an excellent
starting point for a summary notifier.

-John

On Mon, Nov 10, 2014 at 10:00 AM, Florian florian.spl...@web.de wrote:

 Hello, everyone.

 I was wondering, if there is a way to avoid receiving one email for each
 completed bacula job and instead receive a single mail with all messages.

 Also, if this is possible, would you recommend it or should I instead
 try something like Web-Bacula to do some reporting?

 I am currently receiving mails for 6 jobs per day. Rising to 8 soon.

 Regards,

 Florian S.


 --
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users




-- 
---
 John M. Lockard |  U of Michigan - School of Information
  Unix Sys Admin |  105 South State St. | 4325 North Quad
  jlock...@umich.edu |Ann Arbor, MI  48109-1285
 www.umich.edu/~jlockard http://www.umich.edu/%7Ejlockard |
734-936-7255 | 734-764-2475 FAX
---
- The University of Michigan will never ask you for your password -


Re: [Bacula-users] summarizing messages into a single mail

2014-11-10 Thread John Lockard
Aw, heck.  Might as well attach what I've been using.  Again, it's modified
from what you'll find in the examples/reports directory.  Usage amounts
will differ from what Bacula reports on the console or in the system logs,
since Bacula counts 1K = 1000 bytes and my report uses 1K = 1024 bytes
(easily changed in the script).  I'm sure what I've done could be done
more cleanly/efficiently/correctly...  you're free to make those changes
as you wish.

-John

On Mon, Nov 10, 2014 at 10:35 AM, John Lockard jlock...@umich.edu wrote:

 There are a number of scripts available which can be run via cron to give
 you stats on your backup jobs.  Check your Bacula source
 directory/examples/reports

 In here I found report.pl from Jonas Björklund which I modified to give
 me specific information I was looking for, but it's an excellent starting
 point for a summary notifier.

 -John

 On Mon, Nov 10, 2014 at 10:00 AM, Florian florian.spl...@web.de wrote:

 Hello, everyone.

 I was wondering, if there is a way to avoid receiving one email for each
 completed bacula job and instead receive a single mail with all messages.

 Also, if this is possible, would you recommend it or should I instead
 try something like Web-Bacula to do some reporting?

 I am currently receiving mails for 6 jobs per day. Rising to 8 soon.

 Regards,

 Florian S.


 --
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users




 --
 ---
  John M. Lockard |  U of Michigan - School of Information
   Unix Sys Admin |  105 South State St. | 4325 North Quad
   jlock...@umich.edu |Ann Arbor, MI  48109-1285
  www.umich.edu/~jlockard http://www.umich.edu/%7Ejlockard |
 734-936-7255 | 734-764-2475 FAX
 ---
 - The University of Michigan will never ask you for your password -






bacula-report.pl
Description: Perl program


Re: [Bacula-users] Job transfer rate

2014-10-30 Thread John Lockard
Yes, but which IO?

Disk IO on the client?
Network IO from the client to the network?
Network IO from the network to the Bacula Director?
Network IO from the Bacula Director to the Bacula SD?
Disk IO on the Bacula SD?
Database IO on the Bacula Director?


Seems like you have more work to do than just saying it's the IO.  Not
sure of the tools on Windows to interrogate IO at disk or network, but on
Linux/Unix a good place to start is the sar (sysstat) utilities.
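If sysstat isn't installed, the kernel's raw counters give a rough first look on Linux; a minimal sketch (these are cumulative sectors since boot, so sample twice and take the difference for rates; sar and iostat do that for you):

```shell
# Cumulative per-device I/O from /proc/diskstats (Linux):
# field 3 = device, field 6 = sectors read, field 10 = sectors written.
printf "%-12s %14s %14s\n" device sectors_read sectors_written
awk '{ printf "%-12s %14s %14s\n", $3, $6, $10 }' /proc/diskstats
```

For interval rates and network counters, `sar -d 5 3`, `sar -n DEV 5 3`, and `iostat -x 5 3` are the usual tools.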

-John

On Thu, Oct 30, 2014 at 12:20 PM, Jeff MacDonald j...@terida.com wrote:


 Just be aware that you might not see a dramatic increase in speed just
 moving Bacula itself!

 If you are using VMWare with VMDK files on a VMFS volume you need to be
 aware that any IO by a guest requires a reservation of the entire VMFS
 volume.  Locking is happening at the SCSI layer - if one guest wants to
 read one byte of data nobody else can do anything until its IO operation
 is complete.  Remembering that you probably are only going to get around
 75 IOPs you can see how a VMFS volume with more than a handful of
 virtual machines on it can very quickly end up performing very poorly,
 especially with spinning rust underneath it.  A good RAID card with a
 LOT of cache memory can help with overall system performance, but
 backups by definition are going to be touching lots of areas of data
 that aren't likely to be in cache.

 What I'm getting at is you might actually need to focus your efforts and
 dollars on the storage underneath your VMs before you do too much with
 your backup system.  A great big nice happy dedicated Bacula server
 would be nice, but if the VMs are still IOP constrained ESPECIALLY if
 they are actively in use while being backed up you probably won't see
 that much of an improvement.

 An easy way to validate this would be to ensure you have attribute
 spooling turned on and to set up the attribute spooling to write to your
 NAS rather than to local storage.  That will get the VM storage
 infrastructure out of your backup pathway.

 Bryn


 This has been a fantastic education. Thanks. I’ll recommend to the client
 that their IO is slow.. and I’ll get told “Oh! It seems fine to us!” :)


 I googled and found documentation about turning on Data Spooling, but not
 independently turning on Attribute Spooling.

 Could you point me at that please.. ( I know I Know.. I’ll keep looking :)
 )

 Jeff.
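For reference, attribute spooling is separate from data spooling and only needs a couple of directives; a sketch with placeholder resource names (other directives omitted):

```
# bacula-dir.conf -- Job resource: batch catalog attribute inserts
Job {
  Name = "example-job"
  Spool Attributes = yes
  # ... rest of the job definition ...
}

# bacula-sd.conf -- Device resource: where the SD writes its spool files
Device {
  Name = "example-device"
  Spool Directory = /mnt/nas/bacula-spool   # e.g. on the NAS, per Bryn's suggestion
  # ... rest of the device definition ...
}
```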










Re: [Bacula-users] [7.0.5] Option to start spooling before volume mount?

2014-10-26 Thread John Lockard
I run into this issue with several of my servers and dealt with it by
creating migrate jobs.  First job goes to disk.  Second job runs some
reasonable time later and migrates the D2D job to tape.  I had a number of
key servers I did this for with the advantage that I could offsite the
tapes and keep the D2D job on disk till after the next backup had run.
This way I had immediate recovery available as well as disaster recovery.
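As a sketch of that setup (resource names, pools, and the selection policy are placeholders; your pools and schedules will differ):

```
# bacula-dir.conf -- disk-to-tape migration some time after the D2D backup
Pool {
  Name = DiskPool
  Next Pool = TapePool        # migrated jobs land here
  Migration Time = 8 days     # only jobs older than this are selected
  # ... storage, retention, etc. ...
}
Job {
  Name = "migrate-d2d-to-tape"
  Type = Migrate
  Pool = DiskPool             # migrate jobs found in this pool
  Selection Type = PoolTime   # use the pool's Migration Time
  Client = example-client     # required syntactically; selection comes from the pool
}
```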

On Sun, Oct 26, 2014 at 7:59 AM, Harald Schmalzbauer 
h.schmalzba...@omnilan.de wrote:

  Hello,

 I enable data spooling for almost any job, because my LTO4 drive's
 hw-compression allows streaming _my_ data at a little over 100MByte/s
 average, which bacula-fd can't deliver (localhost FD-SD connections
 allow ~25MB/s with 60+% CPU usage; SoftCompression is disabled; observed
 FD requests are 4KB/t only, but that's a completely different issue I
 have to look at more closely sometime later).
 I'm aware that the numbers above are highly workload dependent and are
 not representative.
 They basically should give an idea why I need data spooling – nothing
 more. Environment is bacula 7.0.5 (dir, sd and fd), FreeBSD 10.1 (amd64)
 (also dir, sd and fd), LACP-GbE (irrelevant since I'm using localhost
 sockets), 3.4GHz Xeon-E3v3, 8GB RAM and two LSI2008.

 A typical job is 100-500GB in size, so spooling at only 25MB/s takes a
 significant amount of time.
 That's why I want to start the job as early as possible..
 My problem is that data spooling starts _after_ the volume has been
 mounted and positioned.
 So if I start the job at midnight, not a single bit gets spooled unless
 somebody feeds the correct tape next morning :-( And next morning very
 often slips to next lunch; that's where the missing 4 hours breaks the
 timetable for my setup.

 Is there already an option I missed, which would enable data spooling
 without requesting the volume first?
 That would save exactly the hours my setup needs as buffer if somebody
 forgets to feed the tape.
 Or is there any reason why data spooling must be delayed? I can't
 imagine one. If something goes wrong at despooling, it makes no difference
 whether the volume was available before spooling or not.

 Thanks,

 -Harry











Re: [Bacula-users] Tandberg LTO-3 only writing ~200GB

2012-04-19 Thread John Lockard
I know this is probably a stupid question, but I've seen stupid questions
solve things in the past...

Are both your tape drive and tape at least LTO3?  If your drive is LTO-3
and your tape is LTO-2, then your results make perfect sense.

-John

On Thu, Apr 19, 2012 at 1:07 AM, Andre Rossouw an...@arnet.co.za wrote:

 I'm hoping someone can point me in the right direction - faulty drive,
 incorrect settings, faulty tape (I hope it's something that simple)

 I've run the archive job again, and again Bacula has reported the media
 as full after just ~200GB. However when I do llist volume=volumename
 it reports a different VolBytes to Last Volume Bytes, which I assume
 should be the same?

 Any ideas?

 19-Apr 06:58 ubuntu-sd JobId 240: JobId=240
 Job=Archive.2012-04-19_02.26.11_03 marked to be canceled.
 19-Apr 06:58 ubuntu-sd JobId 240: Job write elapsed time = 04:32:04,
 Transfer rate = 14.60 M Bytes/second
 19-Apr 06:58 ubuntu-dir JobId 240: Bacula ubuntu-dir 5.0.1 (24Feb10):
 19-Apr-2012 06:58:18
   Build OS:   x86_64-pc-linux-gnu ubuntu 10.04
   JobId:  240
  Job:Archive.2012-04-19_02.26.11_03
   Backup Level:   Full
  Client: ubuntu-fd 5.0.1 (24Feb10)
 x86_64-pc-linux-gnu,ubuntu,10.04
  FileSet:FileSet1 2011-05-23 14:23:25
  Pool:   Pool1 (From Job resource)
  Catalog:MyCatalog (From Client resource)
  Storage:LTO-3 (From Job resource)
   Scheduled time: 19-Apr-2012 02:26:06
  Start time: 19-Apr-2012 02:26:13
  End time:   19-Apr-2012 06:58:18
  Elapsed time:   4 hours 32 mins 5 secs
  Priority:   10
  FD Files Written:   9,215
  SD Files Written:   9,215
  FD Bytes Written:   238,439,334,305 (238.4 GB)
  SD Bytes Written:   238,437,145,970 (238.4 GB)
  Rate:   14605.8 KB/s
   Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
   Volume name(s): 20120419
  Volume Session Id:  1
  Volume Session Time:1334794882
   Last Volume Bytes:  407,329,219,584 (407.3 GB)
   Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Canceled
  SD termination status:  Canceled
  Termination:Backup Canceled

 *llist volume=20120419
  MediaId: 51
   VolumeName: 20120419
 Slot: 0
   PoolId: 4
MediaType: LTO-3
 FirstWritten: 2012-04-19 02:26:13
  LastWritten: 2012-04-19 04:54:25
LabelDate: 2012-04-19 02:26:13
  VolJobs: 1
 VolFiles: 239
VolBlocks: 3,698,757
VolMounts: 1
 VolBytes: 238,614,276,096
VolErrors: 0
VolWrites: 3,698,759
  VolCapacityBytes: 0
VolStatus: Full
  Enabled: 1
  Recycle: 1
 VolRetention: 315,360,000
   VolUseDuration: 0
   MaxVolJobs: 0
  MaxVolFiles: 0
  MaxVolBytes: 644,245,094,400
InChanger: 0
  EndFile: 238
 EndBlock: 9,757
 VolParts: 0
LabelType: 0
StorageId: 2
 DeviceId: 0
   LocationId: 0
 RecycleCount: 0
  InitialWrite: 0000-00-00 00:00:00
ScratchPoolId: 0
RecyclePoolId: 0
  Comment: NULL


 On Wed, 2012-04-18 at 22:19 +0200, Andre Rossouw wrote:
  Thanks for the replies. I still have 2 questions:
 
  On Wed, 2012-04-18 at 19:37 +0200, ganiuszka wrote:
   2012/4/18 Andre Rossouw an...@arnet.co.za:
Hello.
   
I have a system running Ubuntu 10.04 with Bacula 5.0.1. and a SCSI
Tandberg LTO-3 HH drive. It has been running without problems since
2010. Last week it started writing ~200GB data to the media, and
reporting that the media is full.
   
 FD Bytes Written:   235,602,974,800 (235.6 GB)
 SD Bytes Written:   0 (0 B)
 Rate:   12196.7 KB/s
 Software Compression:   None
 Volume Session Time:1334732079
 Last Volume Bytes:  407,329,219,584 (407.3 GB)
--
 
  I see that the media is full. But I have purged and relabeled the tape
  for testing. From reading the documentation, should this not have marked
  the tape to be recycled? Or do I need to move the tape into the scratch
  pool for this to happen?
 
  I have also run the btape fill command to test the drive and media
  (after purging and relabeling). I get the following output:
  btape: btape.c:2736 End of tape 201:0. Volume Bytes=200,404,463,616
 
  It is an LTO 3 tape, so I should get 400GB uncompressed?
 
   It looks like there is ~400GB of data on the tape. The backup in the
   output above wrote ~200GB. That looks right to me. Looking at 'Volume
   Session Id' it seems there is also another backup. You can see the jobs
   placed on this volume e.g. by using bconsole.
 
  Apologies if I'm doing something silly, but is there a way I can erase
  the tape to ensure that there is nothing on the tape? 
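(On Linux, mt from the mt-st package can do this; the device name below is an assumption, and a full erase on LTO can take hours, so the quick variant is usually enough:)

```
mt -f /dev/nst0 rewind
mt -f /dev/nst0 weof 1     # write a filemark at BOT: quick "logical" erase
# mt -f /dev/nst0 erase    # full overwrite erase -- slow on LTO media
```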

Re: [Bacula-users] Bacula skipped 02:00 AM schedule on start of daylight saving

2012-03-26 Thread John Lockard
Best solution for this one...  Run your backup server on UTC time rather
than local.
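Schedule interpretation follows the timezone the daemon sees, so schedules evaluated under UTC never hit a spring-forward gap. A quick sketch (the timedatectl line is the systemd-era system-wide equivalent and needs root):

```shell
TZ=UTC date +%Z                 # prints UTC: a process started with TZ=UTC runs on UTC time
# timedatectl set-timezone UTC  # switch the whole host to UTC (root, systemd)
```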

-John

On Mon, Mar 26, 2012 at 3:59 AM, Frank Seidinger
frank.seidin...@novity.dewrote:

 Dear Bacula Users,

 I think that I've found a minor bug in bacula concerning the adjustment
 of clocks on the start of daylight saving (or summertime).

 I run a daily schedule at 02:00 AM which was skipped last night when
 the clocks advanced one hour from 02:00 AM to 03:00 AM as daylight
 saving started in central Europe.

 For me it is not a big issue to lose one daily backup cycle, but I guess
 this might not be the case for others. I would think that even in such
 circumstances the planned schedule at 02:00 AM should still have taken place.

 Kind regards,

 Frank.








Re: [Bacula-users] weekly reports

2010-10-08 Thread John Lockard
Yes, quite possible.

Check the examples/reports directory in the source tarball.

I've taken the reports.pl script and tweaked it to do
some things specific to my needs. It's very straightforward
and you just kick it off with cron.

-John

On Fri, Oct 08, 2010 at 10:38:37AM +0200, hOZONE wrote:
   hello,
  i use Bacula for two clients.
 
 my schedule is
 * FULL on sunday
 * INCREMENTAL the other days
 
  i would like to know if it is possible to send a weekly report by mail
  with the status of the jobs that ran --
  something like what i see if i do 'list jobs' for client X in bconsole
 
 thanks,
 hOZONE
 
 
 
 

-- 
Will all section 7 personnel scheduled for decontamination please
 proceed to the decontamination unit in the decontamination office
 for immediate decontamination.
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] Anyone written any handy queries (query.sql)???

2009-09-25 Thread John Lockard
On Tue, Aug 11, 2009 at 02:39:39PM -0400, John Lockard wrote:
 I have modified my query.sql to include some queries that
 I use frequently and I thought maybe someone else would
 find them useful additions.  Also, I was wondering if anyone
 had queries which they find useful and would like to share.
 
 In my setup, I need to rotate tapes on a weekly basis to
 keep offsites in case of emergency, so I need to find certain
 groups of tapes for easy removal and it's easier to group
 them in query output than having to scan down a 57 item long
 list and pick out the ones I need (and other similar needs).
 
 I hope someone finds this useful,
 -John

SNIP

I've just done another one which might be useful...

I keep backups on disk for about a week over a month, then
I migrate my Differential and Full backups to tape for safe
keeping.  Bacula doesn't purge media when all jobs from a
virtual tape have been migrated to physical tape, so the
disk jobs stay around for the full life of the job.  In my
case that means my Full backups would live on disk for
9 months, even though they were migrated to tape almost 8
months ago.

So, I need a way to find all MediaId's which have had *ALL*
of their jobs migrated to tape so that I can purge them.
Here's what I've come up with.  (If there's an easier way
to do this, please tell me).  (Works on MySQL)

:Test List Migrated Jobs stored on Media
*Order by (Job or Media):
!DROP TABLE tempmig;
!DROP TABLE tempmig2;
!DROP TABLE tempmig3;
CREATE TABLE tempmig (MediaId INT, Type BINARY(1));
CREATE TABLE tempmig2 (MediaId INT, Type BINARY(1));
CREATE TABLE tempmig3 (MediaId INT NOT NULL);
INSERT INTO tempmig
  SELECT JobMedia.MediaId,Job.Type
FROM Job,JobMedia,Media
WHERE JobMedia.MediaId=Media.MediaId
  AND Job.JobId=JobMedia.JobId
  AND Job.Type='M';
INSERT INTO tempmig2
  SELECT JobMedia.MediaId,Job.Type
FROM Job,JobMedia,Media
WHERE tempmig.MediaId=JobMedia.MediaId
  AND Job.JobId=JobMedia.JobId
  AND Job.Type!='M';
INSERT INTO tempmig3
  SELECT tempmig.MediaId
FROM tempmig
  LEFT JOIN tempmig2
  ON tempmig2.MediaId = tempmig.MediaId
WHERE tempmig2.MediaId IS NULL;
SELECT DISTINCT Job.JobId, JobMedia.MediaId, Job.Name, Job.Type, Job.Level,
       Job.JobStatus AS Status, Job.JobFiles AS Files,
       Job.JobBytes/(1024*1024*1024) AS GB
  FROM JobMedia,Job,tempmig3
  WHERE JobMedia.JobId=Job.JobId
AND JobMedia.MediaId=tempmig3.MediaId
  ORDER by JobMedia.%1Id ASC;
!DROP TABLE tempmig;
!DROP TABLE tempmig2;
!DROP TABLE tempmig3;
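Once that query lists the fully-migrated disk volumes, they can be purged from bconsole (the volume name below is a placeholder; purge ignores retention, so check the list twice):

```
*purge volume=DiskVol-0042
```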


-John




Re: [Bacula-users] Timeout (?) problems with some Full backups

2009-08-12 Thread John Lockard
While the job is running, keep an eye on the system which houses
your MySQL database and make sure that it isn't filling up a
partition with temp data.  I was running into a similar problem
and needed to move my mysql_tmpdir (definable in /etc/my.cnf)
to another location.
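For reference, the relocation itself is a one-line server setting (the path is a placeholder; the directory must exist, be writable by the mysql user, and mysqld must be restarted afterwards):

```
# /etc/my.cnf
[mysqld]
tmpdir = /var/lib/mysql-tmp
```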

-John

On Wed, Aug 12, 2009 at 05:00:30PM +0100, Nick Lock wrote:
 Hello list!
 
 Sorry to trouble you with what's probably a simple problem, but I'm now
 looking at the very real possibility of wiping all our backups clean and
 starting from scratch if I can't fix it... :(
 
 I'm having problems with some Full backups, which run for between 1 and
 2 hours, appearing to time out after the data transfer from the FD to
 the SD. The error message (shown below) shows that the data transfer
 completes, often in about 1hr30min, and then Bacula does nothing until
 the job has been running for 2 hours at which point it gives an FD
 error.
 
 Other Full backups (which don't take as long) run correctly, and for
 most of the time Inc and Diff backups also run correctly. However, a
 small % of backups will fail at random, also with FD errors but at
 random times-elapsed during the job... this I have been ascribing to
 network fluctuations! The difference is that re-running these random
 failures will succeed, whilst this particular Full failure doesn't! ;)
 
 I've already tried setting a heartbeat interval of 20 minutes in the
 FD/SD and DIR conf files (thinking that the FD -> DIR connection was
 timing out) but this doesn't change anything.
 
 In the time between the data transfer finishing and the timeout,
 Postgres has an open connection with a COPY batch FROM STDIN
 transaction in progress, which at the timeout produces errors in the
 Postgres log that I have also shown below.
 
 I'm happy to post portions of the conf files if needed, but they're huge
 and might well lead to tl;dr!
 
 Any suggestions as to how I can troubleshoot this further would be most
 appreciated!
 
 Nick Lock.
 
 
 -
 12-Aug 14:18 exa-bacula-dir JobId 5514: Start Backup JobId 5514,
 Job=backup_scavenger.2009-08-12_14.18.06.04
 12-Aug 14:18 exa-bacula-dir JobId 5514: There are no more Jobs
 associated with Volume scavenger-full-1250. Marking it purged.
 12-Aug 14:18 exa-bacula-dir JobId 5514: All records pruned from Volume
 scavenger-full-1250; marking it Purged
 12-Aug 14:18 exa-bacula-dir JobId 5514: Recycled volume
 scavenger-full-1250
 12-Aug 14:18 exa-bacula-dir JobId 5514: Using Device
 FileStorageScavenger
 12-Aug 14:18 exa-bacula-sd JobId 5514: Recycled volume
 scavenger-full-1250 on device
 FileStorageScavenger (/srv/bacula/volume/web-scavenger), all previous
 data lost.
 12-Aug 14:18 exa-bacula-dir JobId 5514: Max Volume jobs exceeded.
 Marking Volume scavenger-full-1250 as Used.
 12-Aug 15:49 exa-bacula-sd JobId 5514: Job write elapsed time =
 01:31:41, Transfer rate = 401.4 K bytes/second
 12-Aug 16:18 exa-bacula-dir JobId 5514: Fatal error: Network error with
 FD during Backup: ERR=Connection reset by peer
 12-Aug 16:18 exa-bacula-dir JobId 5514: Fatal error: No Job status
 returned from FD.
 12-Aug 16:18 exa-bacula-dir JobId 5514: Error: Bacula exa-bacula-dir
 2.4.4 (28Dec08): 12-Aug-2009 16:18:09
   Build OS:   x86_64-pc-linux-gnu debian lenny/sid
   JobId:  5514
   Job:backup_scavenger.2009-08-12_14.18.06.04
   Backup Level:   Full
   Client: scavenger 2.4.4 (28Dec08)
 i486-pc-linux-gnu,debian,5.0
   FileSet:full-scavenger 2009-04-16 15:58:05
   Pool:   scavenger-full (From Job FullPool override)
   Storage:FileScavenger (From Job resource)
   Scheduled time: 12-Aug-2009 14:18:03
   Start time: 12-Aug-2009 14:18:09
   End time:   12-Aug-2009 16:18:09
   Elapsed time:   2 hours 
   Priority:   10
   FD Files Written:   0
   SD Files Written:   81,883
   FD Bytes Written:   0 (0 B)
   SD Bytes Written:   2,208,578,175 (2.208 GB)
   Rate:   0.0 KB/s
   Software Compression:   None
   VSS:no
   Storage Encryption: no
   Volume name(s): scavenger-full-1250
   Volume Session Id:  5
   Volume Session Time:1250080970
   Last Volume Bytes:  2,212,857,316 (2.212 GB)
   Non-fatal FD errors:0
   SD Errors:  0
   FD termination status:  Error
   SD termination status:  OK
   Termination:*** Backup Error ***
 
 -
 Postgres Log:
 
 2009-08-12 16:18:09 BST ERROR:  unexpected message type 0x58 during COPY
 from stdin
 2009-08-12 16:18:09 BST CONTEXT:  COPY batch, line 81884: 
 2009-08-12 16:18:09 BST STATEMENT:  COPY batch FROM STDIN
 2009-08-12 16:18:09 BST LOG:  could not send data to client: Broken pipe
 2009-08-12 16:18:09 BST LOG:  could not receive data from 

[Bacula-users] Anyone written any handy queries (query.sql)???

2009-08-11 Thread John Lockard
I have modified my query.sql to include some queries that
I use frequently and I thought maybe someone else would
find them useful additions.  Also, I was wondering if anyone
had queries which they find useful and would like to share.

In my setup, I need to rotate tapes on a weekly basis to
keep offsites in case of emergency, so I need to find certain
groups of tapes for easy removal and it's easier to group
them in query output than having to scan down a 57 item long
list and pick out the ones I need (and other similar needs).

I hope someone finds this useful,
-John

##
:List Volumes Bacula thinks are in changer (By Slot)
SELECT Slot,VolumeName,MediaId AS Id,VolBytes/(1024*1024*1024) AS GB,
       Storage.Name AS Storage,Pool.Name AS Pool,VolStatus
  FROM Media,Pool,Storage
  WHERE Media.PoolId=Pool.PoolId
  AND Slot>0 AND InChanger=1
  AND Media.StorageId=Storage.StorageId
  ORDER BY Slot ASC;

If you want ordered by VolumeName:
  ORDER BY VolumeName ASC;

If you want ordered by Pool, then Slot:
  ORDER BY Pool,Slot;


##
:List Full Volumes Bacula thinks are in changer (By Pool)
SELECT Slot,VolumeName,MediaId AS Id,Pool.Name AS Pool,VolStatus,
VolBytes/(1024*1024*1024) AS GB
  FROM Media,Pool,Storage
  WHERE Media.PoolId=Pool.PoolId
  AND VolStatus='Full'
  AND Slot>0 AND InChanger=1
  AND Media.StorageId=Storage.StorageId
  ORDER BY Pool,Slot ASC;

If you want ordered by Slot:
  ORDER BY Slot ASC;



##
:List Non-Full Volumes Bacula thinks are in changer (By Pool)
SELECT Slot,VolumeName,MediaId AS Id,Pool.Name AS Pool,VolStatus,
VolBytes/(1024*1024*1024) AS GB
  FROM Media,Pool,Storage
  WHERE Media.PoolId=Pool.PoolId
  AND VolStatus!='Full'
  AND Slot>0 AND InChanger=1
  AND Media.StorageId=Storage.StorageId
  ORDER BY Pool,Slot ASC;

If you want ordered by Slot:
  ORDER BY Slot ASC;


## (Change Media.MediaType entry below to match your settings)
:List All Tape Volumes Bacula knows about (By VolumeName)
SELECT Slot,VolumeName,MediaId AS Id,Pool.Name AS Pool,VolStatus,
VolBytes/(1024*1024*1024) AS GB
  FROM Media,Pool,Storage
  WHERE Media.PoolId=Pool.PoolId
  AND Media.StorageId=Storage.StorageId
  AND Media.MediaType='LTO-2'
  ORDER BY VolumeName ASC;

If you want ordered by Pool and Slot:
  ORDER BY Pool,Slot ASC;

If you want ordered by Slot:
  ORDER BY Slot ASC;


##
:List All Volumes in a Pool
*Enter Pool name:
SELECT MediaId AS Id,VolumeName,VolBytes/(1024*1024*1024) AS GB,Slot,
Pool.Name AS Pool,VolStatus
  FROM Media,Pool,Storage
  WHERE Media.PoolId=Pool.PoolId
  AND Pool.Name='%1'
  AND Media.StorageId=Storage.StorageId
  ORDER BY VolumeName ASC;


## (Not as useful as I thought it would be, but here it is)
:Show Log for JobId
*Enter JobId:
SELECT Time,LogText
  FROM Log
  WHERE JobId='%1'
  ORDER BY Time;


-- 
Do not try the patience of wizards,
for they are subtle and quick to anger.
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula spool on SSD -- solid state drive?performance testing?

2009-07-24 Thread John Lockard
On Fri, Jul 24, 2009 at 06:48:24AM +0200, Marc Cousin wrote:
  In theory, the latency from random IO should be much closer to zero on a
  flash drive than on a thrashing hard drive, so I was hoping I might need
  only 1 or two 64GB or 128GB flash drives to provide decent spool size,
  perhaps not even raid-ed.
 
  In addition, SSD/flash drives should be silent and heat up the room less
  (although that latter effect will be small--10 watts vs 2 watts for each
  drive)
 
 For spooling/despooling there should be no latency problems. You need
 throughput more than latency, and a standard hard drive will be as good as an
 SSD, or even better, if set up correctly.
 
 All you really need is to be able to read and write big streams at the same 
 time. So the real problem is to help your disk scheduler to be able to read 
 while having a lot of data in the write cache.
 
 We managed to do that by raising the read-ahead on the disk array (with the
 blockdev command in Linux). We have managed to get 300MB/s read and write at
 the same time with a disk array (I admit it cost a bit more than 2 Intel SSD
 drives), but we have terabytes of spool capacity.
 
 If you really want a SSD, I'd use it for the catalog's database if I were 
 you. 
 There, disk latency is often the main source of contention.

For spool, I would worry about the limited write (erase)
cycles of SSD.  Sure, the speed of read/write is enormously
appealing, but with how much my spool gets hit I'd hate to
have to set a really early replacement schedule because my
media can't handle many writes.  Rather than SSD for spool,
RAM-Disk looks like a better way to go.
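On Linux, a RAM-backed spool is just a tmpfs mount sized to the spool limit; a sketch (path and size are assumptions, and spool contents are intentionally disposable, so losing them on reboot is fine):

```
# /etc/fstab
tmpfs  /var/spool/bacula  tmpfs  size=16g,mode=0700  0  0
```

Then point the SD Device's Spool Directory at that mount.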

-John




Re: [Bacula-users] bacula job times out after 2 hours 11 mins 15 secs

2009-07-17 Thread John Lockard
Check your /tmp directory or MySQL 'tmpdir' location to see if
it's filling up with temporary DB data.  This same problem
happened to me; in my case it was dying at around the 180GB mark.

Moving the MySQL tmpdir to a much larger location took care of
my problem.

-John

On Wed, Jul 08, 2009 at 05:32:31PM +0100, Gavin McCullagh wrote:
 Hi,
 
 On Wed, 08 Jul 2009, Gavin McCullagh wrote:
 
  I've set a heartbeat interval of 60 seconds on the director and am running
  the backup again to see what happens.  
 
 Actually, this didn't solve it.  Despite the heartbeat packets visibly
 (in tcpdump) going from director to FD, the connection dropped at the same
 point anyway.  
 
 I then added the heartbeat interval to the FD too, so you can now see a
 heartbeat from DIR->FD and an ACK back from FD->DIR.  I now have a backup
 job running 2 hours 40 minutes and counting.
 
 That seems to be it.  I guess perhaps the connection originated from the
 DIR so the firewall is only happy if it sees packets coming from the system
 which was connected _to_.
 
 Gavin
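For anyone hitting the same firewall behaviour, the directive Gavin describes is Heartbeat Interval, set on both ends (60 seconds here is just his example value):

```
# bacula-dir.conf -- Director resource
Heartbeat Interval = 60

# bacula-fd.conf -- FileDaemon resource
Heartbeat Interval = 60
```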




[Bacula-users] Migration status (checking)...

2009-06-11 Thread John Lockard
When I'm running a backup job I can check the status of the
backup job (with statistics) through bconsole, using the command
'status client=clientname', which gives results like:


Connecting to Client clientname at clientname.si.umich.edu:9102

clientname-fd Version: 3.0.1 (30 April 2009)  x86_64-unknown-linux-gnu redhat 
Enterprise release
Daemon started 20-May-09 16:35, 19 Jobs run since started.
 Heap: heap=1,466,368 smbytes=501,107 max_bytes=978,476 bufs=155 max_bufs=778
 Sizeof: boffset_t=8 size_t=4 debug=0 trace=0

Running Jobs:
JobId 4623 Job Clientname-Data0.2009-06-10_05.15.00_17 is running.
Backup Job started: 10-Jun-09 05:15
Files=372,250 Bytes=15,584,784,902 Bytes/sec=142,614 Errors=0
Files Examined=34,997,833
Processing file: /data0/users/kumud
SDReadSeqNo=5 fd=6
Director connected at: 11-Jun-09 11:36


With a migrate job, how can I get equivalent statistics?
If I do the same command and use either my director, storage
daemon, file daemon or the client of the job being migrated
I don't seem to be getting that information anywhere.

Thanks,
-John




Re: [Bacula-users] bacula management questions

2009-06-05 Thread John Lockard
If you do 'status client=[clientname]', on the bottom of the
output you'll see the status of the last several jobs which
ran for that client.

I would keep doing what you're doing: moving old-fd to
new-fd, deleting the old config, etc.

I would expect your old backup files to be kept for the
defined retention period of the backup; they will eventually
disappear once that retention has been reached.

For example, if I create a new webserver I won't automatically
delete all of the old webserver's backups; I'll keep them
around until they've hit their defined lifespan.

-John

On Thu, Jun 04, 2009 at 11:58:04AM -0700, pedro noticioso wrote:
 
 
 hi guys, bacula kicks arse lol
 
 how may I know how long it's been since the clients were backed up, from 
 bconsole or whatever? I just want to find out quickly which ones have gone 
 the longest, to see what's going on with them
 
 btw, every once in a while employees leave the company and new ones come in, 
 so my machine names change with them. How may I move a client's name 
 from old-fd to new-fd? I currently delete the old configuration, create a new 
 one and voila: duplicated information. I am building up a bunch of unused 
 names in the clients list and taking up space with old backup files :s
 
 thanks
 
 
 
 
   
 
 
 

-- 
Never Do Anything You Wouldn't Want To Explain To The Paramedics
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] run full backup in bconsole but it runs incremental after one full?

2009-05-27 Thread John Lockard
On Sat, May 23, 2009 at 12:23:32PM -0500, Zhengquan Zhang wrote:
 On Fri, May 22, 2009 at 02:21:26PM -0400, John Lockard wrote:
  Also, when you post your configs, it would be a really good idea
  to remove password and account information.
 
 Thanks John. Anyone could use the passwords to connect to my Bacula, right?
 Because I already specified the clients in the bacula-dir.conf file.

It's always a good idea not to give the keys to your house to
someone you don't know.  In this case, with the information you
gave, it would be possible for someone to take control of your
director (depending on your firewall settings).  That might not
have given them access to your files, but they could make your
backups disappear, issue a restore of files from one server
to another, or restore files from a much older backup, wiping
out newer files.

-John

-- 
The reptiles and I, the reptiles and I.
 All things to everyone, the reptiles and I. - Shriekback
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] run full backup in bconsole but it runs incremental after one full?

2009-05-27 Thread John Lockard
On Sat, May 23, 2009 at 12:11:28PM -0500, Zhengquan Zhang wrote:
 On Fri, May 22, 2009 at 02:12:13PM -0400, John Lockard wrote:
  When you run a job by hand the schedule isn't involved.
  Either way, for your Schedule entry you need Level=
  before the word Full.
  
   Schedule {
 Name = test   
 Run = Level=Full at 11:50   
   }
 
 Thanks, John, for pointing this out. I searched the documentation and it
 should be as you said. But I wonder why Bacula does not give me
 an error or warning when the config is not correct.
 
  
  
  But, your problem is that your Job doesn't have a Default
  Level defined.  You'll need something like this:
  
   Job {
 Name = job_backup1
 Type = Backup
 Level = Full
 .
 .
   }
 
 I don't understand the point of a default backup level, since after all
 I will set the levels explicitly in the schedule resources. Is there
 some exceptional case when a default backup level is useful?

The point of the default backup level is for when you need to
run a job by hand from the console.

If you don't run anything other than Fulls (for example), you
can just define the backup level in one place.  If you run
multiple levels, you set those in the schedule, but you set
the default level in the Job, so that on the occasion you need
to run a job manually, it will select a certain level of job.

In your case, it sounds like you are most interested in
running incremental jobs by hand, so setting the default
level to Incremental makes the most sense.  In my case,
I'm generally not as worried about Incremental jobs as I am
about the failure of a Full, so my default level is Full.
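Concretely, the two pieces fit together like this (a sketch; resource names and times are illustrative, and the rest of the Job directives are elided):

```conf
# The Job carries a default Level, used when you "run" it by hand in
# bconsole; each Run line in the Schedule overrides it for scheduled runs.
Job {
  Name = job_backup1
  Type = Backup
  Level = Full                    # default for manual runs
  Schedule = weekly_cycle
  # ... Client, FileSet, Storage, Pool, Messages as usual ...
}

Schedule {
  Name = weekly_cycle
  Run = Level=Full 1st sun at 23:05            # monthly full
  Run = Level=Incremental mon-sat at 23:05     # nightly incrementals
}
```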

-John

-- 
The negative is comparable to the composer's score and the print
 to its performance. Each performance differs in subtle ways.
 - Ansel Adams
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] Is it a good idea to use bacula if I rarely have network connection?

2009-05-27 Thread John Lockard
For backing up a laptop locally, bacula seems to be HUGE
overkill.  How long will you be keeping these backups?  How many
backups will you be keeping?  My guess is that you'd be better
served by a little scripting, rsync and cron.

Each day, establish a new directory by date, then hourly run rsync
to that location.
Or, each day establish a new directory by date and create
a watcher file, then hourly run 'find [dir] -newer
/path/to/watcher' to build a list of files which have
changed, then 'touch /path/to/watcher'.  From the list of
files which changed, tar them to the backup location.

Depending on how you want to keep files, you'll either have
a date directory (/backups/20090527) with all of the days
changed files in it, or you'll have a date directory with hourly
subdirs with the hourly changes (/backups/20090527/01, etc.).

Then, when you're back at the office, where the bacula server
is, you kick off a backup manually which will backup your
/backups directory.
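The watcher-file variant above can be sketched roughly like this (paths are illustrative; assumes GNU/BSD tar with -T support):

```shell
#!/bin/sh
# Sketch of the "watcher file" approach: each day gets a dated directory,
# and each hourly run archives only the files changed since the last run.
snapshot() {
    src=$1; backup_root=$2; watcher=$3
    dest=$backup_root/$(date +%Y%m%d)
    mkdir -p "$dest"
    list=$(mktemp)
    if [ -f "$watcher" ]; then
        # Only files modified since the previous run.
        find "$src" -type f -newer "$watcher" > "$list"
    else
        # First run ever: take everything.
        find "$src" -type f > "$list"
    fi
    touch "$watcher"
    # One archive per hour holding just the changed files.
    tar -czf "$dest/$(date +%H).tar.gz" -T "$list" 2>/dev/null
    rm -f "$list"
}

# Cron would run something like this hourly:
# snapshot /home/user /backups /var/lib/backup.watcher
```

When you're back at the office, the Bacula job then only needs to pick up the /backups tree.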

-John

On Sun, May 24, 2009 at 06:29:28PM -0500, Zhengquan Zhang wrote:
 On Sun, May 24, 2009 at 06:50:52PM -0400, Dan Langille wrote:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
  
  John Drescher wrote:
   So, if you run backups locally to an external disk, you could back up 
   the
   catalog by taking an ascii dump of the database, and copying the dump 
   file
   to the same disk used for actual backup data.
    So can I just back up the directory where the database is?
  
   That is not a good way to back up a database. You need to run mysqldump
   instead and back up the text file it generates.
  
  What John said.
  
  Not only does this ensure that your backup contains a valid MySQL dump,
  it also exercises the database.  That is, the act of running mysqldump
  reads every part of the databases and dumps it out.  If there is any
  'hidden' problem with the databases, chances are, you'll find out about
  it during the dump process itself.
  
  Said explanation applies to all databases IMHO.
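In Bacula terms, the usual way to wire such a dump in (a sketch following the BackupCatalog example shipped in the default bacula-dir.conf; the database name and user arguments are illustrative) is a pair of run scripts on the catalog job:

```conf
Job {
  Name = BackupCatalog
  Type = Backup
  # Dump the catalog to a text file just before the job runs...
  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup bacula bacula"
  # ...and delete the dump once it has been written to the backup volume.
  RunAfterJob  = "/etc/bacula/scripts/delete_catalog_backup"
  # The job's FileSet should include the dump file (by default it lands
  # in the working directory, e.g. /var/lib/bacula/bacula.sql).
}
```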
 
 Thanks John and Dan for pointing this out for me. I will definitely do
 that.
 
 Zhengquan
 
  
  - --
  Dan Langille
  
  BSDCan - The Technical BSD Conference : http://www.bsdcan.org/
  PGCon  - The PostgreSQL Conference: http://www.pgcon.org/
  -BEGIN PGP SIGNATURE-
  Version: GnuPG v2.0.11 (FreeBSD)
  Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
  
  iEYEARECAAYFAkoZz0wACgkQCgsXFM/7nTxhwACffVigKxm386DGzYhGlxBkqIhv
  0usAoJnpJxahzI+T8sppKdJtG9g7J4fm
  =lzRI
  -END PGP SIGNATURE-
 
 -- 
 Zhengquan
 
 
 
 

-- 
Yes, evil comes in many forms, whether it be a man-eating cow or Joseph
 Stalin, but you can't let the package hide the pudding! Evil is just
 plain bad! You don't cotton to it. You gotta smack it in the nose with
 the rolled-up newspaper of goodness!  Bad dog! Bad dog! - The Tick
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



[Bacula-users] Not automounting... why?

2009-05-22 Thread John Lockard
Hi All,

I know I don't have full information here, but I don't want to send along
my full config, as I'd guess that's overkill.

22-May 06:17 tibor-sd JobId 4223: Please mount Volume 100027L2 or label a new 
one for:
Job:  Belobog-Data2-Users.2009-05-22_05.15.00_35
Storage:  NEO-LTO-0 (/dev/nst0)
Pool: Radev-Diff
Media type:   LTO-2
22-May 06:19 tibor-sd JobId 4209: Please mount Volume 000152L2 or label a new 
one for:
Job:  BackupCatalog.2009-05-21_23.10.00_21
Storage:  NEO-LTO-1 (/dev/nst1)
Pool: Weekly-Tape-WH
Media type:   LTO-2

I have a Overland Neo 4000 Tape Library with 57 slots and 3 tape drives.
My config is set to Automount tapes when needed.  But, sometimes it doesn't
and just sits there, waiting for me to manually load the requested tapes.
I had two instances of that this morning.  Both tapes were in the Library
and checking which tapes Bacula thought were loaded showed both of these
tapes as being In Changer.  I was able to easily do:
  mount storage=Overland_Neo_4000 drive=0 slot=30
  mount storage=Overland_Neo_4000 drive=1 slot=1

Then my jobs continued running nicely.

What would be the reasons that occasionally Bacula won't automount tapes
which it knows are present?

Thanks for any assistance and I can provide whatever configs would help
solve this dilemma,
-John


-- 
Brain: Pinky, Are you pondering what I'm pondering?
Pinky: Uh, I think so Brain, but how are we gonna teach
   a goat to dance with flippers on?
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] Not automounting... why?

2009-05-22 Thread John Lockard
And if I mounted a tape immediately after, and also made sure
that automount was set to yes?  Whenever I unmount a tape I
always make sure to mount another in its stead.

I'll dig through previous and current job and daemon messages
when I get back to my normal office.

-John

On Fri, May 22, 2009 at 11:03:51AM -0400, John Drescher wrote:
 On Fri, May 22, 2009 at 10:45 AM, John Lockard jlock...@umich.edu wrote:
  Hi All,
 
  I know I don't have full information here, but don't want to send along
  my full config as I'll guess that's overkill.
 
  22-May 06:17 tibor-sd JobId 4223: Please mount Volume 100027L2 or label a 
  new one for:
     Job:          Belobog-Data2-Users.2009-05-22_05.15.00_35
     Storage:      NEO-LTO-0 (/dev/nst0)
     Pool:         Radev-Diff
     Media type:   LTO-2
  22-May 06:19 tibor-sd JobId 4209: Please mount Volume 000152L2 or label a 
  new one for:
     Job:          BackupCatalog.2009-05-21_23.10.00_21
     Storage:      NEO-LTO-1 (/dev/nst1)
     Pool:         Weekly-Tape-WH
     Media type:   LTO-2
 
  I have a Overland Neo 4000 Tape Library with 57 slots and 3 tape drives.
  My config is set to Automount tapes when needed.  But, sometimes it doesn't
  and just sits there, waiting for me to manually load the requested tapes.
  I had two instances of that this morning.  Both tapes were in the Library
  and checking which tapes Bacula thought were loaded showed both of these
  tapes as being In Changer.  I was able to easily do:
   mount storage=Overland_Neo_4000 drive=0 slot=30
   mount storage=Overland_Neo_4000 drive=1 slot=1
 
  Then my jobs continued running nicely.
 
  What would be the reasons that occasionally Bacula won't automount tapes
  which it knows are present?
 
 
 One reason is if you umounted the previous tape.
 
 John
 
 

-- 
A photograph is usually looked at - seldom looked into. - Ansel Adams
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] run full backup in bconsole but it runs incremental after one full?

2009-05-22 Thread John Lockard
When you run a job by hand the schedule isn't involved.
Either way, for your Schedule entry you need Level=
before the word Full.

 Schedule {
   Name = test   
   Run = Level=Full at 11:50   
 }


But, your problem is that your Job doesn't have a Default
Level defined.  You'll need something like this:

 Job {
   Name = job_backup1
   Type = Backup
   Level = Full
   .
   .
 }

On Fri, May 22, 2009 at 11:48:07AM -0500, Zhengquan Zhang wrote:
 Hello, 
 
 The first time I ran the job it ran a full backup fine, but when I ran it
 more times, it ran incrementals automatically. Could anyone help me
 understand this? I am learning Bacula, so please forgive the easy
 question.
 
 *run
 A job name must be specified.
 Automatically selected Job: job_backup1
 Run Backup job
 JobName:  job_backup1
 Level:Incremental
 Client:   client_backup1
 FileSet:  fileset_backup1
 Pool: Default (From Job resource)
 Storage:  storage_backup1 (From Job resource)
 When: 2009-05-22 11:49:34
 Priority: 10
 OK to run? (yes/mod/no):
 
 the schedule section is listed below;
 
 Schedule {
   Name = test   
   Run = Full at 11:50   
 }
 
 attached is bacula-dir.conf
 
 #
 # Default Bacula Director Configuration file
 #
 #  The only thing that MUST be changed is to add one or more
 #   file or directory names in the Include directive of the
 #   FileSet resource.
 #
 #  For Bacula release 2.4.4 (28 December 2008) -- debian lenny/sid
 #
 #  You might also want to change the default email address
 #   from root to your address.  See the mail and operator
 #   directives in the Messages resource.
 #
 
 Director {# define myself
   Name = director_backup1
   DIRport = 9101# where we listen for UA connections
   QueryFile = /etc/bacula/scripts/query.sql
   WorkingDirectory = /var/lib/bacula
   PidDirectory = /var/run/bacula
   Maximum Concurrent Jobs = 1
   Password = CLEANED # Console password
   Messages = Daemon
   DirAddress = 127.0.0.1
 }
 
 Job {
   Name = job_backup1
   Type = Backup
   Client = client_backup1
   FileSet = fileset_backup1
   Pool = Default
   Schedule = test
   Full Backup Pool = pool_backup1_full
   Differential Backup Pool = pool_backup1_diff
   Incremental Backup Pool = pool_backup1_inc
   Messages = Standard
   Storage = storage_backup1
   Write Bootstrap = /var/lib/bacula/job_backup1.bsr
   Priority = 10
 }
 
 
 # List of files to be backed up
 FileSet {
   Name = fileset_backup1
   Include {
 Options {
   signature = MD5
 }
 File = /etc
 File = /home/zhengquan
 File = /var
   }
   Exclude {
 File = /proc
 File = /tmp
 File = /.journal
 File = /.fsck
   }
 }
 
 #test schedule
 Schedule {
   Name = test 
   Run = Full at 11:50
 }
 
 # backup1, the backup server itself
 Client {
   Name = client_backup1
   Address = backup1
   FDPort = 9102
   Catalog = MyCatalog
   Password = CLEANED  # password for FileDaemon
   File Retention = 60 days# 60 days
   Job Retention = 6 months# six months
   AutoPrune = yes # Prune expired Jobs/Files
 }
 
 
 # Definition of file storage device
 Storage {
   Name = storage_backup1
   Address = backup1# N.B. Use a fully qualified name here
   SDPort = 9103
   Password = CLEANED
   Device = device_backup1
   Media Type = File
 }
 
 
 
 # Generic catalog service
 Catalog {
   Name = MyCatalog
   dbname = CLEANED; dbuser = CLEANED; dbpassword = CLEANED
 }
 
 # Reasonable message delivery -- send most everything to email address
 #  and to the console
 Messages {
   Name = Standard
   mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) %r\" -s \"Bacula: %t %e of %c %l\" %r"
   operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) %r\" -s \"Bacula: Intervention needed for %j\" %r"
   mail = zhang.zhengq...@gmail.com = all, !skipped
   operator = zhang.zhengq...@gmail.com = mount
   console = all, !skipped, !saved
 #
 # WARNING! the following will create a file that you must cycle from
 #  time to time as it will grow indefinitely. However, it will
 #  also keep all your messages if they scroll off the console.
 #
   append = /var/lib/bacula/log = all, !skipped
 }
 
 
 #
 # Message delivery for daemon messages (no job).
 Messages {
   Name = Daemon
   mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) %r\" -s \"Bacula daemon message\" %r"
   mail = zhang.zhengq...@gmail.com = all, !skipped
   console = all, !skipped, !saved
   append = /var/lib/bacula/log = all, !skipped
 }
 
 Pool {
Name = pool_backup1_full
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Volume Retention = 6 months
Label Format = backup1_full_
Maximum Volume Bytes = 500M
 }
 
 
 Pool {
Name = pool_backup1_diff
Pool Type = Backup
Recycle = yes
AutoPrune = yes
# default 1 year
Volume Retention 

Re: [Bacula-users] run full backup in bconsole but it runs incremental after one full?

2009-05-22 Thread John Lockard
Also, when you post your configs, it would be a really good idea
to remove password and account information.


On Fri, May 22, 2009 at 02:12:13PM -0400, John Lockard wrote:
 When you run a job by hand the schedule isn't involved.
 Either way, for your Schedule entry you need Level=
 before the word Full.
 
  Schedule {
Name = test   
Run = Level=Full at 11:50   
  }
 
 
 But, your problem is that your Job doesn't have a Default
 Level defined.  You'll need something like this:
 
  Job {
Name = job_backup1
Type = Backup
Level = Full
.
.
  }
 
 On Fri, May 22, 2009 at 11:48:07AM -0500, Zhengquan Zhang wrote:
  Hello, 
  
  The first time I ran the job it ran a full backup fine, but when I ran it
  more times, it ran incrementals automatically. Could anyone help me
  understand this? I am learning Bacula, so please forgive the easy
  question.
  
  *run
  A job name must be specified.
  Automatically selected Job: job_backup1
  Run Backup job
  JobName:  job_backup1
  Level:Incremental
  Client:   client_backup1
  FileSet:  fileset_backup1
  Pool: Default (From Job resource)
  Storage:  storage_backup1 (From Job resource)
  When: 2009-05-22 11:49:34
  Priority: 10
  OK to run? (yes/mod/no):
  
  the schedule section is listed below;
  
  Schedule {
Name = test   
Run = Full at 11:50   
  }
  
  attached is bacula-dir.conf
  
  #
  # Default Bacula Director Configuration file
  #
  #  The only thing that MUST be changed is to add one or more
  #   file or directory names in the Include directive of the
  #   FileSet resource.
  #
  #  For Bacula release 2.4.4 (28 December 2008) -- debian lenny/sid
  #
  #  You might also want to change the default email address
  #   from root to your address.  See the mail and operator
  #   directives in the Messages resource.
  #
  
  Director {# define myself
Name = director_backup1
DIRport = 9101# where we listen for UA connections
QueryFile = /etc/bacula/scripts/query.sql
WorkingDirectory = /var/lib/bacula
PidDirectory = /var/run/bacula
Maximum Concurrent Jobs = 1
Password = CLEANED # Console password
Messages = Daemon
DirAddress = 127.0.0.1
  }
  
  Job {
Name = job_backup1
Type = Backup
Client = client_backup1
FileSet = fileset_backup1
Pool = Default
Schedule = test
Full Backup Pool = pool_backup1_full
Differential Backup Pool = pool_backup1_diff
Incremental Backup Pool = pool_backup1_inc
Messages = Standard
Storage = storage_backup1
Write Bootstrap = /var/lib/bacula/job_backup1.bsr
Priority = 10
  }
  
  
  # List of files to be backed up
  FileSet {
Name = fileset_backup1
Include {
  Options {
signature = MD5
  }
  File = /etc
  File = /home/zhengquan
  File = /var
}
Exclude {
  File = /proc
  File = /tmp
  File = /.journal
  File = /.fsck
}
  }
  
  #test schedule
  Schedule {
Name = test 
Run = Full at 11:50
  }
  
  # backup1, the backup server itself
  Client {
Name = client_backup1
Address = backup1
FDPort = 9102
Catalog = MyCatalog
Password = CLEANED  # password for FileDaemon
File Retention = 60 days# 60 days
Job Retention = 6 months# six months
AutoPrune = yes # Prune expired Jobs/Files
  }
  
  
  # Definition of file storage device
  Storage {
Name = storage_backup1
Address = backup1# N.B. Use a fully qualified name here
SDPort = 9103
Password = CLEANED
Device = device_backup1
Media Type = File
  }
  
  
  
  # Generic catalog service
  Catalog {
Name = MyCatalog
dbname = CLEANED; dbuser = CLEANED; dbpassword = CLEANED
  }
  
  # Reasonable message delivery -- send most everything to email address
  #  and to the console
  Messages {
Name = Standard
  mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) %r\" -s \"Bacula: %t %e of %c %l\" %r"
  operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) %r\" -s \"Bacula: Intervention needed for %j\" %r"
mail = zhang.zhengq...@gmail.com = all, !skipped
operator = zhang.zhengq...@gmail.com = mount
console = all, !skipped, !saved
  #
  # WARNING! the following will create a file that you must cycle from
  #  time to time as it will grow indefinitely. However, it will
  #  also keep all your messages if they scroll off the console.
  #
append = /var/lib/bacula/log = all, !skipped
  }
  
  
  #
  # Message delivery for daemon messages (no job).
  Messages {
Name = Daemon
  mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) %r\" -s \"Bacula daemon message\" %r"
mail = zhang.zhengq...@gmail.com = all, !skipped
console = all, !skipped, !saved
append = /var/lib/bacula/log = all

Re: [Bacula-users] Not automounting... why?

2009-05-22 Thread John Lockard
Okay, here's more informational information. :)

I've attached the logs sections for both jobs.

For both jobs which required me to mount a tape I had something
like the following happen.


22-May 05:16 tibor-sd JobId 4223: 3301 Issuing autochanger loaded? drive 0 
command.
22-May 05:16 tibor-sd JobId 4223: 3302 Autochanger loaded? drive 0, result: 
nothing loaded.
22-May 05:16 tibor-sd JobId 4223: 3304 Issuing autochanger load slot 29, drive 
0 command.
22-May 05:17 tibor-sd JobId 4223: 3305 Autochanger load slot 29, drive 0, 
status is OK.
22-May 05:17 tibor-sd JobId 4223: Volume 100116L2 previously written, moving 
to end of data.
22-May 05:17 tibor-sd JobId 4223: Error: Unable to position to end of data on 
device NEO-LTO-0 (/dev/nst0): ERR=dev.c:946 ioctl MTIOCGET erro
r on NEO-LTO-0 (/dev/nst0). ERR=Input/output error.

22-May 05:17 tibor-sd JobId 4223: Marking Volume 100116L2 in Error in Catalog.
22-May 05:17 tibor-sd JobId 4223: 3307 Issuing autochanger unload slot 29, 
drive 0 command.
22-May 05:17 tibor-sd JobId 4223: 3995 Bad autochanger unload slot 29, drive 
0: ERR=Child exited with code 1
Results=Unloading drive 0 into Storage Element 29...mtx: Request Sense: Long 
Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=70 (Current)
mtx: Request Sense: Sense Key=Illegal Request
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: Request Sense: Additional Sense Code = 53
mtx: Request Sense: Additional Sense Qualifier = 02
mtx: Request Sense: BPV=no
mtx: Request Sense: Error in CDB=no
mtx: Request Sense: SKSV=no

MOVE MEDIUM from Element Address 480 to 60 Failed


So, it came across a bad tape, puked, tried to unload to slot and
failed.  It tried the unload twice (within the same minute) failed
both times, then sat there waiting for me to issue the mount command,
which succeeded.

Could it be a timing issue?

-John

On Fri, May 22, 2009 at 11:58:44AM -0400, John Lockard wrote:
 And if I mounted a tape immediately after, and also made sure
 that automount was set to yes?  Whenever I unmount a tape I
 always make sure to mount another in its stead.
 
 I'll dig through previous and current job and daemon messages
 when I get back to my normal office.
 
 -John
 
 On Fri, May 22, 2009 at 11:03:51AM -0400, John Drescher wrote:
  On Fri, May 22, 2009 at 10:45 AM, John Lockard jlock...@umich.edu wrote:
   Hi All,
  
   I know I don't have full information here, but don't want to send along
   my full config as I'll guess that's overkill.
  
   22-May 06:17 tibor-sd JobId 4223: Please mount Volume 100027L2 or label 
   a new one for:
      Job:          Belobog-Data2-Users.2009-05-22_05.15.00_35
      Storage:      NEO-LTO-0 (/dev/nst0)
      Pool:         Radev-Diff
      Media type:   LTO-2
   22-May 06:19 tibor-sd JobId 4209: Please mount Volume 000152L2 or label 
   a new one for:
      Job:          BackupCatalog.2009-05-21_23.10.00_21
      Storage:      NEO-LTO-1 (/dev/nst1)
      Pool:         Weekly-Tape-WH
      Media type:   LTO-2
  
   I have a Overland Neo 4000 Tape Library with 57 slots and 3 tape drives.
   My config is set to Automount tapes when needed.  But, sometimes it 
   doesn't
   and just sits there, waiting for me to manually load the requested tapes.
   I had two instances of that this morning.  Both tapes were in the Library
   and checking which tapes Bacula thought were loaded showed both of these
   tapes as being In Changer.  I was able to easily do:
    mount storage=Overland_Neo_4000 drive=0 slot=30
    mount storage=Overland_Neo_4000 drive=1 slot=1
  
   Then my jobs continued running nicely.
  
   What would be the reasons that occasionally Bacula won't automount tapes
   which it knows are present?
  
  
  One reason is if you umounted the previous tape.
  
  John
  
  
 
 -- 
 A photograph is usually looked at - seldom looked into. - Ansel Adams
 ---
  John M. Lockard |  U of Michigan - School of Information
  Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
   jlock...@umich.edu |Ann Arbor, MI  48109-2112
  www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
 ---
 

Re: [Bacula-users] Not automounting... why?

2009-05-22 Thread John Lockard
Release doesn't, however, eject the tape from the drive, correct?

Also, I mentioned that I will always follow my unmount command
with a mount of a different tape.

Main reasons for unmounting...  Weekly switching of backup tapes.
My full backups run (mostly) during the 1st 7 days of the month,
and after the 7th, whether a tape is full or not, I will remove
the tape from the library and hold it for offsite storage.  When
I load the library with a new batch of tapes, I'll mount a tape
in each of the tape library's drives.
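For reference, the bconsole commands involved look roughly like this (a sketch; the storage name and slot are illustrative):

```text
release storage=Overland_Neo_4000 drive=0
    (frees the drive but leaves it under Bacula's control, so it can
     still automount the next volume)

unmount storage=Overland_Neo_4000 drive=0
    (takes the drive out of Bacula's control until an explicit mount)

mount storage=Overland_Neo_4000 drive=0 slot=30
    (hands the drive back to Bacula after loading the new batch of tapes)
```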

-John

On Fri, May 22, 2009 at 02:23:32PM -0400, John Drescher wrote:
 On Fri, May 22, 2009 at 11:58 AM, John Lockard jlock...@umich.edu wrote:
  And if I mounted a tape immediately after and also made sure
  that automout was set to yes?  Whenever I unmount a tape I
  always make sure to mount another in it's stead.
 
  I'll dig through previous and current job and daemon messages
  when I get back to my normal office.
 
 
 If you unmount a volume from any drive use the release command instead
 of unmount. The reason is that unmount will take the drive out of
 bacula's control and it will remain out of bacula's control until you
 issue the mount command on that drive.
 
 John
 
 

-- 
Brain: Pinky, Are you pondering what I'm pondering?
Pinky: Uh, I think so Don Cerebro, but why would
   Sophia Loren do a musical?
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] [Fwd: Re: FD on Win2000 Server]

2009-05-21 Thread John Lockard
On Thu, May 21, 2009 at 01:34:31PM +0300, Alnis Morics wrote:
 Yes, I can list all the files, but that doesn't mean I can back them up.
 When I try to run the job, it terminates with an error, and there's also
 nothing I can restore.
 
 Here's the output of the last job:
 
3743 21-May 13:12 debian-dir JobId 77: No prior Full backup Job record 
 found.
3744 21-May 13:12 debian-dir JobId 77: No prior or suitable Full backup 
 found in catalog. Doing FULL backup.
3745 21-May 13:12 debian-dir JobId 77: Start Backup JobId 77, 
 Job=win2000srv-share-bkp.2009-05-21_13.12.15_39
3746 21-May 13:12 debian-dir JobId 77: Using Device FileStorage
3747 21-May 13:12 debian-sd JobId 77: Volume Volume001 previously 
 written, moving to end of data.
3748 21-May 13:12 debian-sd JobId 77: Ready to append to end of 
 Volume Volume001 size=8420568963
3749 21-May 13:14 win2000srv-fd JobId 77: Fatal 
 error: /home/kern/bacula/k/src/filed/backup.c:948 Network send error to SD. 
 ERR=Input/output error
3750 21-May 13:12 debian-sd JobId 77: Fatal error: append.c:243 Network 
 error on data channel. ERR=No data available

To me this looks like a firewall issue.  You'll need to make sure that
either:
a: the firewall is turned off on the Windows system, or
b: your Windows system's firewall has proper entries which allow
   connections from the client to the Storage Daemon.
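One quick way to rule the firewall in or out is to check whether the Storage Daemon's port (9103 by default) is reachable from the client. A minimal sketch (the hostname is a placeholder, not a real host from this thread):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unresolvable hosts.
        return False

# e.g. run on the client: can_reach("debian-sd.example.com", 9103)
```

If this returns False from the client while the SD is up, a firewall (or routing) problem between the two is the likely culprit.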

3751 21-May 13:12 debian-sd JobId 77: Job write elapsed time = 00:00:06, 
 Transfer rate = 10.94 K bytes/second
3752 21-May 13:12 debian-sd JobId 77: Job 
 win2000srv-share-bkp.2009-05-21_13.12.15_39 marked to be canceled.
3753 21-May 13:12 debian-sd JobId 77: Fatal error: fd_cmds.c:170 Command 
 error with FD, hanging up. Append data error.


-John

-- 
The beatings will continue until morale improves.
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] Backup errors...

2009-05-18 Thread John Lockard
On Fri, May 15, 2009 at 04:28:23PM -0400, John Lockard wrote:
 On Fri, May 15, 2009 at 01:30:57PM +0200, Bruno Friedmann wrote:
  John Lockard wrote:
   Hi all,
   
   Saw this last night.  What would cause these Fatal errors?
   
   Server: Linux 2.6.18 x86_64
   Bacula Version:
 Server: 3.0.0
 Client: 2.4.4 (SPARC Solaris 8)
 Filesystem: just under 1TB
   
   Thanks for any help,
   John
   
   13-May 22:25 tibor-sd JobId 3833: Labeled new Volume Monthly-SIN-0363 
   on device Storage_Array_1 (/data1/bacula/storage).
   13-May 22:25 tibor-sd JobId 3833: Wrote label to prelabeled Volume 
   Monthly-SIN-0363 on device Storage_Array_1 (/data1/bacula/storage)
   13-May 22:25 tibor-sd JobId 3833: New volume Monthly-SIN-0363 mounted 
   on device Storage_Array_1 (/data1/bacula/storage) at 13-May-2009
   +22:25.
   13-May 23:06 tibor-dir JobId 3833: Fatal error: sql_create.c:731 
   sql_create.c:731 insert INSERT INTO batch VALUES
   +(10889201,3833,'/data0/projects/polisci/corpora/bills/106txt-preferred/','106-H.R.01517.txt','gAD4
Fs06a IG0 B CY9 HYs A iV CAA I BKCOdS
   +BDxzxI BDxz0l A A C','GanvMb5SbMX2t+HvG3WbzQ') failed:
   Incorrect key file for table '/tmp/#sqlbf2_fe_0.MYI'; try to repair it
   13-May 23:06 tibor-dir JobId 3833: sql_create.c:731 INSERT INTO batch 
   VALUES
   +(10889201,3833,'/data0/projects/polisci/corpora/bills/106txt-preferred/','106-H.R.01517.txt','gAD4
Fs06a IG0 B CY9 HYs A iV CAA I BKCOdS
   +BDxzxI BDxz0l A A C','GanvMb5SbMX2t+HvG3WbzQ')
   13-May 23:06 tibor-dir JobId 3833: Fatal error: catreq.c:488 Attribute 
   create error. sql_get.c:1029 Media record for Volume Monthly-SIN-0363
   +not found.
   13-May 23:06 tibor-sd JobId 3833: Job Tangra-Data0.2009-05-11_05.15.00_47 
   marked to be canceled.
   13-May 23:06 tibor-sd JobId 3833: Fatal error: fd_cmds.c:181 FD command 
   not found: and variable operation maintenance and replacement for such
   +Central Arizona Project
water.
 `f WATER RIGHTS UNAFFECTED BY USE OR NON-USE- The lack of use of water 
   by the Nation or the use or lack of use of water by any person or
   +entity with whom the Nation enters into a contract for an exchange lease 
   option for the lease or disposition of water pursuant to subsection c
   +shall not diminish reduce or impair.
 `1 the water rights of the Nation as established under this title or 
   any other applicable law; or
 `2 any use rights
   
   
   
   
  
  As you have batch insert enabled, could you verify that you don't run
  out of space in /tmp during the backup ( as indicated in the
  trace )?
  I suspect your big jobs are creating a temp SQL table ( temp db ) that is
  too big, and you run out of space ...
  
  You should tell MySQL to write its temp tables to another place with fast
  disks and lots of space.
 
 Hi Bruno,
 
 No, I didn't have batch insert enabled.  I've now recompiled with it enabled.
 I think you may be entirely correct on the tmpdir setting in MySQL,
 I've relocated the tmpdir location.
 I've just started one of my large jobs.  We'll see if these changes
 fix my problems.
 
 Thanks,
 John

This indeed took care of my problems.  I will guess that it
was the relocation of the MySQL tmpdir, but I implemented
both changes at the same time, so can't be 100% sure.

I am running a job right now where the temp DB is around
4G in size, which would have WAY overfilled the original
location's space.
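For reference, relocating MySQL's temporary directory is a one-line change in my.cnf (the path is an example; mysqld must be restarted and the directory must be writable by it):

```
[mysqld]
tmpdir = /data1/mysql-tmp   # fast disk with plenty of free space for batch-insert temp tables
```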

-John
-- 
   (In this one, Pinky is smart.)
Brain: Pinky, Are you pondering what I'm pondering?
Pinky: Yes I am.
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables 
unlimited royalty-free distribution of the report engine 
for externally facing server and web deployment. 
http://p.sf.net/sfu/businessobjects


Re: [Bacula-users] Backup errors...

2009-05-15 Thread John Lockard
On Fri, May 15, 2009 at 01:30:57PM +0200, Bruno Friedmann wrote:
 John Lockard wrote:
  Hi all,
  
  Saw this last night.  What would cause these Fatal errors?
  
  Server: Linux 2.6.18 x86_64
  Bacula Version:
Server: 3.0.0
Client: 2.4.4 (SPARC Solaris 8)
Filesystem: just under 1TB
  
  Thanks for any help,
  John
  
  13-May 22:25 tibor-sd JobId 3833: Labeled new Volume Monthly-SIN-0363 on 
  device Storage_Array_1 (/data1/bacula/storage).
  13-May 22:25 tibor-sd JobId 3833: Wrote label to prelabeled Volume 
  Monthly-SIN-0363 on device Storage_Array_1 (/data1/bacula/storage)
  13-May 22:25 tibor-sd JobId 3833: New volume Monthly-SIN-0363 mounted on 
  device Storage_Array_1 (/data1/bacula/storage) at 13-May-2009
  +22:25.
  13-May 23:06 tibor-dir JobId 3833: Fatal error: sql_create.c:731 
  sql_create.c:731 insert INSERT INTO batch VALUES
  +(10889201,3833,'/data0/projects/polisci/corpora/bills/106txt-preferred/','106-H.R.01517.txt','gAD4
   Fs06a IG0 B CY9 HYs A iV CAA I BKCOdS
  +BDxzxI BDxz0l A A C','GanvMb5SbMX2t+HvG3WbzQ') failed:
  Incorrect key file for table '/tmp/#sqlbf2_fe_0.MYI'; try to repair it
  13-May 23:06 tibor-dir JobId 3833: sql_create.c:731 INSERT INTO batch VALUES
  +(10889201,3833,'/data0/projects/polisci/corpora/bills/106txt-preferred/','106-H.R.01517.txt','gAD4
   Fs06a IG0 B CY9 HYs A iV CAA I BKCOdS
  +BDxzxI BDxz0l A A C','GanvMb5SbMX2t+HvG3WbzQ')
  13-May 23:06 tibor-dir JobId 3833: Fatal error: catreq.c:488 Attribute 
  create error. sql_get.c:1029 Media record for Volume Monthly-SIN-0363
  +not found.
  13-May 23:06 tibor-sd JobId 3833: Job Tangra-Data0.2009-05-11_05.15.00_47 
  marked to be canceled.
  13-May 23:06 tibor-sd JobId 3833: Fatal error: fd_cmds.c:181 FD command not 
  found: and variable operation maintenance and replacement for such
  +Central Arizona Project
   water.
`f WATER RIGHTS UNAFFECTED BY USE OR NON-USE- The lack of use of water by 
  the Nation or the use or lack of use of water by any person or
  +entity with whom the Nation enters into a contract for an exchange lease 
  option for the lease or disposition of water pursuant to subsection c
  +shall not diminish reduce or impair.
`1 the water rights of the Nation as established under this title or any 
  other applicable law; or
`2 any use rights
  
  
  
  
 
 As you have batch insert enabled, could you verify that you don't run out of
 space in /tmp during the backup ( as indicated in the
 trace )?
 I suspect your big jobs are creating a temp SQL table ( temp db ) that is too
 big, and you run out of space ...
 
 You should tell MySQL to write its temp tables to another place with fast disks
 and lots of space.

Hi Bruno,

No, I didn't have batch insert enabled.  I've now recompiled with it enabled.
I think you may be entirely correct on the tmpdir setting in MySQL,
I've relocated the tmpdir location.
I've just started one of my large jobs.  We'll see if these changes
fix my problems.

Thanks,
John

-- 
If you take out the killings, Washington actually
 has a very, very low crime rate. - Marion Barry
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



[Bacula-users] Backup errors...

2009-05-14 Thread John Lockard
Hi all,

Saw this last night.  What would cause these Fatal errors?

Server: Linux 2.6.18 x86_64
Bacula Version:
  Server: 3.0.0
  Client: 2.4.4 (SPARC Solaris 8)
  Filesystem: just under 1TB

Thanks for any help,
John

13-May 22:25 tibor-sd JobId 3833: Labeled new Volume Monthly-SIN-0363 on 
device Storage_Array_1 (/data1/bacula/storage).
13-May 22:25 tibor-sd JobId 3833: Wrote label to prelabeled Volume 
Monthly-SIN-0363 on device Storage_Array_1 (/data1/bacula/storage)
13-May 22:25 tibor-sd JobId 3833: New volume Monthly-SIN-0363 mounted on 
device Storage_Array_1 (/data1/bacula/storage) at 13-May-2009
+22:25.
13-May 23:06 tibor-dir JobId 3833: Fatal error: sql_create.c:731 
sql_create.c:731 insert INSERT INTO batch VALUES
+(10889201,3833,'/data0/projects/polisci/corpora/bills/106txt-preferred/','106-H.R.01517.txt','gAD4
 Fs06a IG0 B CY9 HYs A iV CAA I BKCOdS
+BDxzxI BDxz0l A A C','GanvMb5SbMX2t+HvG3WbzQ') failed:
Incorrect key file for table '/tmp/#sqlbf2_fe_0.MYI'; try to repair it
13-May 23:06 tibor-dir JobId 3833: sql_create.c:731 INSERT INTO batch VALUES
+(10889201,3833,'/data0/projects/polisci/corpora/bills/106txt-preferred/','106-H.R.01517.txt','gAD4
 Fs06a IG0 B CY9 HYs A iV CAA I BKCOdS
+BDxzxI BDxz0l A A C','GanvMb5SbMX2t+HvG3WbzQ')
13-May 23:06 tibor-dir JobId 3833: Fatal error: catreq.c:488 Attribute create 
error. sql_get.c:1029 Media record for Volume Monthly-SIN-0363
+not found.
13-May 23:06 tibor-sd JobId 3833: Job Tangra-Data0.2009-05-11_05.15.00_47 
marked to be canceled.
13-May 23:06 tibor-sd JobId 3833: Fatal error: fd_cmds.c:181 FD command not 
found: and variable operation maintenance and replacement for such
+Central Arizona Project
 water.
  `f WATER RIGHTS UNAFFECTED BY USE OR NON-USE- The lack of use of water by the 
Nation or the use or lack of use of water by any person or
+entity with whom the Nation enters into a contract for an exchange lease 
option for the lease or disposition of water pursuant to subsection c
+shall not diminish reduce or impair.
  `1 the water rights of the Nation as established under this title or any 
other applicable law; or
  `2 any use rights




-- 
And she said to me, are you bad, and I said, yah, I'm bad baby, and
 she said are you BAD? And I said Yah, baby, yah!!  And she said are
 you bad, and I said HEADS UP SPACE PONIES, WE'RE MAKING GRAVY WITHOUT
 THE LUMPS!!! - The Evil Midnight Bomber, What Bombs at Midnight
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---

--
The NEW KODAK i700 Series Scanners deliver under ANY circumstances! Your
production scanning environment may not be a perfect world - but thanks to
Kodak, there's a perfect scanner to get the job done! With the NEW KODAK i700
Series Scanner you'll get full speed at 300 dpi even with all image 
processing features enabled. http://p.sf.net/sfu/kodak-com


Re: [Bacula-users] Backup errors...

2009-05-14 Thread John Lockard
On Thu, May 14, 2009 at 03:54:55PM -0400, John Drescher wrote:
 On Thu, May 14, 2009 at 2:26 PM, John Lockard jlock...@umich.edu wrote:
  Hi all,
 
  Saw this last night.  What would cause these Fatal errors?
 
 
 Possible database corruption.

Other backups after this one, and concurrent ones, ran fine.  I am
consistently seeing these kinds of errors on three of my
machines (2 Linux, 1 Solaris) on which the Full backup
is close to a TB or more.

-John

-- 
A man steals a loaf of bread and never hears the end of it.
- summary of 'Les Miserables'
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



[Bacula-users] Error in src/filed/acl.c

2009-05-14 Thread John Lockard
It appears that in acl.c, line 1145, there's a stray ';' at the
end of the line (version 3.0.1).  Removing it allows compilation
on Solaris.
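As an aside, stray semicolons are worth hunting for carefully because they are sometimes legal C: after an if-condition they compile cleanly but silently change behavior instead of failing the build. A minimal illustration of that trap (hypothetical code, not the actual acl.c in question):

```c
/* Hypothetical illustration -- not the actual Bacula acl.c code. */

static int with_stray_semicolon(int ok) {
    int ran = 0;
    if (ok); /* stray ';' ends the if statement right here */
    {
        ran = 1; /* this block runs whether ok is true or not */
    }
    return ran;
}

static int fixed(int ok) {
    int ran = 0;
    if (ok) {
        ran = 1; /* runs only when ok is nonzero */
    }
    return ran;
}
```

Most modern compilers flag the first form with a warning such as `-Wempty-body`, which is a good reason to build with warnings enabled.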

-John

-- 
What good is a ring Mr. Baggins if you don't have
 any fingers. - Agent Elrond - Matrix of the Rings
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



[Bacula-users] Continued backup problems. Interesting error.

2009-04-21 Thread John Lockard
Has anyone seen anything like this before?


21-Apr 14:10 tibor-dir JobId 3240: Start Backup JobId 3240, 
Job=Belobog-Data3-Users.2009-04-21_12.10.22_07
21-Apr 14:10 tibor-dir JobId 3240: Using Device NEO-LTO-1
21-Apr 14:11 tibor-sd JobId 3240: Spooling data ...
21-Apr 16:10 tibor-dir JobId 3240: Fatal error: Network error with FD during 
Backup: ERR=Connection reset by peer
21-Apr 16:10 tibor-sd JobId 3240: Job 
Belobog-Data3-Users.2009-04-21_12.10.22_07 marked to be canceled.
21-Apr 16:10 tibor-sd JobId 3240: Fatal error: fd_cmds.c:181 FD command not 
found: s ,
including Bangladesh , voted in favour of resolution 50 / 245 ; this included 
the five nuclear weapon States .
Bangladesh welcomes the decision of India and Pakistan , as announced by their 
Prime Ministers in the General Assembly last year , to join the CTBT .
We see this as a positive step towards ensuring peace and security in the South 
Asian region , and as conducive to fostering fruitful economic cooperation in 
the region .
Bangladeshs major concern , as a least developed country , has been the high 
financial obligations that would devolve on the States parties on account of 
the implementation of the CTBT ,
including the expenses of the Preparatory Commission for the Comprehensive 
Nuclear Test Ban Treaty Organization , of that organization itself , and of the 
verification regime , including the international CTBT monitoring system and 
the Provisional Technical Secretariat .
As coordinator of the least developed countries , Bangladesh has already voiced 
the concern of those countries about this matter in various relevant forums , 
including the Conference on Disarmament in Geneva .
As a party to the Convention on the Prohibition of the Development , Production 
and Stockpiling of Bacteriological ( Biological ) and Toxin Weapons and on 
Their Destruction , Bangladesh is fully aware of its responsibilities ,
and takes its obligations seriously .
By not having developed , acquired or stockpiled biological weapons , 
Bangladesh is in full compliance with the provisions of the Convention .
Full adherence to the Convention by all States would be an ultimate guarantee 
ensuring the effective elimination of biological weapons .
There is therefore a clear need for charting a credible compliance regime .
In this context , Bangladesh welcomes the ongoing work of the ad hoc group 
entrusted to negotiate a protocol to strengthen the Convention by developing 
verification and compliance mechanisms .
As for the Convention on the Prohibition of the Development , Production , 
Stockpiling and Use of Chemical Weapons and on Their Destruction ,
Bangladesh was among the first to sign it , and although we have no chemical 
weapons programme or facilities , we ratified the Convention two years ago .
But ratification of the Convention will have little meaning unless the major 
chemical weapons countries join it .
We emphasize the necessity of universal adherence to the Convention , and call 
upon all States that have not done so to become parties to the Convention 
without further delay .
We also underline the importance of the early initiation of activities under 
all relevant provisions of the Convention by the Organization for the 
Prohibition of Chemical Weapons .
We call for an early convening of a fourth special session of the General 
Assembly devoted to disarmament .
It is time that the international community again reviewed the implementation 
of the Final Document of the tenth special session of the General Assembly as 
well as the outcomes of the subsequent special sessions on disarmament ,
and took stock of the international security and disarmament situation in the 
post @-@ cold @-@ war era . While nuclear disarmament should remain the highest 
priority for us ,
we have to identify the emerging challenges presented by the new era and 
formulate an agreed plan of action to deal with these in a true spirit of 
multilateralism .
My delegation believes that only a special session of the General Assembly can 
address the broad subject of disarmament , taking into account in particular 
its relationship to development , with the comprehensiveness and thoroughness 
it deserves .
In todays world , regional disarmament presents newer challenges .

The continued arms race , which is a result of unresolved problems , is a 
formidable source of threats to security and is draining considerable resources 
from many countries at the cost of investment in economic and social 
development .
It is our belief that while regional confidence @-@ building measures can go a 
long way , true regional disarmament will largely depend on understanding at 
the global level and on courageous gestures from major Powers .
Regional disarmament will not advance unless legitimate security concerns are 
addressed adequately .
In this connection , we expect that the United_Nations Regional Centre for 
Peace and Disarmament in Asia and the Pacific will be given more support and 

Re: [Bacula-users] Backup failing reliably repeatable

2009-04-20 Thread John Lockard
Nope, disabling tso and tx changed nothing.

-John

On Fri, Apr 17, 2009 at 12:24:25PM -0400, John Lockard wrote:
 On Fri, Apr 17, 2009 at 09:39:24AM +1000, James Harper wrote:
  Does belobog have the same network adapter and kernel as your other
  servers?
 
 It has the same ethernet (Intel e1000) as other servers and similar
 kernel, but is now a mostly legacy system.  A professor has ongoing
 research which prevents me from upgrading.  Unfortunately, other systems
 of that same class don't have nearly as much data to back up.
 
  What brand (machine and network adapter), what kernel, and what network
  adapter?
 
 Machine is a Dell PowerEdge 6650, Kernel is:
   2.6.13-15.12-smp x86_64
 
  Could be a tcp offload issue, try disabling tso and tx csum offload on
  the fd (use ethtool)
 
 Is this the correct command?
ethtool -K eth0 tx off tso off
 
 
  Could also be a firewall issue timing out the connection... are the sd,
  dir, and fd on the same lan segment?
 
 The SD and DIR are the same machine.  The FD is on a different LAN
 segment.
 
  James
 
 -John
 
 
  
  
   -Original Message-
   From: John Lockard [mailto:jlock...@umich.edu]
   Sent: Friday, 17 April 2009 01:38
   To: bacula-users@lists.sourceforge.net
   Subject: [Bacula-users] Backup failing reliably repeatable
   
   Client and server at 2.4.4.  Both client and server are Linux 2.6
   
   Logs from client:
   
   15-Apr 16:53 tibor-dir JobId 2954: Start Backup JobId 2954,
  Job=Belobog-Data-
   Users.2009-04-15_16.32.07.04
   15-Apr 16:53 tibor-dir JobId 2954: Using Volume 100108L2 from
  'Scratch'
   pool.
   15-Apr 16:53 tibor-dir JobId 2954: Using Device NEO-LTO-1
   15-Apr 16:53 tibor-sd JobId 2954: 3307 Issuing autochanger unload
  slot 3,
   drive 0 command.
   15-Apr 16:53 tibor-sd JobId 2954: 3304 Issuing autochanger load slot
  21,
   drive 0 command.
   15-Apr 16:54 tibor-sd JobId 2954: 3305 Autochanger load slot 21,
  drive 0,
   status is OK.
   15-Apr 16:54 tibor-sd JobId 2954: Wrote label to prelabeled Volume
  100108L2
   on device NEO-LTO-1 (/dev/nst0)
   15-Apr 16:54 tibor-sd JobId 2954: Spooling data ...
   15-Apr 18:22 tibor-sd JobId 2954: User specified spool size reached.
   15-Apr 18:22 tibor-sd JobId 2954: Writing spooled data to Volume.
  Despooling
   27,941,786,126 bytes ...
   15-Apr 18:32 tibor-sd JobId 2954: Despooling elapsed time = 00:09:57,
  Transfer
   rate = 46.80 M bytes/second
   15-Apr 18:33 tibor-sd JobId 2954: Spooling data again ...
   15-Apr 19:29 tibor-sd JobId 2954: User specified spool size reached.
   15-Apr 19:29 tibor-sd JobId 2954: Writing spooled data to Volume.
  Despooling
   27,941,794,024 bytes ...
   15-Apr 19:40 tibor-sd JobId 2954: Despooling elapsed time = 00:11:24,
  Transfer
   rate = 40.85 M bytes/second
   15-Apr 19:41 tibor-sd JobId 2954: Spooling data again ...
   15-Apr 20:28 tibor-sd JobId 2954: User specified spool size reached.
   15-Apr 20:28 tibor-sd JobId 2954: Writing spooled data to Volume.
  Despooling
   27,941,795,119 bytes ...
   15-Apr 20:42 tibor-sd JobId 2954: Despooling elapsed time = 00:14:30,
  Transfer
   rate = 32.11 M bytes/second
   15-Apr 20:43 tibor-sd JobId 2954: Spooling data again ...
   15-Apr 21:27 tibor-sd JobId 2954: User specified spool size reached.
   15-Apr 21:27 tibor-sd JobId 2954: Writing spooled data to Volume.
  Despooling
   27,941,795,083 bytes ...
   15-Apr 21:44 tibor-sd JobId 2954: Despooling elapsed time = 00:16:34,
  Transfer
   rate = 28.11 M bytes/second
   15-Apr 21:45 tibor-sd JobId 2954: Spooling data again ...
   15-Apr 22:29 tibor-sd JobId 2954: User specified spool size reached.
   15-Apr 22:29 tibor-sd JobId 2954: Writing spooled data to Volume.
  Despooling
   27,941,795,185 bytes ...
   15-Apr 22:43 tibor-sd JobId 2954: Despooling elapsed time = 00:14:28,
  Transfer
   rate = 32.19 M bytes/second
   15-Apr 22:44 tibor-sd JobId 2954: Spooling data again ...
   15-Apr 23:29 tibor-sd JobId 2954: User specified spool size reached.
   15-Apr 23:29 tibor-sd JobId 2954: Writing spooled data to Volume.
  Despooling
   27,941,793,858 bytes ...
   15-Apr 23:42 tibor-sd JobId 2954: Despooling elapsed time = 00:12:54,
  Transfer
   rate = 36.10 M bytes/second
   15-Apr 23:43 tibor-sd JobId 2954: Spooling data again ...
   16-Apr 00:26 tibor-sd JobId 2954: User specified spool size reached.
   16-Apr 00:26 tibor-sd JobId 2954: Writing spooled data to Volume.
  Despooling
   27,941,794,859 bytes ...
   16-Apr 00:43 tibor-sd JobId 2954: Despooling elapsed time = 00:16:46,
  Transfer
   rate = 27.77 M bytes/second
   16-Apr 00:43 tibor-sd JobId 2954: Spooling data again ...
   16-Apr 02:07 belobog-fd JobId 2954: Fatal error: backup.c:1087 Network
  send
   error to SD. ERR=Connection reset by peer
   16-Apr 02:07 tibor-dir JobId 2954: Error: Bacula tibor-dir 2.4.4
  (28Dec08):
   16-Apr-2009 02:07:38
 Build OS:   x86_64-unknown-linux-gnu redhat Enterprise
  release
 JobId

Re: [Bacula-users] Backup failing reliably repeatable

2009-04-17 Thread John Lockard
On Fri, Apr 17, 2009 at 09:39:24AM +1000, James Harper wrote:
 Does belobog have the same network adapter and kernel as your other
 servers?

It has the same ethernet (Intel e1000) as other servers and similar
kernel, but is now a mostly legacy system.  A professor has ongoing
research which prevents me from upgrading.  Unfortunately, other systems
of that same class don't have nearly as much data to back up.

 What brand (machine and network adapter), what kernel, and what network
 adapter?

Machine is a Dell PowerEdge 6650, Kernel is:
  2.6.13-15.12-smp x86_64

 Could be a tcp offload issue, try disabling tso and tx csum offload on
 the fd (use ethtool)

Is this the correct command?
   ethtool -K eth0 tx off tso off


 Could also be a firewall issue timing out the connection... are the sd,
 dir, and fd on the same lan segment?

The SD and DIR are the same machine.  The FD is on a different LAN
segment.

 James

-John


 
 
  -Original Message-
  From: John Lockard [mailto:jlock...@umich.edu]
  Sent: Friday, 17 April 2009 01:38
  To: bacula-users@lists.sourceforge.net
  Subject: [Bacula-users] Backup failing reliably repeatable
  
  Client and server at 2.4.4.  Both client and server are Linux 2.6
  
  Logs from client:
  
  15-Apr 16:53 tibor-dir JobId 2954: Start Backup JobId 2954,
 Job=Belobog-Data-
  Users.2009-04-15_16.32.07.04
  15-Apr 16:53 tibor-dir JobId 2954: Using Volume 100108L2 from
 'Scratch'
  pool.
  15-Apr 16:53 tibor-dir JobId 2954: Using Device NEO-LTO-1
  15-Apr 16:53 tibor-sd JobId 2954: 3307 Issuing autochanger unload
 slot 3,
  drive 0 command.
  15-Apr 16:53 tibor-sd JobId 2954: 3304 Issuing autochanger load slot
 21,
  drive 0 command.
  15-Apr 16:54 tibor-sd JobId 2954: 3305 Autochanger load slot 21,
 drive 0,
  status is OK.
  15-Apr 16:54 tibor-sd JobId 2954: Wrote label to prelabeled Volume
 100108L2
  on device NEO-LTO-1 (/dev/nst0)
  15-Apr 16:54 tibor-sd JobId 2954: Spooling data ...
  15-Apr 18:22 tibor-sd JobId 2954: User specified spool size reached.
  15-Apr 18:22 tibor-sd JobId 2954: Writing spooled data to Volume.
 Despooling
  27,941,786,126 bytes ...
  15-Apr 18:32 tibor-sd JobId 2954: Despooling elapsed time = 00:09:57,
 Transfer
  rate = 46.80 M bytes/second
  15-Apr 18:33 tibor-sd JobId 2954: Spooling data again ...
  15-Apr 19:29 tibor-sd JobId 2954: User specified spool size reached.
  15-Apr 19:29 tibor-sd JobId 2954: Writing spooled data to Volume.
 Despooling
  27,941,794,024 bytes ...
  15-Apr 19:40 tibor-sd JobId 2954: Despooling elapsed time = 00:11:24,
 Transfer
  rate = 40.85 M bytes/second
  15-Apr 19:41 tibor-sd JobId 2954: Spooling data again ...
  15-Apr 20:28 tibor-sd JobId 2954: User specified spool size reached.
  15-Apr 20:28 tibor-sd JobId 2954: Writing spooled data to Volume.
 Despooling
  27,941,795,119 bytes ...
  15-Apr 20:42 tibor-sd JobId 2954: Despooling elapsed time = 00:14:30,
 Transfer
  rate = 32.11 M bytes/second
  15-Apr 20:43 tibor-sd JobId 2954: Spooling data again ...
  15-Apr 21:27 tibor-sd JobId 2954: User specified spool size reached.
  15-Apr 21:27 tibor-sd JobId 2954: Writing spooled data to Volume.
 Despooling
  27,941,795,083 bytes ...
  15-Apr 21:44 tibor-sd JobId 2954: Despooling elapsed time = 00:16:34,
 Transfer
  rate = 28.11 M bytes/second
  15-Apr 21:45 tibor-sd JobId 2954: Spooling data again ...
  15-Apr 22:29 tibor-sd JobId 2954: User specified spool size reached.
  15-Apr 22:29 tibor-sd JobId 2954: Writing spooled data to Volume.
 Despooling
  27,941,795,185 bytes ...
  15-Apr 22:43 tibor-sd JobId 2954: Despooling elapsed time = 00:14:28,
 Transfer
  rate = 32.19 M bytes/second
  15-Apr 22:44 tibor-sd JobId 2954: Spooling data again ...
  15-Apr 23:29 tibor-sd JobId 2954: User specified spool size reached.
  15-Apr 23:29 tibor-sd JobId 2954: Writing spooled data to Volume.
 Despooling
  27,941,793,858 bytes ...
  15-Apr 23:42 tibor-sd JobId 2954: Despooling elapsed time = 00:12:54,
 Transfer
  rate = 36.10 M bytes/second
  15-Apr 23:43 tibor-sd JobId 2954: Spooling data again ...
  16-Apr 00:26 tibor-sd JobId 2954: User specified spool size reached.
  16-Apr 00:26 tibor-sd JobId 2954: Writing spooled data to Volume.
 Despooling
  27,941,794,859 bytes ...
  16-Apr 00:43 tibor-sd JobId 2954: Despooling elapsed time = 00:16:46,
 Transfer
  rate = 27.77 M bytes/second
  16-Apr 00:43 tibor-sd JobId 2954: Spooling data again ...
  16-Apr 02:07 belobog-fd JobId 2954: Fatal error: backup.c:1087 Network
 send
  error to SD. ERR=Connection reset by peer
  16-Apr 02:07 tibor-dir JobId 2954: Error: Bacula tibor-dir 2.4.4
 (28Dec08):
  16-Apr-2009 02:07:38
Build OS:   x86_64-unknown-linux-gnu redhat Enterprise
 release
JobId:  2954
Job:Belobog-Data-Users.2009-04-15_16.32.07.04
Backup Level:   Full
Client: belobog 2.4.4 (28Dec08)
 x86_64-unknown-linux-
  gnu,suse,10.0
FileSet:belobog-data

[Bacula-users] Debug levels and output.

2009-04-17 Thread John Lockard
I've just switched over from 2.4.4 to 3.0.0 so my familiarity
with new features is close to null.

Is there a way I can (maybe just for a specific job) output
to a file *everything* which is happening with a backup job?

I'd like to run a job and get a file containing which files
were backed up, or more specifically, which file was being
backed up when the backup died.
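For what it's worth, two bconsole commands get close to this (a sketch; the client and job names are examples from earlier threads, and the debug level is an arbitrary choice):

```
* setdebug level=100 trace=1 client=belobog-fd   # write daemon debug output to a trace file on the client
* estimate job=Belobog-Data-Users listing        # list the files the job would back up
```

With trace=1 the debug output goes to a file next to the daemon's working directory rather than the console, so the last file named before a crash should be visible there.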

Thanks,
John

-- 
We are the people our parents warned us about
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---

--
Stay on top of everything new and different, both inside and 
around Java (TM) technology - register by April 22, and save
$200 on the JavaOne (SM) conference, June 2-5, 2009, San Francisco.
300 plus technical and hands-on sessions. Register today. 
Use priority code J9JMT32. http://p.sf.net/sfu/p
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Backup failing reliably repeatable

2009-04-16 Thread John Lockard
Client and server at 2.4.4.  Both client and server are Linux 2.6

Logs from client:

15-Apr 16:53 tibor-dir JobId 2954: Start Backup JobId 2954, 
Job=Belobog-Data-Users.2009-04-15_16.32.07.04
15-Apr 16:53 tibor-dir JobId 2954: Using Volume 100108L2 from 'Scratch' pool.
15-Apr 16:53 tibor-dir JobId 2954: Using Device NEO-LTO-1
15-Apr 16:53 tibor-sd JobId 2954: 3307 Issuing autochanger unload slot 3, 
drive 0 command.
15-Apr 16:53 tibor-sd JobId 2954: 3304 Issuing autochanger load slot 21, drive 
0 command.
15-Apr 16:54 tibor-sd JobId 2954: 3305 Autochanger load slot 21, drive 0, 
status is OK.
15-Apr 16:54 tibor-sd JobId 2954: Wrote label to prelabeled Volume 100108L2 
on device NEO-LTO-1 (/dev/nst0)
15-Apr 16:54 tibor-sd JobId 2954: Spooling data ...
15-Apr 18:22 tibor-sd JobId 2954: User specified spool size reached.
15-Apr 18:22 tibor-sd JobId 2954: Writing spooled data to Volume. Despooling 
27,941,786,126 bytes ...
15-Apr 18:32 tibor-sd JobId 2954: Despooling elapsed time = 00:09:57, Transfer 
rate = 46.80 M bytes/second
15-Apr 18:33 tibor-sd JobId 2954: Spooling data again ...
15-Apr 19:29 tibor-sd JobId 2954: User specified spool size reached.
15-Apr 19:29 tibor-sd JobId 2954: Writing spooled data to Volume. Despooling 
27,941,794,024 bytes ...
15-Apr 19:40 tibor-sd JobId 2954: Despooling elapsed time = 00:11:24, Transfer 
rate = 40.85 M bytes/second
15-Apr 19:41 tibor-sd JobId 2954: Spooling data again ...
15-Apr 20:28 tibor-sd JobId 2954: User specified spool size reached.
15-Apr 20:28 tibor-sd JobId 2954: Writing spooled data to Volume. Despooling 
27,941,795,119 bytes ...
15-Apr 20:42 tibor-sd JobId 2954: Despooling elapsed time = 00:14:30, Transfer 
rate = 32.11 M bytes/second
15-Apr 20:43 tibor-sd JobId 2954: Spooling data again ...
15-Apr 21:27 tibor-sd JobId 2954: User specified spool size reached.
15-Apr 21:27 tibor-sd JobId 2954: Writing spooled data to Volume. Despooling 
27,941,795,083 bytes ...
15-Apr 21:44 tibor-sd JobId 2954: Despooling elapsed time = 00:16:34, Transfer 
rate = 28.11 M bytes/second
15-Apr 21:45 tibor-sd JobId 2954: Spooling data again ...
15-Apr 22:29 tibor-sd JobId 2954: User specified spool size reached.
15-Apr 22:29 tibor-sd JobId 2954: Writing spooled data to Volume. Despooling 
27,941,795,185 bytes ...
15-Apr 22:43 tibor-sd JobId 2954: Despooling elapsed time = 00:14:28, Transfer 
rate = 32.19 M bytes/second
15-Apr 22:44 tibor-sd JobId 2954: Spooling data again ...
15-Apr 23:29 tibor-sd JobId 2954: User specified spool size reached.
15-Apr 23:29 tibor-sd JobId 2954: Writing spooled data to Volume. Despooling 
27,941,793,858 bytes ...
15-Apr 23:42 tibor-sd JobId 2954: Despooling elapsed time = 00:12:54, Transfer 
rate = 36.10 M bytes/second
15-Apr 23:43 tibor-sd JobId 2954: Spooling data again ...
16-Apr 00:26 tibor-sd JobId 2954: User specified spool size reached.
16-Apr 00:26 tibor-sd JobId 2954: Writing spooled data to Volume. Despooling 
27,941,794,859 bytes ...
16-Apr 00:43 tibor-sd JobId 2954: Despooling elapsed time = 00:16:46, Transfer 
rate = 27.77 M bytes/second
16-Apr 00:43 tibor-sd JobId 2954: Spooling data again ...
16-Apr 02:07 belobog-fd JobId 2954: Fatal error: backup.c:1087 Network send 
error to SD. ERR=Connection reset by peer
16-Apr 02:07 tibor-dir JobId 2954: Error: Bacula tibor-dir 2.4.4 (28Dec08): 
16-Apr-2009 02:07:38
  Build OS:   x86_64-unknown-linux-gnu redhat Enterprise release
  JobId:  2954
  Job:Belobog-Data-Users.2009-04-15_16.32.07.04
  Backup Level:   Full
  Client: belobog 2.4.4 (28Dec08) 
x86_64-unknown-linux-gnu,suse,10.0
  FileSet:belobog-data-users 2009-04-11 05:15:01
  Pool:   Radev-Full (From Job FullPool override)
  Storage:Overland_Neo_4000 (From Pool resource)
  Scheduled time: 15-Apr-2009 16:32:02
  Start time: 15-Apr-2009 16:53:13
  End time:   16-Apr-2009 02:07:38
  Elapsed time:   9 hours 14 mins 25 secs
  Priority:   30
  FD Files Written:   7,029,945
  SD Files Written:   0
  FD Bytes Written:   217,674,494,521 (217.6 GB)
  SD Bytes Written:   0 (0 B)
  Rate:   6543.6 KB/s
  Software Compression:   None
  VSS:no
  Storage Encryption: no
  Volume name(s): 100108L2
  Volume Session Id:  50
  Volume Session Time:1239651099
  Last Volume Bytes:  194,987,520,000 (194.9 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Error
  SD termination status:  Error
  Termination:*** Backup Error ***


On the client side I see this in the firewall log:
Apr 16 02:07:38 belobog kernel: SFW2-OUT-ERROR IN= OUT=eth0 SRC=xxx.xxx.xxx.32 
DST=xxx.xxx.yyy.25 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=59630 DF PROTO=TCP 
SPT=11310 DPT=9103 WINDOW=1460 RES=0x00 ACK RST URGP=0 OPT 
(0101080A3D08C91922123CB5)

(SRC and 

Re: [Bacula-users] Howto recover from a job being rerun around a summer time change

2009-04-06 Thread John Lockard
The time jumps at 2am, either forward or backward depending on
whether you're switching to or from DST.  Most admins I know
avoid the 1:00am to 3:00am window entirely because of the
Daylight Saving Time switches.

If you're going to go UTC, then you should go UTC all the way
and not worry about local time.  That's what most DNS and
DHCP servers do.  They happily splurk out logs, not worrying
about local time.  When there's a problem, it's up to
the lucky human to compute the local time.
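As a concrete sketch of that "lucky human" step, GNU date can convert a UTC log timestamp into a chosen local zone (the timestamp and zone below are made up for illustration):

```shell
#!/bin/sh
# A UTC timestamp as it might appear in a server log.
utc_ts="2009-04-06 18:41:52"

# GNU date: parse the string as UTC, render it in a chosen local zone.
# America/Detroit was on daylight saving time on this date, i.e. UTC-4.
TZ="America/Detroit" date -d "${utc_ts} UTC" '+%Y-%m-%d %H:%M:%S'
# prints: 2009-04-06 14:41:52
```

The inverse (local to UTC) is `date -u -d "..."`, with the usual caveat that local times during a backward DST shift are ambiguous, as discussed above.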

-John

On Mon, Apr 06, 2009 at 06:41:52PM +0100, Martin Simmons wrote:
  On Mon, 06 Apr 2009 17:34:09 +0200, Foo  said:
  
  The best solution would be for Bacula to translate time to UTC internally  
  for scheduling, everything external such as logfiles, scheduling stanzas  
  in config etc. would remain in the user's locale.
 
 That doesn't help, because saying 2:05am local time in the config file is
 still ambiguous if the local time moves backwards by 1 hour at 3:00am.  To
 resolve this, Bacula would have to use UTC in the config file as well.
 Alternatively, it would need some non-trivial code to deal with the
 duplicate/missing local times.

-- 
I know you believe you understand what you think I said, 
 but I am not sure you realize that what you heard is not 
 what I meant. - Richard Nixon
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---

--
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
Download a free trial of Rational Requirements Composer Now!
http://p.sf.net/sfu/www-ibm-com
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Compression for certain levels of backup...

2009-04-06 Thread John Lockard
I can't see a way in 2.4.x, but maybe it's present in the
3.0.x code...  I would like to compress my Incremental backups,
but not my Differential backups or Full backups.

I keep my incremental backups on disk.  They never transition
to Tape.  My Differentials run weekly and I keep a week and a
half of Differentials before they rotate to tape.  Fulls are
run Monthly and rotate to tape 35 days after creation.

So I don't want my Fulls and Differentials compressed,
because the tape drive will eventually take care of that with
hardware compression.  But since the Incrementals never hit
tape, I'd prefer to have them compressed.

Is there a way of doing this?
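One workaround sometimes suggested (a sketch only; the resource names and paths below are hypothetical): since compression is a FileSet Options directive rather than a per-level setting, give the disk-only incrementals their own Job with a compressed FileSet, and leave the tape-bound Full/Differential Job on a plain one. Note that Bacula treats a FileSet change as grounds to upgrade a job to Full, which is why the levels must run as separate Job resources rather than one Job switching FileSets.

# Hypothetical: compressed FileSet for the disk-only incremental job.
FileSet {
  Name = "data-gzip"
  Include {
    Options {
      signature = MD5
      compression = GZIP
    }
    File = /data
  }
}

# Plain FileSet for Full/Differential jobs that end up on tape,
# where the drive's hardware compression does the work.
FileSet {
  Name = "data-plain"
  Include {
    Options {
      signature = MD5
    }
    File = /data
  }
}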

-John

-- 
Photography can never grow up if it imitates some other medium.
 It has to walk alone; it has to be itself. - Berenice Abbott
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



[Bacula-users] Fileset available in Message Resource?

2009-04-03 Thread John Lockard
Hi All,

Looking through the manual in the Message Resource section
I don't see 'FileSet' as one of the options.  (Version 2.4.4).
Is this available but undocumented or should I be putting in
a software change request?

Reason I ask, is that an email telling me that a job for
'Server1' finished isn't nearly as informative to me as
an email saying that 'Server1:partition2' finished, especially
when I may also have jobs for 'Server1:partition3' and
'Server1:partition4'.

Thanks,
John

-- 
Photography can never grow up if it imitates some other medium.
 It has to walk alone; it has to be itself. - Berenice Abbott
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] Fileset available in Message Resource?

2009-04-03 Thread John Lockard
Nevermind... I'm a moron.

On Fri, Apr 03, 2009 at 02:54:01PM -0400, John Lockard wrote:
 Hi All,
 
 Looking through the manual in the Message Resource section
 I don't see 'FileSet' as one of the options.  (Version 2.4.4).
 Is this available but undocumented or should I be putting in
 a software change request?
 
 Reason I ask, is that an email telling me that a job for
 'Server1' finished isn't nearly as informative to me as
 an email saying that 'Server1:partition2' finished, especially
 when I may also have jobs for 'Server1:partition3' and
 'Server1:partition4'.
 
 Thanks,
 John

-- 
Emergency water landing, 600 miles an hour:
   blank faces, calm as Hindu cows. - Tyler Durden
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



[Bacula-users] Updates to bacula_mail_summary.sh

2009-03-26 Thread John Lockard
Attached, please find updates to bacula_mail_summary.sh which
was in the examples/reports directory in the source distribution.
I run this script once a week, after the log has been rotated
by my system's logrotate script.

I've tweaked the display formatting quite a bit.  Rather than
displaying the full job level I've done:
 F   = Full
 D   = Differential
 I   = Incremental
 I2F = Full (upgraded from Incremental)
 D2F = Full (upgraded from Differential)

For completion status I've shortened these as well to:
 OK= OK
 OK-Verify = Verify OK
 OK-Warn   = Ok -- with warnings
 M-OK  = Migration OK
 M-Error   = Migration Error

I've changed the Start and End times to include the day
and month, to cover jobs that run long (more than 24 hours).

Sample output (excerpts):

Client         Status    Type  StartTime        EndTime          Files      Bytes
homer          OK-Warn   I     01-Mar-22:00:03  01-Mar-22:20:28  1,049      637,371,688 (637.3 MB)
wiggum         OK        I     02-Mar-02:01:03  02-Mar-02:02:11  26         73,035 (73.03 KB)
harvbannister  OK-Warn   F     02-Mar-02:30:02  02-Mar-02:40:16  298,076    8,937,733,774 (8.937 GB)
dataless       OK        F     02-Mar-03:00:00  02-Mar-06:13:48  212,695    129,185,742,611 (129.1 GB)
bart           OK        I2F   06-Mar-20:00:02  06-Mar-22:59:04  1,484,236  91,154,841,692 (91.15 GB)
homer          OK-Warn   I2F   06-Mar-22:59:06  07-Mar-07:01:35  2,147,092  765,068,201,965 (765.0 GB)
mrlombardo     OK        D2F   12-Mar-21:00:00  12-Mar-23:48:29  1,277      60,076,971,159 (60.07 GB)
tibor          Canceled  F     12-Mar-23:50:31  13-Mar-10:32:35  0          0 (0 B)
tibor          M-OK      F     17-Mar-17:19:42  17-Mar-17:21:09  58,134     2,477,312,248 (2.477 GB)
tibor          M-Error   F     17-Mar-17:23:01  17-Mar-17:23:01  0          0 (0 B)
bart           OK        D     17-Mar-20:00:02  17-Mar-20:10:35  756        535,593,805 (535.5 MB)
sherri         Error     F     19-Mar-12:25:59  20-Mar-22:24:01  147,641    1,390,032,423,344 (1.390 TB)
centra         OK-Warn   D     23-Mar-17:00:03  23-Mar-17:00:21  0          0 (0 B)
adilhoxha      OK        I     23-Mar-18:30:56  23-Mar-18:36:23  65         3,456,362,879 (3.456 GB)


I hope someone finds this to be useful.

-John


-- 
Four Horsemen Of The Apocalypse Unveil New Alert System
 - Subject of recent SPAM message
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---


bacula_mail_summary.sh
Description: Bourne shell script


[Bacula-users] Updates to bacula_mail_summary.sh

2009-03-26 Thread John Lockard
Attached, please find updates to bacula_mail_summary.sh which
was in the examples/reports directory in the source distribution.
I run this script once a week, after the log has been rotated
by my system's logrotate script.

I've tweaked the display formatting quite a bit.  Rather than
displaying the full job level I've done:
 F   = Full
 D   = Differential
 I   = Incremental
 I2F = Full (upgraded from Incremental)
 D2F = Full (upgraded from Differential)

For completion status I've shortened these as well to:
 OK= OK
 OK-Verify = Verify OK
 OK-Warn   = Ok -- with warnings
 M-OK  = Migration OK
 M-Error   = Migration Error

I've changed the Start and End times to include the day
and month, to cover jobs that run long (more than 24 hours).

Sample output (excerpts):

Client         Status    Type  StartTime        EndTime          Files      Bytes
homer          OK-Warn   I     01-Mar-22:00:03  01-Mar-22:20:28  1,049      637,371,688 (637.3 MB)
wiggum         OK        I     02-Mar-02:01:03  02-Mar-02:02:11  26         73,035 (73.03 KB)
harvbannister  OK-Warn   F     02-Mar-02:30:02  02-Mar-02:40:16  298,076    8,937,733,774 (8.937 GB)
dataless       OK        F     02-Mar-03:00:00  02-Mar-06:13:48  212,695    129,185,742,611 (129.1 GB)
bart           OK        I2F   06-Mar-20:00:02  06-Mar-22:59:04  1,484,236  91,154,841,692 (91.15 GB)
homer          OK-Warn   I2F   06-Mar-22:59:06  07-Mar-07:01:35  2,147,092  765,068,201,965 (765.0 GB)
mrlombardo     OK        D2F   12-Mar-21:00:00  12-Mar-23:48:29  1,277      60,076,971,159 (60.07 GB)
tibor          Canceled  F     12-Mar-23:50:31  13-Mar-10:32:35  0          0 (0 B)
tibor          M-OK      F     17-Mar-17:19:42  17-Mar-17:21:09  58,134     2,477,312,248 (2.477 GB)
tibor          M-Error   F     17-Mar-17:23:01  17-Mar-17:23:01  0          0 (0 B)
bart           OK        D     17-Mar-20:00:02  17-Mar-20:10:35  756        535,593,805 (535.5 MB)
sherri         Error     F     19-Mar-12:25:59  20-Mar-22:24:01  147,641    1,390,032,423,344 (1.390 TB)
centra         OK-Warn   D     23-Mar-17:00:03  23-Mar-17:00:21  0          0 (0 B)
adilhoxha      OK        I     23-Mar-18:30:56  23-Mar-18:36:23  65         3,456,362,879 (3.456 GB)


I hope someone finds this to be useful.

-John


- start bacula_mail_summary.sh -

#!/bin/sh

# $Id: bacula_mail_summary.sh,v 1.5 2009/03/26 19:04:11 root Exp $
# $Locker:  $

# This script is to create a summary of the job notifications from bacula
# and send it to people who care.
#
# For it to work, you need to have all Bacula job report
# logging to a file, edit LOGFILE to match your setup.
# This should be run after all backup jobs have finished.
# Tested with bacula-2.4.4

# Some improvements by: John Lockard jlock...@umich.edu
#   (University of Michigan - School of Information)
#   Changed Date format to better sort
#   Reformatted Levels to fit better
#   Caught more job completion types
#   Added From addressing to the outgoing email
#   Removed log rotation.  I'll leave that up to a system utility (logrotate)
#   Added partial date to Start and End times to cover long running jobs
# Some improvements by: Andrey Yakovlev free...@kiev.farlep.net  (ISP Farlep)
# Contributed by Andrew J. Millar and...@alphajuliet.org.uk
# Patched by Andrey A. Yakovlev free...@kiev.farlep.net

# Use awk to create the report, pass to column to be
# formatted nicely, then on to mail to be sent to
# people who care.

LOGFILE='/var/log/bacula/standard'

EMAIL_TO=backup-adm...@example.com
EMAIL_FROM=bacula-ser...@example.com
#EMAIL_FROM=${EMAIL_TO}
EMAIL_SUBJECT="Bacula Job Summary: `date +'%F - %a'`"

#-

awk -F": " 'BEGIN {
print "Client Status Type StartTime EndTime Files Bytes"
}

/director-dir: New file:/ {
print $3
}

/director-dir: File:/ {
print $3
}

/Client/ {
CLIENT=$2; sub(/"/, "", CLIENT) ; sub(/".*$/, "", CLIENT)
}
/Backup Level/ {
TYPE=$2 ;
sub(/,.*$/, "", TYPE)
sub(/Full \(upgraded from Incremental\)/, "I2F", TYPE);
sub(/Full \(upgraded from Differential\)/, "D2F", TYPE);
sub(/Full/, "F", TYPE);
sub(/Incremental/, "I", TYPE);
sub(/Differential/, "D", TYPE);
}
/Start time/ {
STARTTIME=$2;
sub(/-[0-9]* /, "-", STARTTIME)
sub(/^ */, "", STARTTIME)
gsub(/ /, "-", STARTTIME)
}
/End time/ {
ENDTIME=$2;
sub(/-[0-9]* /, "-", ENDTIME)
sub(/^ */, "", ENDTIME)
gsub(/ /, "-", ENDTIME)
}
/Files Examined/ {
SDFILES=$2
SDBYTES=0
}
/SD Files Written/ {
SDFILES=$2
}
/SD Bytes Written/ {
SDBYTES=$2
}
/Termination/ {
TERMINATION=$2 ;
sub

Re: [Bacula-users] Question about Priorities and Maximum Concurrent Jobs

2009-03-23 Thread John Lockard
On Sat, Mar 21, 2009 at 05:10:09AM -0700, Kevin Keane wrote:
 John Lockard wrote:
  The minimum setting I have on Max Concurrent Jobs is on the
  Tape Library and that's set to 3.  It appears that priority
  trumps all, unless the priority is the same or better.
 
  So, if I have one job that has priority of, say, 10, then
  any job running on any other tape drive or virtual library
  will sit and wait for that higher priority job to finish
  before they'll begin.
 
  This also makes priority mostly useless for me as well.  I
  guess it would take care of situations where I'd want one
  job to finish before a secondary or tertiary job starts, but
  then I run the risk of another job postponing the 2nd and
  3rd job, which wouldn't be my intention
 I notice in the bacula sample configuration that priority is used to 
 make sure the catalog backup always runs last. In that case, the 
 behavior you describe (any other job postpones the lower-priority one, 
 regardless of storage or pool) is exactly the desired behavior.

But priority also postpones any jobs of higher priority.
If a job of priority 20 is currently running and you start
several other jobs with priorities of 10, 20 and 30, then
the only jobs which will run concurrently are the jobs
of priority 20.  Only once all the priority 20 jobs are
complete will jobs of other priorities be examined and run.
In my example, after all of the P-20 jobs run, the P-10 jobs
will run, followed by the P-30 jobs.

It seems to me that priority is running as described in the
manual, but it runs counter to what you would logically expect.
If I'm running a P-20 job, and storage devices are available,
I would expect that jobs of P-20 would also be able to run,
rather than having to wait till *all* of the queued P-20 jobs
are finished.
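For what it's worth, I believe Bacula 3.0 added an "Allow Mixed Priority" Job directive aimed at exactly this behavior. A hedged sketch (the job name and JobDefs reference are hypothetical, and check the 3.0 manual before relying on it):

# Hypothetical Job resource; only the priority-related lines matter here.
Job {
  Name = "Big-Slow-Server"
  JobDefs = "DefaultJob"
  Priority = 20
  # Bacula 3.0+: let this job start alongside already-running jobs of a
  # different priority instead of queueing behind them.
  Allow Mixed Priority = yes
}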

-John

-- 
Brain: Pinky, Are you pondering what I'm pondering?
Pinky: I think so Brain, but if we get Sam spayed,
   we won't be able to have puppies, will we?
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---

--
Apps built with the Adobe(R) Flex(R) framework and Flex Builder(TM) are
powering Web 2.0 with engaging, cross-platform capabilities. Quickly and
easily build your RIAs with Flex Builder, the Eclipse(TM)based development
software that enables intelligent coding and step-through debugging.
Download the free 60 day trial. http://p.sf.net/sfu/www-adobe-com
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Question about Priorities and Maximum Concurrent Jobs

2009-03-20 Thread John Lockard
Hi All,

I have a mix of disk and tape backups.  To disk I allow up to
20 jobs run concurrently.  On my tape library I have 3 tape
drives, so only allow a max of 3 jobs to run concurrently.

I run Full backups once a month, Differentials once a week
and incrementals most days of the week.  I would prefer to
give preference to a Full backup over a Diff or Incr and I'd
like to give preference to a Diff over an Incr.

So...

I set:
  Full backups to have a priority of 30
  Differential backups to have a priority of 40
  Incremental backups to have a priority of 50

I figured that since I had concurrency setup with my Max
Concurrent Jobs setting that this would happen...  If there
was a fight for a medium, with no other medium currently
free, that a Full would have preference to the medium over a
Differential which would have preference over an Incremental.

What I'm seeing is that if a Full is running on a certain
type of storage, only other Fulls will run on that storage.
If a full is running on one type of storage, other jobs
(Diffs and Incrs) will run on the other types of storage.
So, if I have a Full running to disk storage #1, then an Incr
will run to disk storage #2, but not #1.  For disk storage I
mostly understand this.

This really becomes a problem for tape storage.  I would like
to be able to run backups on the other 2 tape drives in my
library when a Full backup is running.  I have several large,
slow servers which take upwards of 36 hours to backup and
during this time I can't backup anything of a lower Priority
than that system which I'm currently backing up.

Do I have to entirely can (forget) the notion of job Priorities
except in the cases where I absolutely want a certain job to
have exclusive rights to a backup medium?

Thanks, in advance for all the help,
-John

-- 
We have Enough Youth, How About A Fountain Of Smart?
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] Question about Priorities and Maximum Concurrent Jobs

2009-03-20 Thread John Lockard
I stand somewhat corrected.  I was wrong in stating
that the priority of a job on a certain medium blocked
only jobs on that medium.  It actually blocks all other
lower priority jobs from running, whether or not the
lower priority job is on the same medium.

-John

On Fri, Mar 20, 2009 at 10:04:48AM -0400, John Lockard wrote:
 Hi All,
 
 I have a mix of disk and tape backups.  To disk I allow up to
 20 jobs run concurrently.  On my tape library I have 3 tape
 drives, so only allow a max of 3 jobs to run concurrently.
 
 I run Full backups once a month, Differentials once a week
 and incrementals most days of the week.  I would prefer to
 give preference to a Full backup over a Diff or Incr and I'd
 like to give preference to a Diff over an Incr.
 
 So...
 
 I set:
   Full backups to have a priority of 30
   Differential backups to have a priority of 40
   Incremental backups to have a priority of 50
 
 I figured that since I had concurrency setup with my Max
 Concurrent Jobs setting that this would happen...  If there
 was a fight for a medium, with no other medium currently
 free, that a Full would have preference to the medium over a
 Differential which would have preference over an Incremental.
 
 What I'm seeing is that if a Full is running on a certain
 type of storage, only other Fulls will run on that storage.
 If a full is running on one type of storage, other jobs
 (Diffs and Incrs) will run on the other types of storage.
 So, if I have a Full running to disk storage #1, then an Incr
 will run to disk storage #2, but not #1.  For disk storage I
 mostly understand this.
 
 This really becomes a problem for tape storage.  I would like
 to be able to run backups on the other 2 tape drives in my
 library when a Full backup is running.  I have several large,
 slow servers which take upwards of 36 hours to backup and
 during this time I can't backup anything of a lower Priority
 than that system which I'm currently backing up.
 
 Do I have to entirely can (forget) the notion of job Priorities
 except in the cases where I absolutely want a certain job to
 have exclusive rights to a backup medium?
 
 Thanks, in advance for all the help,
 -John
 
 -- 
 We have Enough Youth, How About A Fountain Of Smart?
 ---
  John M. Lockard |  U of Michigan - School of Information
  Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
   jlock...@umich.edu |Ann Arbor, MI  48109-2112
  www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
 ---
 
 

-- 
Time and time and time again, you wake up screaming
 and you wake up dead. - RevCo
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] Question about Priorities and Maximum Concurrent Jobs

2009-03-20 Thread John Lockard
The minimum setting I have on Max Concurrent Jobs is on the
Tape Library and that's set to 3.  It appears that priority
trumps all, unless the priority is the same or better.

So, if I have one job that has priority of, say, 10, then
any job running on any other tape drive or virtual library
will sit and wait for that higher priority job to finish
before they'll begin.

This makes priority mostly useless for me as well.  I
guess it would take care of situations where I'd want one
job to finish before a secondary or tertiary job starts, but
then I run the risk of another job postponing the 2nd and
3rd job, which wouldn't be my intention.

-John

On Fri, Mar 20, 2009 at 02:16:03PM -0400, John Drescher wrote:
 I find this makes priorities not that useful for me.
 
 Have you thought of using concurrency and a small (2 to 5GB) spool
 file and scrap the priorities. I am unsure why you only want 1 job per
 tape drive. Are your drives really slow such that 1 client backup will
 be faster than the tape can handle?
 
 John
 
 

-- 
ACTION: None of the violence you are about to see was simulated.
 People were actually injured for your entertainment.
 - SciFi program intro
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] Best way to backup simultaneously

2009-03-20 Thread John Lockard
On Fri, Mar 20, 2009 at 04:11:55PM -0400, Jason Dixon wrote:
 On Fri, Mar 20, 2009 at 03:46:49PM -0400, Jason Dixon wrote:
  
  Just to be certain, I kicked off a few OS jobs just prior to the
  transaction log backup.  I also changed the Storage directive to use
  Maximum Concurrent Jobs = 1 for FileStorage.  This forces only one OS
  job at a time.
  
  I would expect the DatabaseArchives_crank-va-3 job (11242) to run before
  the queued OS jobs (11240, 11241) but that isn't the case.  I don't know
  why this reports the other jobs as a higher priority.  And remember that
  these are using *different* storage devices.  The OS jobs use
  FileStorage, the transaction logs backup to tape (SDX-700C).
  
  
  Running Jobs:
   JobId Level   Name   Status
  ==
   11239 Increme  Unix_crank-va-4.2009-03-20_15.39.55 is running
   11240 Increme  Unix_puffer-va-3.2009-03-20_15.39.56 is waiting on max
  Storage jobs
   11241 Increme  Unix_puffer-va-4.2009-03-20_15.39.57 is waiting on max
  Storage jobs
   11242 FullDatabaseArchives_crank-va-3.2009-03-20_15.40.59 is
  waiting for higher priority jobs to finish
  
 
 Ok, it looks like these ran correctly after all.  I'm a bit perplexed
 why the Director reports 11242 as being lower priority, but at least it
 worked as designed.  Extracted from llist jobs:

From the run-times, the job order was 11239, 11242, 11240, 11241.
This makes sense: 11242 was simply listed last because it was
waiting for 11239 to finish, hence the "waiting for higher
priority jobs" message.

 
jobid: 11,239
  job: Unix_crank-va-4.2009-03-20_15.39.55
schedtime: 2009-03-20 15:39:45
starttime: 2009-03-20 15:40:02
  endtime: 2009-03-20 15:40:20
  realendtime: 2009-03-20 15:40:20
 
jobid: 11,240
  job: Unix_puffer-va-3.2009-03-20_15.39.56
schedtime: 2009-03-20 15:39:48
starttime: 2009-03-20 15:40:28
  endtime: 2009-03-20 15:40:39
  realendtime: 2009-03-20 15:40:39
 
jobid: 11,241
  job: Unix_puffer-va-4.2009-03-20_15.39.57
schedtime: 2009-03-20 15:39:53
starttime: 2009-03-20 15:40:40
  endtime: 2009-03-20 15:40:51
  realendtime: 2009-03-20 15:40:51
 
jobid: 11,242
  job: DatabaseArchives_crank-va-3.2009-03-20_15.40.59
schedtime: 2009-03-20 15:40:02
starttime: 2009-03-20 15:40:21
  endtime: 2009-03-20 15:40:28
  realendtime: 2009-03-20 15:40:28

-- 
Brain: Pinky, Are you pondering what I'm pondering?
Pinky: Wuh, I think so, Brain, but if we didn't have
   ears, we'd look like weasels.
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---



Re: [Bacula-users] How to force a full backup?

2009-03-05 Thread John Lockard
As a side note, I'm pretty sure you could shorten this definition to:

Schedule {
   Name = Schedule-apache
   Run = Level=Full Storage=Disk3-apache on 1 at 19:05
   Run = Level=Differential Storage=Disk3-apache on 8 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 2-7,9-15 at 19:05
   Run = Level=Full Storage=Disk2-apache on 16 at 19:05
   Run = Level=Differential Storage=Disk2-apache on 23 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 17-22,24-31 at 19:05
 }

(Note the original switches from Disk3-apache to Disk2-apache on the
16th, so the two halves of the month need separate Run lines.)
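
On the original question of retrying a failed Full automatically: one
directive worth checking -- a suggestion, not something tested here --
is the Job resource's Max Full Interval, which upgrades a scheduled
Incremental or Differential to a Full when the last successful Full is
older than the given interval.  A hedged sketch with illustrative values:

```
# Hedged sketch: the directive is real, the surrounding values are
# examples only (Client, FileSet, Schedule, etc. omitted).
Job {
  Name = "apache-backup"
  # If the last good Full is older than ~16 days, the next scheduled
  # Incremental/Differential is upgraded to a Full, so a Full that
  # failed on the 1st would be retried as a Full on the 2nd.
  Max Full Interval = 16 days
}
```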


On Tue, Mar 03, 2009 at 01:15:18AM -0800, Kevin Keane wrote:
 I have a schedule that dictates a full backup on the 1st and 16th of the 
 month, differentials on the 8th and 23rd, and incrementals the remaining 
 days.
 
 Yesterday, the full backup for one of my clients failed due to a lost 
 network connection. I notice that bacula now blindly does an incremental 
 backup anyway. How can I get bacula to automatically retry a full backup 
 on the 2nd of the month if it failed on the 1st?
 
 Thanks!
 
 Here is my schedule resource:
 
 Schedule {
   Name = Schedule-apache
   Run = Level=Full Storage=Disk3-apache on 1 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 2 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 3 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 4 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 5 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 6 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 7 at 19:05
   Run = Level=Differential Storage=Disk3-apache on 8 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 9 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 10 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 11 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 12 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 13 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 14 at 19:05
   Run = Level=Incremental Storage=Disk3-apache on 15 at 19:05
   Run = Level=Full Storage=Disk2-apache on 16 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 17 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 18 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 19 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 20 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 21 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 22 at 19:05
   Run = Level=Differential Storage=Disk2-apache on 23 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 24 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 25 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 26 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 27 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 28 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 29 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 30 at 19:05
   Run = Level=Incremental Storage=Disk2-apache on 31 at 19:05
 }
 
 -- 
 Kevin Keane
 Owner
 The NetTech
 Find the Uncommon: Expert Solutions for a Network You Never Have to Think 
 About
 
 Office: 866-642-7116
 http://www.4nettech.com
 
 
 
 
 

-- 
If you take out the killings, Washington actually
 has a very, very low crime rate. - Marion Barry
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  jlock...@umich.edu |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---


Re: [Bacula-users] REPLACE BSMTP with other mail program

2008-10-21 Thread John Lockard
If your problem is authentication or encryption, then I would
suggest you check out msmtp (http://msmtp.sourceforge.net/).

This SMTP client supports SSL/TLS for encrypted transport, and
GSSAPI, Digest-MD5, and several other authentication methods.
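
A minimal msmtp setup for this use case might look like the following;
the host, account, and file paths are placeholders, not a tested
configuration:

```
# ~/.msmtprc -- placeholder values, adjust for your ISP's relay
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt

account        isp
host           smtp.example-isp.com
port           587
user           pete@example-isp.com
passwordeval   "cat /etc/bacula/.smtp-pass"
from           bacula@example-isp.com

account default : isp
```

With that in place, the usual swap is to point the Mail Command in
Bacula's Messages resource at msmtp instead of bsmtp, e.g.
Mail Command = "/usr/bin/msmtp %r" -- again an assumption; verify that
the message headers Bacula generates suit your setup.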

-John

On Tue, Oct 21, 2008 at 07:21:07AM +0200, Peter Herrington wrote:
 Hello
 
 I keep running into situations where bsmtp can't send me emails, for
 various reasons. First it was an invalid recipient. As soon as I put in
 a valid email address, it then said invalid sender. Now I just saw in
 the mail queue that my ISP's mail server refused the connection.
 
 My ClarkConnect server's mail component, Postfix, is configured to
 relay outbound mail to my ISP's mail server, because that was the only
 way I could get email from my LAN to the intended recipients on the
 Internet.
 
 My thinking is: if I can just put another mail program in the email
 settings, one that authenticates over SMTP, my problem will be solved,
 unless of course bsmtp can do that.
 
 Any pointers will be great
 
 Rgds/Pete

-- 
Maybe, just once, someone will call me 'sir' without
 adding, 'you're making a scene.' - Homer Simpson
---
 John M. Lockard |  U of Michigan - School of Information
 Unix and Security Admin |  1214 SI North - 1075 Beal Ave.
  [EMAIL PROTECTED] |Ann Arbor, MI  48109-2112
 www.umich.edu/~jlockard | 734-615-8776 | 734-647-8045 FAX
---
