Re: [Bacula-users] bacula splitting big jobs in to two

2015-02-10 Thread Blake Dunlap
Umm... just a check, but if it takes this long to back up, won't it
take just as long, if not longer, to restore?

I don't really see how this is a workable situation; perhaps you're
trying to solve the wrong problem?

-Blake

On Mon, Feb 9, 2015 at 2:01 PM, Bryn Hughes li...@nashira.ca wrote:
 I'm in a somewhat similar boat: I don't have 70TB, but I do have 21.5TB
 and LTO3 equipment (max 80 MB/sec).  Even if it were possible to keep a tape
 drive streaming constantly, without ever having to change or unload tapes,
 it would still take me many days to finish.

 The easiest way to start splitting things up is at the root filesystem
 level for your data directory.  I assume you don't have 70TB of files
 all just thrown together in one big directory; there is probably some
 sort of organizational structure?

 In my case we generate a new directory at the root level each year and
 then put the year's work into that.  I create a separate fileset for
 each individual year, and then a separate job for each year.  Using
 JobDefs and a common schedule minimizes the configuration work; you end
 up with little more than a Job and a FileSet, each with only a
 few lines in your config file.
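
 A minimal sketch of that layout (resource names, paths and schedule are
 illustrative, not from my actual config):

 JobDefs {
   Name = "ArchiveDefaults"
   Type = Backup
   Level = Full
   Client = fileserver-fd
   Schedule = "WeeklyCycle"
   Storage = Tape
   Pool = Default
   Messages = Standard
 }

 FileSet {
   Name = "Archive-2014"
   Include {
     Options { signature = MD5 }
     File = /data/2014
   }
 }

 Job {
   Name = "Archive-2014"
   JobDefs = "ArchiveDefaults"
   FileSet = "Archive-2014"
 }

 Adding next year's data is then just one new FileSet and one new Job
 pointing at /data/2015.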

 Look for something like this in how your data is laid out.  Just
 remember to create additional jobs as new top-level directories are added -
 again, in my case we're only doing this once a year.  I have the root
 directory of the file shares locked so that new folders can't be created by
 anyone but the admins; this lets me ensure the correct backup config is
 added at the same time.

 Bryn


 On 2015-02-02 06:23 AM, Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC] wrote:
 Andy,

 I have to back up about 70TB in one job.

 Uthra

 -Original Message-
 From: akent04 [mailto:bacula-fo...@backupcentral.com]
 Sent: Friday, January 30, 2015 9:50 AM
 To: bacula-users@lists.sourceforge.net
 Subject: [Bacula-users] bacula splitting big jobs in to two

 I run bacula 5.2.12 on a RHEL server which is attached to a tape library with
 two LTO5 tape drives. Since the data on one of my servers has grown large,
 the backup takes 10-12 days to complete. I would like to split this job
 into two jobs. Has anybody done this kind of set-up? I need some guidance on
 how to go about it.


 Mr. Uthra: at a glance I think the only way to do that is to have two Job
 configurations for the same Client, with two FileSet configurations for them
 respectively.
 This is very manual, since you will have to load-balance the different
 backup paths between the FileSets yourself, as in the sketch below.
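
 A rough sketch of that split (FileSet names and the split point are purely
 illustrative):

 FileSet {
   Name = "BigData-Part1"
   Include {
     Options { signature = MD5 }
     File = /data/part1
   }
 }

 FileSet {
   Name = "BigData-Part2"
   Include {
     Options { signature = MD5 }
     File = /data/part2
   }
 }

 The two Jobs are then identical (same Client, ideally a shared JobDefs)
 except that each references one of the FileSets; they can run concurrently,
 one per tape drive.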
 Also: unless there is a huge amount of information on this server,
 it's not normal for a backup Job to take 12 days to complete; there may
 be a bottleneck somewhere in your infrastructure / configuration.


 Regards,
 === Heitor Medrado de Faria - LPIC-III | ITIL-F
 Jan. 26 - Fev. 06 - Novo Treinamento Telepresencial Bacula:
 http://www.bacula.com.br/?p=2174
 +55 61 2021-8260 / +55 61 8268-4220
 Site: heitorfaria at gmail.com
 ===

 I would have to echo the above post in regards to bottlenecks. How much data
 are you backing up?
 I'm able to back up almost 500GB (over a million files) in around 4-5 hours,
 and that is to an RDX cartridge in a Tandberg Quikstation. Twice that
 (around 1TB) would probably only take about 8-10 hours, less than a day. If
 you have only around 1-4TB of data, 10-12 days is abnormal when dealing with
 local hardware (not over the Internet or a VPN), and I'd look into why it's
 taking so long instead of trying to work around it.




Re: [Bacula-users] [Bacula-devel] Despooling attrs does not finish

2014-10-21 Thread Blake Dunlap
This actually sounds like a bug in bacula. It shouldn't recurse into
the same structure; it should simply store the link and move on.

-Blake

On Tue, Oct 21, 2014 at 4:17 AM, Ulrich Leodolter
ulrich.leodol...@obvsg.at wrote:
 Hi all,

 I found the root cause of the problem: it was simply a MySQL performance
 problem, caused by a peculiar filesystem hierarchy on the user's desktop.

 There was one directory which was recursively repeated inside itself.

 C:/Users/name/Desktop/Exercise Files/CSS Core Concepts
 C:/Users/name/Desktop/Exercise Files/CSS Core Concepts/Exercise Files/CSS 
 Core Concepts/
 ...

 There was only one file inside CSS Core Concepts and six empty
 subdirectories, Chapter_01 to Chapter_06.  This hierarchy was repeated up to
 a path length of 4834.
 Very strange; maybe a zip file containing symlinks pointing to "." was
 unzipped on the desktop.

 The join on Path in the batch insert seems to perform very badly when
 comparing about 27,000 long path names like that.

 INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5, DeltaSeq)
   SELECT batch.FileIndex, batch.JobId, Path.PathId, Filename.FilenameId,
          batch.LStat, batch.MD5, batch.DeltaSeq
   FROM batch
   JOIN Path ON (batch.Path = Path.Path)
   JOIN Filename ON (batch.Name = Filename.Name)
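
 For anyone hitting the same thing, a quick check for pathological path
 lengths in the catalog (a hypothetical query, standard MySQL schema
 assumed):

 -- the ten longest paths Bacula has recorded; a runaway recursive
 -- directory like the one above will dominate this list
 SELECT PathId, LENGTH(Path) AS Len
 FROM Path
 ORDER BY Len DESC
 LIMIT 10;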


 Now we removed the almost empty tree C:/Users/name/Desktop/Exercise Files,
 and I am sure tomorrow's backup will finish in time without problems.

 Best regards
 Ulrich




 On Mon, 2014-10-20 at 15:21 +0200, alejandro alfonso fernandez wrote:
 Hi!

 I agree with Martin. It's not a Bacula error, it's a MySQL problem.

 I'm pretty sure that your /tmp partition becomes full, especially if you
 share both the Bacula Spool Directory (are you using SpoolData = yes?) and
 MySQL's temporary filesystem (both of them in /tmp by default).

 Try changing the tmpdir param of your MySQL server (my.cnf) to a bigger
 partition (don't forget to restart the service to commit the change).

 Example:
 # point the following paths to different dedicated disks
 # tmpdir= /tmp/
 tmpdir  = /var/tmp/mysql

 Doing a mysqlrepair to test database integrity would also be a good idea.

 Best regards!

 On Mon, Oct 20, 2014 at 12:57 PM, Martin Simmons mar...@lispworks.com
 wrote:

   On Sun, 19 Oct 2014 19:02:57 +0200, Ulrich Leodolter said:
  
   Hello Dan,
  
   On Sat, 2014-10-18 at 13:32 -0400, Dan Langille wrote:
On Oct 18, 2014,
at 4:03 AM, Ulrich Leodolter ulrich.leodol...@obvsg.at wrote:
   
 Hello,

  we have a Win7 backup which does not come to an end within
   MaxRunTime=12h.
  The server runs 7.0.5 (28 July 2014); the client has
  bacula-enterprise-win64-7.0.5.exe installed.  But the problem started about
  2 months ago, at which time windows client 5.2.10 was installed on the
  machine.

 the backup itself is about 100GB compressed and seems to finish
 on the client after about 6 hours, below are the last messages of
 the job before it gets stuck.

 2014-10-18 03:18:09 troll-sd JobId 635821: Committing spooled data to
 Volume Backup-0779. Despooling 1,692,736,419 bytes ...
 2014-10-18 03:18:18 troll-sd JobId 635821: Despooling elapsed time =
  00:00:09, Transfer rate = 188.0 M Bytes/second
 2014-10-18 03:18:19 troll-sd JobId 635821: Elapsed time=06:11:45,
  Transfer rate=4.691 M Bytes/second
 2014-10-18 03:18:22 troll-sd JobId 635821: Sending spooled attrs to
  the Director. Despooling 603,449,667 bytes .

 mysql status at the same time:

 # echo show full processlist | mysql
 IdUserHostdb  Command TimeState   Info
 6854  bacula  localhost   bacula  Sleep   522
   NULL
 6873  bacula  localhost   bacula  Query   21143   Sending
  dataINSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5,
  DeltaSeq) SELECT batch.FileIndex, batch.JobId, Path.PathId,
  Filename.FilenameId,batch.LStat, batch.MD5, batch.DeltaSeq FROM batch JOIN
  Path ON (batch.Path = Path.Path) JOIN Filename ON (batch.Name =
  Filename.Name)
 6899  rootlocalhost   NULLQuery   0   NULL
  show full processlist


  We have a bunch of other clients (about 30), a mixture of linux,
  win7 and mac powerpc.  All other backups have run without problems for
  years now; there are even larger backups, in size and in number of files.


  Does anyone have an idea why this single batch insert does not
  complete?

  Do I need to analyze the attrs spool file itself?

  Yesterday I optimized the bacula database, but that didn't help.
  There must be something special in the attrs spool file which the
  mysql server can't handle.  The server runs on standard CentOS 6.5
  x86_64.
   
This is something which should be asked in the user mailing list, not
  the devel mailing list.  I am replying to that list instead.
   
  
   

[Bacula-users] FileSet Question (related to snapshots)

2011-10-28 Thread Blake Dunlap
Greetings,

Minor question; figured I'd try the users list first in case you guys could
help. I have the following directory structure on a server that I back up,
but there's a minor twist:

FileSet {
  Name = filemonster-fs
  Include {
*snip*

File = /etc
File = /usr/local/sbin
File = /snapshot/webdata
  }
*snip*

}


That snapshot directory (/snapshot/webdata) is actually a mounted snapshot
of /data. Ideally, I would like bacula to store this as the actual path, not
the path it gets backed up from. It would greatly simplify restores, among
other things.

I know there is the option of stripping X pieces of the path from a fileset,
but to my knowledge it is fileset-wide. Is the best practice to just add a
snapshot dir to the beginning, keep the same path structure, and have a
separate FileSet for each such item?
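
For what it's worth, a sketch of the per-FileSet workaround I mean (names
are illustrative; strip_path is the existing Options directive, and this
assumes the snapshot is mounted so that the remainder of the path mirrors
the real one):

FileSet {
  Name = webdata-snap-fs
  Include {
    Options {
      signature = MD5
      strip_path = 1    # "/snapshot/data/..." is recorded as "/data/..."
    }
    # snapshot of /data mounted one level down so the stripped path matches
    File = /snapshot/data
  }
}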

The reason I ask is that I am also considering adding a PathReplace directive
(or something similar) to facilitate the above, and I want to gauge input
first and see if there is a better design option.


-Blake


Re: [Bacula-users] Win32 FD / Write error sending N bytes to Storage daemon

2011-06-10 Thread Blake Dunlap
$20 says you have the other bacula comm channel failing due to timeout of the
state on a forwarding device. Dropping spool sizes only increases the
frequency of communication across that path. You will likely see this
problem solved completely by setting a short-duration keepalive in your
bacula configs.
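
A minimal sketch of the keepalive I mean, assuming the stock Heartbeat
Interval directive (it exists in the FD, SD and Director resources; the
resource below is otherwise illustrative):

# bacula-fd.conf
FileDaemon {
  Name = client-fd
  FDport = 9102
  WorkingDirectory = /var/bacula/working
  Pid Directory = /var/run
  Heartbeat Interval = 60   # seconds; keeps the idle control channel alive
                            # through stateful firewalls while the SD despools
}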

-Blake

On Fri, Jun 10, 2011 at 20:48, Mike Seda mas...@stanford.edu wrote:

 I just encountered a similar error in RHEL 6 using 5.0.3 (on the server
 and client) with Data Spooling enabled:
 10-Jun 02:06 srv084 JobId 43: Error: bsock.c:393 Write error sending
 65536 bytes to Storage daemon:srv010.nowhere.us:9103: ERR=Broken pipe
 10-Jun 02:06 srv084 JobId 43: Fatal error: backup.c:1024 Network send
 error to SD. ERR=Broken pipe

 The way that I made it go away was to decrease Maximum Spool Size from
 200G to 2G. I also received the same error at 100G and 50G. I ended up
 just disabling data spooling completely on this box, since small spool
 sizes almost defeat the point of spooling at all.

 I've also been seeing some sporadic tape drive errors recently, too. So
 that may be part of the problem. I will be running the vendor-suggested
 diags on the library (Dell TL4000 with 2 x LTO-4 FC drives) in the next
 couple of days.

 Plus, this is a temporary SD instance that I will eventually migrate to
 new hardware and add large/fast SAN disk to for spooling. This should
 explain the reason for the small spool size settings... This box only
 has a 2 x 300 GB drive SAS 10K RAID 1.

 It'd be nice to see if anyone else has received this error on a similar
 HW/SW configuration.

 Mike


 On 06/07/2011 09:48 AM, Yann Cézard wrote:
  On 07/06/2011 18:10, Josh Fisher wrote:
  Another problem I see with Windows 7 clients is too aggressive power
  management turning off the Ethernet interface even though it is in use
  by bacula-fd. Apparently there is some Windows system call that a
  service (daemon) must make to tell Windows not to do power management
  while it is busy. I don't know what versions of Windows do that, other
  than 7 and Vista, but it is a potential problem.
  There is no power management on our servers :-D
 
  I just ran some tests this afternoon: I created a new bacula server
  with lenny / bacula 2.4.4, and downgraded the client to 2.4.4, to
  be sure that all was fine with the same fileset, etc.
  The test was OK, no problem, the job ran fine.
  Then I tested again with our production server (5.0.3) and
  the 2.4.4 client => network error, failed job.
  I upgraded the test bacula server to squeeze / bacula 5.0.2,
  and still with the 2.4.4 fd on the client => no problem!
 
  So it seems that the problem is clearly in network hardware on the
  server side.
 
  We will do some more tests on the network side (change
  switch port, change wire, see if no firmware update is available...),
  but now I really doubt that the problem is in bacula, nor it can be
  resolved in it.
 
  The strange thing is that the problems are only observed with win32
  clients. Perhaps the Windows (2003) TCP/IP stack is less fault-tolerant
  than the linux one in some very special case?
 
  Regards,
 




Re: [Bacula-users] Force Bacula to retain at least one Full backup per job?

2011-04-16 Thread Blake Dunlap

 So here is my question: Is there a way to prevent Bacula from pruning
 files/jobs that are part of the last full backup Job done?



This is being solved inherently in the next version for job/file pruning; I
do not know how volume-based retention will be affected. Supposedly it is
due within the next month, last I saw.

-Blake


Re: [Bacula-users] Retention Configuration

2011-01-18 Thread Blake Dunlap
On Tue, Jan 18, 2011 at 13:11, Martin Simmons mar...@lispworks.com wrote:

  On Tue, 18 Jan 2011 14:07:45 +0100, Paulo Martinez said:
 
  Am 18.01.2011 um 13:44 schrieb Martin Simmons:
   On Tue, 18 Jan 2011 12:21:44 +0100, Paulo Martinez said:
  
   How to handle different retentions for different pool?
  
   I think you can't in the current release -- per-pool retention is
   broken.  See
  
  
 http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg42297.html
  
 
  ok, thats a good hint, thanks.
 
  JFI: My setup is
 
  Pool Inc: J/F Retention 14 Days, V Retention 21 Days
  Pool Dif: J/F Retention 7 Months, V Retention 9 Months
  Pool Ful: J/F Retention 5 Years, V Retention 6 Years
 
   Well, I think I must put the highest job and file retention that exist
   in the pool resources into the client resource. Shorter job and file
   retentions are enforced anyway by the shorter volume retention.
  
   Thoughts?

 Yes, assuming two things:

 1) You also remove J/F retention from the pool resource.

 2) You have the same retention periods for all clients.

 __Martin


Honestly, when you get to the point of custom retentions per type / pool /
client, you're better off coding your business logic into a script that
manually purges jobs / volumes.

-Blake


Re: [Bacula-users] incremental backups too large

2011-01-13 Thread Blake Dunlap
2011/1/13 Lawrence Strydom qhol...@gmail.com

 Hi And thanks for all the replies so far.

 I'm running Bacula 5.0.3 on OpenSuSE 11.3, self-compiled with the following
 configure options:

 --enable-smartalloc --sbindir=/usr/local/bacula/bin
 --sysconfdir=/usr/local/bacula/bin --with-mysql --with-openssl --enable-bat
 --sysconfdir=/etc/bacula --enable-tray-monitor

 Here are my job and fileset definitions:

 ###


 JobDefs {
   Name = XPclients
   Type = Backup
   Level = Incremental
   FileSet = "XP Set"
   Schedule = "WeeklyCycle"
   Storage = File
   Messages = Standard
   Pool = File
   Priority = 10
   Reschedule On Error = yes
   Reschedule Interval = 1 hour
   Reschedule Times = 10

   Write Bootstrap = "/home/bacula/bacula/working/%c.bsr"
 }




 FileSet {
   Name = "XP Set"
   Include {
     Options {
       signature = MD5
       compression = GZIP
     }
     File = "C:/documents and settings"
   }
 }

 ###


 I understand that something is adding data, and logically the backup should
 grow. What I don't understand is why the entire file has to be backed up if
 only a few bytes of data have changed. It is mainly outlook.pst files and
 MSSQL database files that cause these large backups. Some of these files are
 several GB.

 My understanding of an incremental backup is that only changed data is
 backed up. It seems that at the moment my Bacula is doing differential
 backups, i.e. backing up the entire file if the timestamp has changed, even
 though I have configured it for incremental.

 regds

 Lawrence


From the first reply to your first email:

<quote>
Yes. The entire file is backed up again when it gets modified.
Incremental backups include all files modified since the last backup (Full,
Incremental or Differential). Incremental and differential backups are
file-based: if you have a 100GB file and it was modified, it will be backed
up and will use this space again.

Kleber
</quote>

-Blake


Re: [Bacula-users] Help on retention period

2011-01-12 Thread Blake Dunlap
On Wed, Jan 12, 2011 at 11:41, Graham Keeling gra...@equiinet.com wrote:

 On Wed, Jan 12, 2011 at 10:54:13AM -0600, Mark wrote:
  On Wed, Jan 12, 2011 at 9:35 AM, Valerio Pachera siri...@gmail.com
 wrote:
 
  
   SCOPE: We want the possibility of restore any file till 2 weeks ago.
  
 
  ...
 
 
    On Sunday of the third week, the first full backup gets overwritten.
  
   _ _ _ _ _ _ | _ _ _ _ _ _ |
  
    This means that, of the first week, I can only restore files present in
    the incremental backups.
    In other words I do not have a cycle of 2 weeks, but 1.
  
  
   When your first week's full backup gets overwritten, what are those
   incremental backups incremental to?  What you're describing sounds like
   what I expect fulls and incrementals to be.  When you overwrite the full,
   you've essentially orphaned the incrementals that were created based on
   that full backup.

 Bacula doesn't prevent backups that other backups depend on from being
 purged.

 If there is no previous Full before the Incrementals, you cannot easily
 restore
 the files in the Incrementals. You have to extract the files from the
 individual volumes with the 'bextract' command.

 It also doesn't have anything that indicates that a particular backup was
 based on another particular backup. It calculates everything based on
 dates.

 So, if an Incremental from the middle of a sequence got purged (somehow),
 bacula won't notice and will happily restore from the latest Incremental.
 It is good that you can restore something, but bad because the files you
 get
 back may well not be the same as the ones that were on your client machine
 on
 the day that the Incremental was made.


 Anyway, this all means that you need to set your retention times very
 carefully.

 If you set them so that they cover the periods that you're worried
 about - like Valerio's example of wanting to restore from two weeks back...
 F I I I I I I F I I I I I I I F
 ...you might decide to set the retention of Fulls to 3 weeks.
 But be careful! If, for some reason, a Full backup fails, time will march
 on
 and you will end up having a Full that other backups depend on getting
 purged
 (imagine the 2nd 'F' in the sequence above being missed out).


 I actually wrote a patch that enforces that bacula only purges backups
 that other backups don't depend on. But it makes some assumptions about
 your
 setup. It assumes that you are using one job per volume and it assumes that
 you are not using Job / File retentions (ie, you have set them very high)
 and
 instead rely purely on Volume retentions.

 So, if your retention is set to one week, the purge sequence will be like
 this
 (with new backups being made on the right).

 F I I I I I I F I I I I I I F
 F I I I I I   F I I I I I I F I
 F I I I I F I I I I I I F I I
 F I I I   F I I I I I I F I I I
 F I I F I I I I I I F I I I I
 F I   F I I I I I I F I I I I I
 F F I I I I I I F I I I I I I
  F I I I I I I F I I I I I I F
  F I I I I I   F I I I I I I F I

 This also suffers if a Full backup is missed, because you end up having to
 keep more old backups.

 I can upload this patch if it is interesting to people.



 P.S. One last thing - try not to worry about what bacula does if your clock
 somehow goes wrong and decides that it is 2037. Or 1970. Or last week. :)


This is supposedly better in the upcoming version, at least in regards to
pruning. If they have gotten the pruning to work properly on pools as well,
it could potentially make a lot of lives easier. I personally never
understood the logic behind the implicit assumptions that the restore and
backup algorithms make in regards to job tree structure, but at least things
are progressing. I'm just still hoping that they will some day include a
more realistic concept for managing the locations of data and the
implications for backups / restores.

-Blake


Re: [Bacula-users] Serializing catalog backup

2011-01-10 Thread Blake Dunlap
On Mon, Jan 10, 2011 at 18:18, Phil Stracchino ala...@metrocast.net wrote:

 On 01/10/11 16:49, Mike Ruskai wrote:
  So simply having the catalog backup be a different priority ensures that
  no other job can run at the same time, provided mixed priorities are
  disallowed (that would allow the higher-priority backup jobs to start
  while the catalog backup is under way).  Which is just as well, since I
  don't like the idea of relying on database or table locks.

 Yes, disabling mixed priorities would prevent any higher-priority job
 from starting while the catalog update was running.


 --
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.


What you may wish to request (I can't imagine it would be difficult) is a
maximum mixed-priority level option, where anything above that number
ignores mixed priorities being allowed and waits until everything else has
stopped. I would certainly vote for such a feature.

I will look into doing it myself, but I cannot guarantee anything, as my
boss would first have to agree that we have an internal business need for
such an additional feature, and I'm not sure we need it at the moment,
though I certainly see the appeal of the capability.
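
In the meantime, the behavior being discussed can be sketched like this
(resource names are illustrative; Priority and Allow Mixed Priority are
existing Job directives):

Job {
  Name = "BackupCatalog"
  Type = Backup
  Client = dir-fd
  FileSet = "Catalog"
  Schedule = "NightlyAfterBackups"
  Storage = File
  Pool = Default
  Messages = Standard
  Priority = 11    # everything else runs at Priority = 10; with
                   # Allow Mixed Priority at its default of no, this job
                   # waits for all priority-10 jobs and blocks new ones
  RunBeforeJob = "/etc/bacula/make_catalog_backup.pl MyCatalog"
}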


Re: [Bacula-users] Virtual Backups - Do we actually need full backups anymore?

2011-01-07 Thread Blake Dunlap
On Fri, Jan 7, 2011 at 06:30, Mister IT Guru misteritg...@gmx.com wrote:

 On 07/01/2011 12:23, James Harper wrote:
  Suggestion:
 
  Schedule the day's Incremental, then schedule the VirtualFull, say,
  30 minutes later.
 
  Put a RunBeforeJob script on the incremental that creates a lockfile
  (in a properly race-safe manner, of course) for the client.
 
  Put a RunAfterJob script on the incremental that removes the lockfile.
  Put a RunBeforeJob script on the VirtualFull job that checks for
  presence of the client's lockfile, and, if it finds it still present,
  sleeps for five minutes before checking again, and does not return
  until the lockfile has been gone for two consecutive checks (thus making
  certain there is a minimum of five minutes for attribute metadata from
  the job to be flushed).
 
 
  Brilliant - sounds workable. I just don't know if my bacula skills are
  up to it; I'm still very fresh to it, but the theory of your suggestion
  is the closest I guess we can come. I will look into it - thank you
  bacula list :)
  I'm not completely sure, but I think that Bacula figures out what media
  it is going to use before it calls RunBeforeJob. This would mean that if
  you schedule your VirtualFull while your Incremental is running, the
  VirtualFull will not include the Incremental backup, no matter how much
  you wait inside the VirtualFull's RunBeforeJob script.
 
  Does anyone know for sure?
 
  James
 I'm thinking that a workaround for this would be a script that checks if
 the previous incremental has finished; if it hasn't, exit the job and
 schedule a new one for $timenow+30mins. Then in 30 mins it'll check again.
 That way, no overlap.

 What I just said works on paper, but may not actually be runnable in
 bacula. I say that because google will index this msg and people may not
 read the whole thread when they find this :)


I have a specific perl script that runs each morning and fires off the top x
(in our case 3) vfulls queued to run, based on which incrementals were seen
running successfully the night before and which jobs have had no full within
x (in my case 90) days.

We did this because of the poor scheduling capability of bacula, and the
desire to have constant anniversary-based backups instead of some kind of
fixed weekend full-backup schedule, which would be impossible to complete in
a window.

I am happy to post it if anyone wants.

-Blake


Re: [Bacula-users] Virtual Backups - Do we actually need full backups anymore?

2011-01-07 Thread Blake Dunlap
On Fri, Jan 7, 2011 at 16:20, Blake Dunlap iki...@gmail.com wrote:

  On Fri, Jan 7, 2011 at 13:15, Phil Stracchino ala...@metrocast.net wrote:

 Blake,
 By all means post it, and you might want to consider submitting it as a
 contributed support script.


   Script to automatically start VirtualFulls based on Full Backup Age
  attached; currently it has some hardcoding in the script, but that is
  easily changeable. Designed to be run from crontab, piped to a mail program
  if there are any issues.
 
  Assumes a fairly simple backup structure, but is easily customizable.
  Originally designed for our use, which consisted of an age-based backup
  schedule, each night set to incremental, with a max diff age of a week and
  a max full age set about 2 weeks longer than the age set in the vmon script.
 
  Script looks at the last x days of incrementals (so as to not have to read
  configs etc.), and the most recent full backup for each job, and orders them
  from oldest to newest. The top x jobs with fulls older than x days have
  vfull backups spawned via console.
 
  Reason for this design was so that we would have staggered backups, so as
  not to overload the backup server, as well as to stay under the backup
  window.
 
  -Blake


Attached is a cleaner version.
#!/usr/bin/perl
use strict;
use Getopt::Long;
use DBI;

my $allowed = 3;
my $fullinterval = 90;
my $scaninterval = "3 DAY";
my $spool = "no";
my $priority = 7;
my $debug = 0;
my $exit = 0;
my %jobs;
my $batch = 0;
my @validjobs;
my @jobstorun;
my @jobstocheck;
my $Row;
$fullinterval--;

GetOptions("debug+" => \$debug,
           "allowed:i" => \$allowed,
           "priority:i" => \$priority,
           "spool:s" => \$spool,
           "fullinterval:s" => \$fullinterval,
           "batch+" => \$batch
);

print "Debug: $debug\n" if $debug;
print "Allowed: $allowed\n" if $debug;
print "Interval: $fullinterval\n" if $debug;
print "Batch: $batch\n" if $debug;

my $username = 'bacula'; my $password = ''; my $database = 'bacula';
my $hostname = 'localhost'; my $port = 3306;

my $dbh = DBI->connect("dbi:mysql:database=$database;" .
                       "host=$hostname;port=$port", $username, $password);

die "Unable to connect to Database" unless $dbh;

# Get list of recent successful Incremental jobs
my $SQL = "SELECT DISTINCT(Name) AS Name FROM Job WHERE Level IN ('I', 'D')
AND JobStatus = 'T' AND StartTime > DATE_SUB(CURDATE(), INTERVAL $scaninterval)";

my $Select = $dbh->prepare($SQL);
die "Unable to run query" unless $Select;
$Select->execute();

my $rv = $Select->rows;

if ($rv > 0) {
    print "The following incrementals were found:\n" if $debug > 1;
    while ($Row = $Select->fetchrow_hashref) {
        $jobs{ $Row->{'Name'} }{'Active'} = 1;
        print "  $Row->{'Name'}\n" if $debug > 1;
    }
    $Select->finish;
} else {
    $Select->finish;
    die "Something's wrong, no successful Incremental jobs returned!\n";
}

# Get list of the last successful full backup for each Job
$SQL = "SELECT Max(JobId) AS JobId, Name, DATEDIFF(CURDATE(), Max(StartTime))
AS Age FROM Job WHERE Level = 'F' AND JobStatus = 'T' GROUP BY Name";

$Select = $dbh->prepare($SQL);
die "Unable to run query" unless $Select;
$Select->execute();

$rv = $Select->rows;

if ($rv > 0) {
    # Attach the last successful full backup to each job
    while ($Row = $Select->fetchrow_hashref) {
        if ($jobs{ $Row->{'Name'} }{'Active'} eq 1) {
            $jobs{ $Row->{'Name'} }{'Age'} = $Row->{'Age'};
            $jobs{ $Row->{'Name'} }{'JobId'} = $Row->{'JobId'};
            push(@validjobs, $Row->{'Name'});
        }
    }
    $Select->finish;
} else {
    $Select->finish;
    die "Something's wrong, no successful Full jobs returned!\n";
}

my @joblist = (sort { $jobs{$b}{'Age'} <=> $jobs{$a}{'Age'} } @validjobs);

if (scalar(@joblist) > 0) {
    print "The following Fulls were found that match:\n" if $debug > 1;
    print "  Age ID  Name\n" if $debug > 1;
    foreach my $jobtocheck (@joblist) {
        print "  $jobs{$jobtocheck}{'Age'}   $jobs{$jobtocheck}{'JobId'} $jobtocheck\n" if $debug > 1;
        if ($jobs{$jobtocheck}{'Age'} > $fullinterval &&
            scalar(@jobstorun) < $allowed) {
            push(@jobstorun, $jobtocheck);
        }
    }
}

if (scalar(@jobstorun) > 0) {
    print "Running the following virtual full jobs today:\n";
    foreach my $jobtocheck (@jobstorun) {
        print "  $jobs{$jobtocheck}{'Age'}   $jobtocheck\n";
        my $test = `/sbin/bconsole -c /etc/bacula/bconsole.conf <<EOF
run job=$jobtocheck level=VirtualFull SpoolData=$spool priority=$priority yes
EOF` unless $debug;
    }
    $exit = 1 if $batch;
}

$dbh->disconnect();

exit $exit;
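
For reference, a hypothetical crontab entry matching the run-from-crontab
design described above (the install path is illustrative):

0 6 * * * /usr/local/sbin/vmon.pl --allowed=3 --fullinterval=90 2>&1 | mail -s "vfull scheduler" root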

Re: [Bacula-users] Virtual Backups - Do we actually need full backups anymore?

2011-01-07 Thread Blake Dunlap
On Fri, Jan 7, 2011 at 13:15, Phil Stracchino ala...@metrocast.net wrote:

 Blake,
 By all means post it, and you might want to consider submitting it as a
 contributed support script.


 Script to automatically start VirtualFulls based on Full Backup Age
attached; currently it has some hardcoding in the script, but that is easily
changeable. Designed to be run from crontab, piped to a mail program if there
are any issues.

Assumes a fairly simple backup structure, but is easily customizable.
Originally designed for our use, which consisted of an age-based backup
schedule, each night set to incremental, with a max diff age of a week and a
max full age set about 2 weeks longer than the age set in the vmon script.

Script looks at the last x days of incrementals (so as to not have to read
configs etc.), and the most recent full backup for each job, and orders them
from oldest to newest. The top x jobs with fulls older than x days have
vfull backups spawned via console.

Reason for this design was so that we would have staggered backups, so as
not to overload the backup server, as well as to stay under the backup window.

-Blake
#!/usr/bin/perl
use strict;
use Getopt::Long;
use DBI;

my $allowed = 3;
my $fullinterval = 90;
my $scaninterval = "3 DAY";
my $spool = "no";
my $priority = 7;
my $debug = 0;
my $exit = 0;
my %jobs;
my $batch = 0;
my @validjobs;
my @jobstorun;
my @jobstocheck;
my $Row;
$fullinterval--;

GetOptions("debug+" => \$debug,
           "allowed:i" => \$allowed,
           "priority:i" => \$priority,
           "spool:s" => \$spool,
           "fullinterval:s" => \$fullinterval,
           "batch+" => \$batch
);

print "Debug: $debug\n" if $debug;
print "Allowed: $allowed\n" if $debug;
print "Interval: $fullinterval\n" if $debug;
print "Batch: $batch\n" if $debug;

my $username = 'bacula'; my $password = 'back...@n'; my $database = 'bacula';
my $hostname = 'localhost'; my $port = 3306;

my $dbh = DBI->connect("dbi:mysql:database=$database;" .
                       "host=$hostname;port=$port", $username, $password);

die "Unable to connect to Database" unless $dbh;

# Get list of recent successful Incremental jobs
my $SQL = "SELECT DISTINCT(Name) AS Name FROM Job WHERE Level IN ('I', 'D')
AND JobStatus = 'T' AND StartTime > DATE_SUB(CURDATE(), INTERVAL $scaninterval)";

my $Select = $dbh->prepare($SQL);
die "Unable to run query" unless $Select;
$Select->execute();

my $rv = $Select->rows;

if ($rv > 0) {
    print "The following incrementals were found:\n" if $debug > 1;
    while ($Row = $Select->fetchrow_hashref) {
        $jobs{ $Row->{'Name'} }{'Active'} = 1;
        print "  $Row->{'Name'}\n" if $debug > 1;
    }
    $Select->finish;
} else {
    $Select->finish;
    die "Something's wrong, no successful Incremental jobs returned!\n";
}

# Get list of the last successful full backup for each Job
$SQL = "SELECT Max(JobId) AS JobId, Name, DATEDIFF(CURDATE(), Max(StartTime))
AS Age FROM Job WHERE Level = 'F' AND JobStatus = 'T' GROUP BY Name";

$Select = $dbh->prepare($SQL);
die "Unable to run query" unless $Select;
$Select->execute();

$rv = $Select->rows;

if ($rv > 0) {
    # Attach the last successful full backup to each job
    while ($Row = $Select->fetchrow_hashref) {
        if ($jobs{ $Row->{'Name'} }{'Active'} eq 1) {
            $jobs{ $Row->{'Name'} }{'Age'} = $Row->{'Age'};
            $jobs{ $Row->{'Name'} }{'JobId'} = $Row->{'JobId'};
            push(@validjobs, $Row->{'Name'});
        }
    }
    $Select->finish;
} else {
    $Select->finish;
    die "Something's wrong, no successful Full jobs returned!\n";
}

my @joblist = (sort { $jobs{$b}{'Age'} <=> $jobs{$a}{'Age'} } @validjobs);

if (scalar(@joblist) > 0) {
    print "The following Fulls were found that match:\n" if $debug > 1;
    print "  Age ID  Name\n" if $debug > 1;
    foreach my $jobtocheck (@joblist) {
        print "  $jobs{$jobtocheck}{'Age'}   $jobs{$jobtocheck}{'JobId'} $jobtocheck\n" if $debug > 1;
        if ($jobs{$jobtocheck}{'Age'} > $fullinterval &&
            scalar(@jobstorun) < $allowed) {
            push(@jobstorun, $jobtocheck);
        }
    }
}

if (scalar(@jobstorun) > 0) {
    print "Running the following virtual full jobs today:\n";
    foreach my $jobtocheck (@jobstorun) {
        print "  $jobs{$jobtocheck}{'Age'}   $jobtocheck\n";
        my $test = `/sbin/bconsole -c /etc/bacula/bconsole.conf <<EOF
run job=$jobtocheck level=VirtualFull SpoolData=$spool priority=$priority yes
EOF` unless $debug;
    }
    $exit = 1 if $batch;
}

$dbh->disconnect();

exit $exit;

Re: [Bacula-users] VirtualFull using tape drives

2011-01-07 Thread Blake Dunlap
 I am using a TL2000 tape library with two drives.
 The technique can't work if you only have one tape drive.

 I'm taking incremental backups Mon-Fri.
 Then after the incremental backups are finished on Friday I consolidate
 them into a VirtualFull backup.

 For a VirtualFull backup to work, it takes the previous full backup and the
 incremental backups since, and combines them to produce new tape(s) that
 will be promoted to being the latest full backup.
 The first incremental backup ever run for a host will auto-upgrade to a
 full backup, so you're covered there.

 The process seems to work well for me, after I worked around a minor
 problem.
 After the incremental backup is complete, tapes are left in the drives.
 If tape X is in drive 1 after the incremental backups are complete and the
 VirtualFull wishes to load it into drive 0 to read from, then Bacula
 can't eject the tape from drive 1 and load it into drive 0, and it
 deadlocks waiting for user intervention.
 User intervention doesn't help either, as the tape you want to eject is
 locked by Bacula, so you end up stopping daemons and interrupting the
 backup.

 The solution is to use an administrative job that is scheduled to eject all
 tapes from the drives after the incremental backups are done, and before the
 VirtualFull backup starts.
 Perhaps I should have just raised a bug on that...

 Regards,

 --
 Jim Barber
 DDI Health


I see you ran into the same bug I did
(http://bugs.bacula.org/view.php?id=1657). Do you mind sharing your admin
job? It would save me some scripting on my home server.

-Blake
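
For anyone else hitting that bug, the workaround Jim describes presumably
boils down to an admin job or cron entry running something along these lines
(a guess at the shape, not his actual job; check how your Bacula version
wants the drive addressed on release):

/sbin/bconsole -c /etc/bacula/bconsole.conf <<EOF
release storage=Autochanger drive=0
release storage=Autochanger drive=1
EOF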


Re: [Bacula-users] Bacula and automatic labelling of volumes

2010-11-23 Thread Blake Dunlap
Bacula does not behave the way you wish; you could script parts of this to
accomplish what you want, but you cannot do it natively.

On Tue, Nov 23, 2010 at 12:53, Thomas Schweikle t...@vr-web.de wrote:

 Hi!

 I have set up bacula with automatic labelling of volumes:

 In bacula-dir.conf:
 Storage {
  Name = File
  Address = bacula
  SDPort = 9103
  Password = 
  Device = FileStorage
  Media Type = File
 }

 Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes
  recycle Volumes
  AutoPrune = yes
  Volume Retention = 10 days
  Maximum Volume Bytes = 4G
  Maximum Volume Jobs = 1
  Maximum Volumes = 100
  LabelFormat = T${JobName}
 }

 In bacula-sd.conf:
 Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /srv/bacula
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
 }

  Automatic creation of volumes and then labelling them fails. Bacula
  hangs waiting for operator intervention --- this is not what I
  expected it to do. I'd like it to:

  - Start a job,
  - look for an empty volume; if none is there, create one, then label it,
  - push it into the pool, mount it,
  - back up the job's data,
  - after finishing, keep the volume with this one job around,
  - delete it when retention time has come and there is
   not enough space for a new volume.

  After reading the fine manual, googling around, and not finding
  anything helpful: does anyone have a setup doing what I expect mine
  to do, running, and is willing to share?

 I'd appreciate any help!

 --
 Thomas





Re: [Bacula-users] Bacula and automatic labelling of volumes

2010-11-23 Thread Blake Dunlap
Recycled volumes are not relabeled; AFAIK they are only labeled when they
are created.

On Tue, Nov 23, 2010 at 16:17, Paulo Martinez martinez...@googlemail.com wrote:

 Am 23.11.2010 um 19:53 schrieb Thomas Schweikle:
  Hi!
 
  I have set up bacula with automatic labelling of volumes:
 
  In bacula-dir.conf:
  Storage {
   Name = File
   Address = bacula
   SDPort = 9103
   Password = 
   Device = FileStorage
   Media Type = File
  }
 
  Pool {
   Name = File
   Pool Type = Backup
   Recycle = yes
   recycle Volumes
   AutoPrune = yes
   Volume Retention = 10 days
   Maximum Volume Bytes = 4G
   Maximum Volume Jobs = 1
   Maximum Volumes = 100
   LabelFormat = T${JobName}
  }
 
  In bacula-sd.conf:
  Device {
   Name = FileStorage
   Media Type = File
   Archive Device = /srv/bacula
   LabelMedia = yes;
   Random Access = Yes;
   AutomaticMount = yes;
   RemovableMedia = no;
   AlwaysOpen = no;
  }
 
   Automatic creation of volumes and then labelling them fails. Bacula
   hangs waiting for operator intervention --- this is not what I
   expected it to do. I'd like it to:
  
   - Start a job,
   - look for an empty volume; if none is there, create one, then label it,
   - push it into the pool, mount it,
   - back up the job's data,
   - after finishing, keep the volume with this one job around,
   - delete it when retention time has come and there is
    not enough space for a new volume.
 
   After reading the fine manual, googling around, and not finding
   anything helpful: does anyone have a setup doing what I expect mine
   to do, running, and is willing to share?
 
  I'd appreciate any help!


  Hi Thomas, I have the same configuration and it is doing the automatic
  labeling (Media Type = File).

  What about this "recycle Volumes" line in your Pool definition?

 PM









Re: [Bacula-users] Understanding purge

2010-11-22 Thread Blake Dunlap
On Mon, Nov 22, 2010 at 11:02, Dermot Beirne dermot.bei...@dpd.ie wrote:

  That particular feature would be good news for me at least!
  I definitely would like to see the ability to automatically
  purge volumes as well, and leave it to the user to decide whether they
  want to preserve the data as long as possible.
  The patch you posted a link to is for version 3.0.1.
  Do you know if it would still apply to 5.0.3?

 If you can point me to a description of how to apply patches to bacula
 also, that would be helpful.  I haven't used patches before.


Do you normally use the binary distributions (rpms etc) or compile bacula
from source?

-Blake


Re: [Bacula-users] Understanding purge

2010-11-18 Thread Blake Dunlap
If you say so; I guess it will be nice to have one less patch to manually
merge.

Personally, I'd rather see them add VirtualDiffs and VirtualFullCopies, fix
the pool-based expiration (really really really want that), add the option to
automatically purge expired volumes instead of only keeping data as long as
possible so I can stop having to script it, provide better logic for
restores from multiple logical sites like offsite tapes or slow-link
datacenters, block-based dedup, a more intelligent file-based store because
they aren't tapes, etc. etc. =)

-Blake

On Thu, Nov 18, 2010 at 12:05, Dermot Beirne dermot.bei...@dpd.ie wrote:

 Hello Blake,


   Basically what I see here is that you really want a migration, not a copy
   job. This, coupled with the patch from
   http://www.mail-archive.com/bacula-de...@lists.sourceforge.net/msg04724.html
   should do what you want if you set the new option in the migration job
   (from the patch, I believe it is Migrate Purge Jobs = yes; as I said, it's
   been a while).

  Sorry, we missed your excellent idea, and I think that we can add it very
  quickly, with one minor modification to the directive name (more something
  like PurgeMigrateJob or PurgeMigrationJob).

  If you want to help with the documentation (new feature section and Job
  resource directive) and regression testing of your feature, it would be
  great.

 Bye


 This is great news!
 Dermot




Re: [Bacula-users] Understanding purge

2010-11-17 Thread Blake Dunlap
If you guys would like, I can attach the patch we apply to make migrations
purge the jobs themselves as well and thus cause volumes to properly
autoprune.

We also run a perl script to prune/purge any expired volumes every few
hours.
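
A trimmed sketch of that script, assuming the MySQL catalog schema (the
retention test mirrors what bacula itself would apply; treat it as a
starting point, not our exact production code):

#!/usr/bin/perl
use strict;
use DBI;

my $dbh = DBI->connect("dbi:mysql:database=bacula;host=localhost",
                       'bacula', '') or die "Unable to connect to Database";

# Volumes whose retention window has passed but which are not yet purged.
my $sql = "SELECT VolumeName FROM Media
           WHERE VolStatus IN ('Full', 'Used')
             AND LastWritten < DATE_SUB(NOW(), INTERVAL VolRetention SECOND)";

for my $vol (map { $_->[0] } @{ $dbh->selectall_arrayref($sql) }) {
    # prune flips the volume to Purged once its jobs/files are gone,
    # which lets autorecycling pick it up immediately
    system("/sbin/bconsole -c /etc/bacula/bconsole.conf <<EOF
prune volume=$vol yes
EOF");
}

$dbh->disconnect();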

-Blake


On Wed, Nov 17, 2010 at 05:58, Graham Keeling gra...@equiinet.com wrote:

 On Wed, Nov 17, 2010 at 11:32:44AM +, Dermot Beirne wrote:
  Hi Graham,
   I think this is a key feature, and am surprised it's not easily
   possible.  The user should have the choice.  I saw the blog entries
   you refer to, and that bug appears to have been fixed, but I don't see
   what use it is in the current system.  If it's not possible to get
   bacula to purge a volume until it has absolutely no option (which it
   then truncates and relabels anyway), then under what circumstances is
   the actiononpurge=truncate feature useful?
 
  Hence I am wondering if I am misunderstanding how Bacula works in
  regard to purging volumes.
 
  There must have been a good reason to implement this new feature, and
  I think it's what I need, but I can't see how to use it properly.
 
  Dermot.

 I agree, and I am sorry because I can't offer you any more help.
 I was just stating how things appear to stand at the moment.

  On 17 November 2010 11:03, Graham Keeling gra...@equiinet.com wrote:
   On Wed, Nov 17, 2010 at 10:48:36AM +, Dermot Beirne wrote:
   Hi Phil,
  
   Here is the pool definitions I'm using.
  
   Is there some way I can get the entire disk pool volumes purged when
   they expire, so they are all truncated and all that space is released.
  
   I don't think that you're going to have much luck with this.
   If you do, I would be interested in how you did it.
  
    When the ActionOnPurge feature originally came along, I found a dangerous
    bug in it. The bacula people said that they would try to fix it in the
    next version.
    But as far as I understand it, the fix is that you shouldn't try to run
    it automatically.
  
  
 http://sourceforge.net/apps/wordpress/bacula/2010/02/01/new-actiononpurge-feature/
  
 http://sourceforge.net/apps/wordpress/bacula/2010/01/28/action-on-purge-feature-broken-in-5-0-0/
  
  
  





Re: [Bacula-users] Understanding purge

2010-11-17 Thread Blake Dunlap
Basically what I see here is that you really want a migration, not a copy job.
This, coupled with the patch from
http://www.mail-archive.com/bacula-de...@lists.sourceforge.net/msg04724.html,
should do what you want if you set the new option in the migration job (from
the patch, I believe it is Migrate Purge Jobs = yes; as I said, it's been a
while).

You may also have to use a script to run the purge command on empty /
expired volumes to get the truncate to act like you want before the volumes
are reused, but I have a skeleton for that as well if needed.

-Blake

On Wed, Nov 17, 2010 at 16:07, Dermot Beirne dermot.bei...@dpd.ie wrote:

 Hi Blake,
 That sounds great, exactly what I've been looking for by the sound of it.
 If you can provide this and some details of how to get it working, I
 for one would be very interested and grateful.

  Incidentally, how would using such a patch affect upgrading Bacula in
  the future, etc.?  I presume you are using it in a production environment
  and find it stable.

 Does this patch replace the actiononpurge feature entirely?

 I'd suggest something like this should be considered for the next
 version of Bacula.

 Dermot.




 On 17 November 2010 18:42, Blake Dunlap iki...@gmail.com wrote:
  If you guys would like, I can attach the patch we apply to make
 migrations
  purge the jobs themselves as well and thus cause volumes to properly
  autoprune.
 
  We also run a perl script to prune/purge any expired volumes every few
  hours.
 
  -Blake
 
 
  On Wed, Nov 17, 2010 at 05:58, Graham Keeling gra...@equiinet.com
 wrote:
 
  On Wed, Nov 17, 2010 at 11:32:44AM +, Dermot Beirne wrote:
   Hi Graham,
   I think this is a key feature, and am surprised it's not easily
   possible.  The user should have the choice.  I saw the blog entries
   you refer to, and that bug appears to have been fixed, but I don't see
   what use it is in the current system.  If it's not possible to get
   bacula to purge a volume until it has absolutely no option, (which it
   then truncates and relabels anyway) then under what circumstances is
    the actiononpurge=truncate feature useful?
  
   Hence I am wondering if I am misunderstanding how Bacula works in
   regard to purging volumes.
  
   There must have been a good reason to implement this new feature, and
   I think it's what I need, but I can't see how to use it properly.
  
   Dermot.
 
  I agree, and I am sorry because I can't offer you any more help.
  I was just stating how things appear to stand at the moment.
 
   On 17 November 2010 11:03, Graham Keeling gra...@equiinet.com
 wrote:
On Wed, Nov 17, 2010 at 10:48:36AM +, Dermot Beirne wrote:
Hi Phil,
   
Here is the pool definitions I'm using.
   
Is there some way I can get the entire disk pool volumes purged
when
they expire, so they are all truncated and all that space is
released.
   
I don't think that you're going to have much luck with this.
If you do, I would be interested in how you did it.
   
When the ActionOnPurge feature originally came along, I found a
dangerous
bug in it. The bacula people said that they would try to fix it in
 the
next version.
But as far as I understand it, the fix is that you shouldn't try to
run it
automatically.
   
   
   
 http://sourceforge.net/apps/wordpress/bacula/2010/02/01/new-actiononpurge-feature/
   
   
 http://sourceforge.net/apps/wordpress/bacula/2010/01/28/action-on-purge-feature-broken-in-5-0-0/
   
   
   
 
 
 
 


Re: [Bacula-users] broken threading

2010-11-01 Thread Blake Dunlap
On Mon, Nov 1, 2010 at 06:42, Ralf Gross ralf-li...@ralfgross.de wrote:

 Dan Langille schrieb:
  Over the past few days, I've become increasingly impatient and
  frustrated by posts that break threading.  That is, posts that lack the
  headers necessary for proper threading of emails.  Specifically, the
  References: and In-Reply-To: headers are not being preserved.
 
  cases in point, the following threads:
 
  * Cannot build bacula-client 5.0.3 on FreeBSD
  * Searching for files
  * PLEASE READ BEFORE POSTING
 
  As can be found here:
 
 http://marc.info/?l=bacula-users&r=1&b=201010&w=2
 
  Thanks for the rant.  :)


 The only way to stop this would be blocking all mails from the
 forum2mailinglist gateway backupcentral.com.

 http://backupcentral.com/component/mailman2/

 It's the same situation on the backuppc list...

 Ralf


Yeah, I'm pretty sure it's the forum integration from Backup Central's forums.
I'll second a vote for blocking them if you are considering it. =)

-Blake


Re: [Bacula-users] Maximum Concurrent Jobs

2010-10-29 Thread Blake Dunlap
All of the information is stored in the DB at a very granular level; there's no
need to fall back on volume size to determine storage use. I'll gladly help you
more off-list if you need assistance collecting it; I've written a billing
application using the same data.

-Blake
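
For anyone wanting to reproduce that, here is a sketch of the kind of catalog
query involved. Table and column names follow the standard Bacula catalog
schema, but treat it as a starting point rather than a tested report:

SELECT Client.Name       AS client,
       COUNT(Job.JobId)  AS jobs,
       SUM(Job.JobBytes) AS bytes
  FROM Job
  JOIN Client ON Client.ClientId = Job.ClientId
 WHERE Job.Type = 'B'          -- backup jobs only
   AND Job.JobStatus = 'T'     -- terminated OK
 GROUP BY Client.Name
 ORDER BY bytes DESC;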

On Fri, Oct 29, 2010 at 08:43, Mark Gordon mgor...@tdarx.com wrote:


 No, you are correct. Simultaneous jobs allow multiple jobs to use the
 same storage device at the same time, provided that only one pool is
 involved, because a single (non-autochanger) storage device can only
 load one volume at a time.

 John

 So that's the sticking point. 1 volume at a time unless I want to dump 8
 clients into the same volume with no way to tell who is using what
 amount of storage space.


 Thanks
 Mark


 This mail was sent via Mail-SeCure System.







[Bacula-users] Minor feature request (just something that would be handy, pretty low priority)

2009-10-09 Thread Blake Dunlap
Item n:   single release command that releases all drives in an autochanger 
in sequence
  Origin: Blake Dunlap (bl...@nxs.net)
  Date:   10/07/2009
  Status: Request

  What:   It would be nice if there was a release command that would release 
all drives in an autochanger instead of having to do each one in turn.

  Why:    It can take some time for a release to occur, and the commands must 
be given for each drive in turn, which can quickly add up if there are several 
drives in the library. (Having to watch the console to give each command can 
waste a good bit of time once you get into the 16-drive range, where the tapes 
can take up to 3 minutes each to eject.)

  Notes:  Due to the way some autochangers/libraries work, you cannot assume 
that new tapes inserted will go into slots that are not currently believed to 
be in use by Bacula (the tape from that slot is in a drive). This would make 
any changes in configuration quicker/easier, as all drives need to be released 
before any modifications to slots.
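
In the meantime, the workaround is a wrapper that issues the release commands
in sequence. A minimal sketch; the storage name and drive count are examples,
and it assumes your bconsole accepts the drive= keyword on release as it does
on mount/unmount:

#!/bin/sh
# Release every drive of an autochanger in turn via bconsole.
STORAGE=Autochanger
DRIVES=16
i=0
while [ "$i" -lt "$DRIVES" ]; do
    echo "release storage=${STORAGE} drive=${i}" | bconsole
    i=$((i + 1))
done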



Re: [Bacula-users] [Bacula-devel] Bacula project design process

2009-09-26 Thread Blake Dunlap

 Hello,

 Recently several email design threads have pointed out an important
 deficiency
 in the Bacula project that I would like to discuss. We (I) have already
 designed (mostly in my head) a good number of future projects -- including
 how to support portable clients better, deleting volume on purge, clients
 initiating backups, ...  Often these are put down in sketchy notes and
 shared
 with the programmer who wants to do the work (mostly Eric).  Generally the
 solutions designed are far more general and complete and already encompass
 much of what is being discussed via email.  The problem is that this
 process
 is not well defined and is not implemented in a way that a number of users
 can participate or even see what those proposed designs are.

 It seems to me that we need a more public way to share Bacula design
  proposals.
 Launchpad has a nice way of doing that (I forgot exactly what they call
 it),
 but I have recently moved off of Launchpad because as a project manager, I
 was unable to properly structure the project (it seems that only the
  Launchpad programmers can do that); in addition, I found Launchpad very
  difficult to navigate.

 What we need is either a place where I can publish approved designs
 (probably
 the web site) or possibly a special design wiki.  At the moment, it is not
 clear to me that a wiki would work well -- the biggest problem is that
 many
 users don't fully appreciate the Bacula philosophy and how Bacula works,
 which means that it is easy to go off on a tangent.  This is not a
 criticism
 of anyone, but is meant to point out that designing new features for
 Bacula
 is very non-trivial and requires a *lot* of work and thought before any
 implementation begins.

 I suppose that the first step is for me to write up (or gather up what is
 already written) a few of the designs for ideas that are being discussed
 on
 the email lists so that you can see the direction I currently favor ...

Speaking as a minor hacker / user of the project, that would help greatly.

Speaking personally as a systems architect / developer, it practically makes me 
twitch to attempt modifications without knowing the design intentions of the 
current project managers and developers. It is counterproductive and quite 
often results in effort that will never make it to mainline. While the work in 
question is generally useful to the original party, maintaining a secondary 
patch set is a project in itself, and rarely worth the effort long term except 
in special cases of strong philosophical differences or considerable 
differences in target use. Anyone who has ever tried to work with or for 
OSS/semi-OSS projects that they do not control has probably experienced this 
at some point (I'm looking at you, Asterisk).

Currently, most design discussion is not held publicly. Without dedicating 
considerable time just to understanding what is being attempted, or to 
extracting the information from busy developers (who quite rightly see little 
return on the time investment, or do not wish to rehash old discussions), it 
is extremely difficult to get a good overall picture of anything not already 
documented (and likely long since completed), or to give genuinely useful 
input beyond minor modifications and simple bug tracing and fixes.

Unless specific questions are posted, or there are specific feature requests 
to be made, I personally try to stay on the sidelines as much as possible, as 
I do not yet have the understanding necessary to do much besides waste 
others' time.

I think Bacula is a wonderful, if not very well known, project, and vastly 
superior to any other product I have seen save for high-end commercial backup 
offerings. It scales quite well, and is adept at covering the needs of quite 
small setups through very large ones, with some minor caveats. I know it has 
saved my company and some of our customers (though they don't know it) 
considerable time, money, and headache compared to the products we used before 
(for instance, I hate ARCserve with a passion that burns in my veins to this 
day). Just the same, there are a few weak points that we have had to work 
around, and it would be very nice to see those issues handled natively and 
just work.


 Does anyone have any comments or ideas on this?

You asked =)


 Best regards,

 Kern



Blake Dunlap


Re: [Bacula-users] Volume retention with Migration

2009-09-25 Thread Blake Dunlap
I submitted a patch a while back adding an option for this (you cannot 
currently do it natively), but it had not been applied to the main code last I 
looked.

Let me know off-list if you feel like running unsupported patches and I'll send 
you a copy; alternatively, you can check the bacula-devel archives for it.

-Blake

From: Robert LeBlanc [mailto:rob...@leblancnet.us]
Sent: Friday, September 25, 2009 3:24 PM
To: bacula-users
Subject: [Bacula-users] Volume retention with Migration

I've read through the docs and can't find a definitive answer to this. We 
back-up to a Data Domain box, then migrate the jobs after some period of time 
off to tape for archive. It seems that if all the jobs are migrated off a 
volume, but the volume is not past its retention period, then the volume is not 
recycled.

What I want to do is keep the backup on the Data Domain box for 30 days and 
then migrate it off to tape. I've set the volume retention to 45 days, as our 
migration jobs have been taking a long time since they read the whole volume 
even for a few KB of data. I don't want the volume to be recycled before all 
the jobs are migrated, but I do want it recycled before the retention period 
expires if all the jobs are migrated. Any ideas will be appreciated.

Thanks,

Robert LeBlanc
Life Sciences  Undergraduate Education Computer Support
Brigham Young University
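
A starting point for finding the volumes that are safe to purge early: after a 
migration, the catalog marks the original job's Type as 'M', so a volume whose 
jobs have all been migrated has no remaining Type 'B' jobs on it. A sketch of 
the query (standard catalog schema assumed; treat it as untested):

SELECT Media.VolumeName
  FROM Media
 WHERE NOT EXISTS (
        SELECT 1
          FROM JobMedia
          JOIN Job ON Job.JobId = JobMedia.JobId
         WHERE JobMedia.MediaId = Media.MediaId
           AND Job.Type = 'B'    -- an un-migrated backup job remains
       );

Each volume returned could then be purged by hand (or from cron) with 
"purge volume=NAME" in bconsole.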


[Bacula-users] Troubleshooting question

2009-02-19 Thread Blake Dunlap
I realize this is not directly Bacula, but does anyone know what might be my 
problem (settings / hardware issues) from the below SCSI errors?

scsi1:0:0:0: Attempting to abort cmd f4003500: 0x34 0x0 0x0 0x0 0x0 0x0 0x0 0x0 
0x0 0x0
scsi1: At time of recovery, card was not paused
 Dump Card State Begins 
scsi1: Dumping Card State at program address 0x198 Mode 0x11
Card was paused
HS_MAILBOX[0x0] INTCTL[0x80] SEQINTSTAT[0x0] SAVED_MODE[0x11]
DFFSTAT[0x19] SCSISIGI[0x84] SCSIPHASE[0x0] SCSIBUS[0x0]
LASTPHASE[0x80] SCSISEQ0[0x0] SCSISEQ1[0x12] SEQCTL0[0x0]
SEQINTCTL[0x0] SEQ_FLAGS[0x0] SEQ_FLAGS2[0x0] SSTAT0[0x0]
SSTAT1[0x8] SSTAT2[0x0] SSTAT3[0x0] PERRDIAG[0xc0]
SIMODE1[0xac] LQISTAT0[0x0] LQISTAT1[0x0] LQISTAT2[0x0]
LQOSTAT0[0x0] LQOSTAT1[0x0] LQOSTAT2[0x0]

SCB Count = 4 CMDS_PENDING = 1 LASTSCB 0x CURRSCB 0x3 NEXTSCB 0x0
qinstart = 27348 qinfifonext = 27348
QINFIFO:
WAITING_TID_QUEUES:
Pending list:
  3 FIFO_USE[0x0] SCB_CONTROL[0x0] SCB_SCSIID[0x7]
Total 1
Kernel Free SCB list: 2 1 0
Sequencer Complete DMA-inprog list:
Sequencer Complete list:
Sequencer DMA-Up and Complete list:

scsi1: FIFO0 Free, LONGJMP == 0x80ff, SCB 0x0
SEQIMODE[0x3f] SEQINTSRC[0x0] DFCNTRL[0x0] DFSTATUS[0x89]
SG_CACHE_SHADOW[0x2] SG_STATE[0x0] DFFSXFRCTL[0x0]
SOFFCNT[0x0] MDFFSTAT[0x5] SHADDR = 0x00, SHCNT = 0x0
HADDR = 0x00, HCNT = 0x0 CCSGCTL[0x10]
scsi1: FIFO1 Active, LONGJMP == 0x8063, SCB 0x3
SEQIMODE[0x3f] SEQINTSRC[0x0] DFCNTRL[0x4] DFSTATUS[0x89]
SG_CACHE_SHADOW[0x3] SG_STATE[0x0] DFFSXFRCTL[0x0]
SOFFCNT[0x0] MDFFSTAT[0x14] SHADDR = 0x06, SHCNT = 0x0
HADDR = 0x00, HCNT = 0x0 CCSGCTL[0x10]
LQIN: 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 
0x0 0x0
scsi1: LQISTATE = 0x0, LQOSTATE = 0x0, OPTIONMODE = 0x52
scsi1: OS_SPACE_CNT = 0x20 MAXCMDCNT = 0x0
SIMODE0[0xc]
CCSCBCTL[0x4]
scsi1: REG0 == 0x3, SINDEX = 0x180, DINDEX = 0x102
scsi1: SCBPTR == 0x3, SCB_NEXT == 0xff00, SCB_NEXT2 == 0xff2e
CDB 3 0 0 0 20 0
STACK: 0xc9 0x0 0x0 0x0 0x0 0x0 0x0 0x0
 Dump Card State Ends 
DevQ(0:0:0): 0 waiting
DevQ(0:1:0): 0 waiting
DevQ(0:2:0): 0 waiting
DevQ(0:3:0): 0 waiting
scsi1:0:0:0: Device is active, asserting ATN
Recovery code sleeping
(scsi1:A:0:0): Recovery SCB completes
Unexpected busfree in Command phase, 1 SCBs aborted, PRGMCNT == 0x198
 Dump Card State Begins 
scsi1: Dumping Card State at program address 0x196 Mode 0x11
Card was paused
HS_MAILBOX[0x0] INTCTL[0x80] SEQINTSTAT[0x0] SAVED_MODE[0x11]
DFFSTAT[0x13] SCSISIGI[0x0] SCSIPHASE[0x0] SCSIBUS[0x0]
LASTPHASE[0x80] SCSISEQ0[0x0] SCSISEQ1[0x12] SEQCTL0[0x0]
SEQINTCTL[0x0] SEQ_FLAGS[0x0] SEQ_FLAGS2[0x0] SSTAT0[0x0]
SSTAT1[0x8] SSTAT2[0xc0] SSTAT3[0x0] PERRDIAG[0xc0]
SIMODE1[0xac] LQISTAT0[0x0] LQISTAT1[0x0] LQISTAT2[0x0]
LQOSTAT0[0x0] LQOSTAT1[0x0] LQOSTAT2[0x0]

SCB Count = 4 CMDS_PENDING = 1 LASTSCB 0x CURRSCB 0x3 NEXTSCB 0x0
qinstart = 27348 qinfifonext = 27348
QINFIFO:
WAITING_TID_QUEUES:
Pending list:
Total 0
Kernel Free SCB list: 3 2 1 0
Sequencer Complete DMA-inprog list:
Sequencer Complete list:
Sequencer DMA-Up and Complete list:

scsi1: FIFO0 Free, LONGJMP == 0x80ff, SCB 0x0
SEQIMODE[0x3f] SEQINTSRC[0x0] DFCNTRL[0x0] DFSTATUS[0x89]
SG_CACHE_SHADOW[0x2] SG_STATE[0x0] DFFSXFRCTL[0x0]
SOFFCNT[0x0] MDFFSTAT[0x5] SHADDR = 0x00, SHCNT = 0x0
HADDR = 0x00, HCNT = 0x0 CCSGCTL[0x10]
scsi1: FIFO1 Active, LONGJMP == 0x8063, SCB 0x3
SEQIMODE[0x3f] SEQINTSRC[0x0] DFCNTRL[0x4] DFSTATUS[0x89]
SG_CACHE_SHADOW[0x3] SG_STATE[0x0] DFFSXFRCTL[0x0]
SOFFCNT[0x0] MDFFSTAT[0x14] SHADDR = 0x06, SHCNT = 0x0
HADDR = 0x00, HCNT = 0x0 CCSGCTL[0x10]
LQIN: 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 
0x0 0x0
scsi1: LQISTATE = 0x0, LQOSTATE = 0x0, OPTIONMODE = 0x52
scsi1: OS_SPACE_CNT = 0x20 MAXCMDCNT = 0x0
SIMODE0[0xc]
CCSCBCTL[0x4]
scsi1: REG0 == 0x3, SINDEX = 0x180, DINDEX = 0x102
scsi1: SCBPTR == 0x3, SCB_NEXT == 0xff00, SCB_NEXT2 == 0xff2e
CDB 3 0 0 0 20 0
STACK: 0xc9 0x0 0x0 0x0 0x0 0x0 0x0 0x0
 Dump Card State Ends 
DevQ(0:0:0): 0 waiting
DevQ(0:1:0): 0 waiting
DevQ(0:2:0): 0 waiting
DevQ(0:3:0): 0 waiting
Recovery code awake
(scsi1:A:0:0): Unexpected busfree in Command phase, 1 SCBs aborted, PRGMCNT == 
0x19a
 Dump Card State Begins 
scsi1: Dumping Card State at program address 0x198 Mode 0x11
Card was paused
HS_MAILBOX[0x0] INTCTL[0x80] SEQINTSTAT[0x0] SAVED_MODE[0x11]
DFFSTAT[0x13] SCSISIGI[0x0] SCSIPHASE[0x0] SCSIBUS[0x0]
LASTPHASE[0x80] SCSISEQ0[0x0] SCSISEQ1[0x12] SEQCTL0[0x0]
SEQINTCTL[0x0] SEQ_FLAGS[0x0] SEQ_FLAGS2[0x0] SSTAT0[0x0]
SSTAT1[0x8] SSTAT2[0xc0] SSTAT3[0x0] PERRDIAG[0xc0]
SIMODE1[0xac] LQISTAT0[0x0] LQISTAT1[0x0] LQISTAT2[0x0]
LQOSTAT0[0x0] LQOSTAT1[0x0] LQOSTAT2[0x0]

SCB Count = 4 CMDS_PENDING = 1 LASTSCB 0x CURRSCB 0x3 NEXTSCB 0x0
qinstart = 27349 qinfifonext = 27349
QINFIFO:
WAITING_TID_QUEUES:
Pending list:
Total 0
Kernel Free SCB list: 3 2 1 0
Sequencer Complete DMA-inprog list:
Sequencer Complete list:
Sequencer DMA-Up and Complete list:

scsi1: FIFO0 Free, LONGJMP == 0x80ff, SCB 0x0
SEQIMODE[0x3f] 

Re: [Bacula-users] Troubleshooting question

2009-02-19 Thread Blake Dunlap
Yes, it's a Linux box. Sorry, I should have stated that.

Linux nrepbak01.isdn.net 2.6.9-42.0.3.ELsmp #1 SMP Fri Oct 6 06:21:39 CDT 2006 
i686 i686 i386 GNU/Linux

Cable has been swapped before, these errors persist through reboots and with 
different drives in the autochanger removed.

The autochanger is connected to one of the following two cards; unfortunately 
I am not sure which is which.

09:04.0 RAID bus controller: Adaptec ASC-39320(B) U320 w/HostRAID (rev 10)
Subsystem: Dell: Unknown device 0168
Flags: bus master, 66Mhz, slow devsel, latency 64, IRQ 7
I/O ports at cc00 [disabled] [size=256]
Memory at fe1fe000 (64-bit, non-prefetchable) [size=8K]
I/O ports at c800 [disabled] [size=256]
Expansion ROM at fe20 [disabled] [size=512K]
Capabilities: [dc] Power Management version 2
Capabilities: [a0] Message Signalled Interrupts: 64bit+ Queue=0/1 
Enable-
Capabilities: [94] PCI-X non-bridge device.

09:04.1 RAID bus controller: Adaptec ASC-39320(B) U320 w/HostRAID (rev 10)
Subsystem: Dell: Unknown device 0168
Flags: bus master, 66Mhz, slow devsel, latency 64, IRQ 10
I/O ports at c400 [disabled] [size=256]
Memory at fe1fc000 (64-bit, non-prefetchable) [size=8K]
I/O ports at c000 [disabled] [size=256]
Expansion ROM at fe20 [disabled] [size=512K]
Capabilities: [dc] Power Management version 2
Capabilities: [a0] Message Signalled Interrupts: 64bit+ Queue=0/1 
Enable-
Capabilities: [94] PCI-X non-bridge device.

0a:03.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID (rev 01)
Subsystem: Dell MegaRAID 518 DELL PERC 4/DC RAID Controller
Flags: bus master, 66Mhz, slow devsel, latency 32, IRQ 3
Memory at f80f (32-bit, prefetchable) [size=64K]
Expansion ROM at fe00 [disabled] [size=64K]
Capabilities: [80] Power Management version 2

Relevant /proc/scsi/scsi info here:

Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: DELL Model: PV-136T  Rev: 3.37
  Type:   Medium Changer   ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 01 Lun: 00
  Vendor: IBM  Model: ULTRIUM-TD2  Rev: 67U1
  Type:   Sequential-AccessANSI SCSI revision: 03
Host: scsi1 Channel: 00 Id: 02 Lun: 00
  Vendor: IBM  Model: ULTRIUM-TD2  Rev: 67U1
  Type:   Sequential-AccessANSI SCSI revision: 03
Host: scsi1 Channel: 00 Id: 03 Lun: 00
  Vendor: IBM  Model: ULTRIUM-TD2  Rev: 67U1
  Type:   Sequential-Access


 Hi Blake,

 This looks like a Linux box?

 Which kernel?  'uname -a'
 Which card? 'lspci -v'

 Check all the cables, make sure they're seated correctly?  Try a
 different cable?  Try powering everything off and then powering
 everything back on, starting with the outermost SCSI devices?

 Regards,
 Alex

 On Thu, 19 Feb 2009 11:03:29 -0600
 Blake Dunlap bl...@nxs.net wrote:

  I realize this is not directly Bacula, but does anyone know what might
 be my problem (settings / hardware issues) from the below SCSI errors?
 
  [...]

Re: [Bacula-users] Issue with concurrent jobs

2007-12-04 Thread Blake Dunlap
After watching it run now for a few days, it definitely appears not to be 
starting new jobs until all of the jobs running on the SD finish. Is this how 
it is supposed to work?


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Blake Dunlap
Sent: Saturday, December 01, 2007 1:20 AM
To: 'bacula-users@lists.sourceforge.net'
Subject: [Bacula-users] Issue with concurrent jobs

[...]




Re: [Bacula-users] Issue with concurrent jobs

2007-12-04 Thread Blake Dunlap
 After watching it run now for a few days, it definitely appears not to be
 starting new jobs until all of the jobs running on the SD finish. Is this
 how it is supposed to work?

No. Can you post your configs? I am using bacula-2.3.6 for the
director and storage, and several different versions for the clients,
and I do not have this problem.

John

Sure, I'll just pick a random client for brevity, and show the relevant config.

bacula-dir.conf:

Director {# Define the Bacula Director Server
  Name = nrepbak01-dir
  DIRport = 9101# where we listen for UA connections
  QueryFile = /etc/bacula/query.sql
  WorkingDirectory = /var/bacula/working
  PidDirectory = /var/run
  Maximum Concurrent Jobs = 20
  Password = REDACTED # Console password
  Messages = Daemon
  FD Connect Timeout = 10 min
}

Storage {
  Name = nrepbak-sd
  Address = 172.30.0.1# N.B. Use a fully qualified name here
  Maximum Concurrent Jobs = 5
  SDPort = 9103
  Password = REDACTED  # password for Storage daemon
  Device = Autochanger# must be same as Device in Storage 
daemon
  Media Type = LTO2  # must be same as MediaType in Storage 
daemon
  Autochanger = yes   # enable for autochanger device
}

JobDefs {
  Name = NrepNightlyFullGeneric #This is the standard weekly backup defaults 
for NREP
  Spool Data = yes
  Type = Backup
  Level = Incremental
  Schedule = WeeklyCycle
  Storage = nrepbak-sd
  Messages = Standard
#  Max Start Delay = 22 hours   ;Disabled until can override using the Schedules
  Rerun Failed Levels = yes
  Reschedule On Error = yes
  Reschedule Interval = 6 hours
  Reschedule Times = 1
  Prefer Mounted Volumes = yes
  Pool = OnsiteFull
  Incremental Backup Pool = OnsiteIncremental
  Write Bootstrap = /var/bacula/working/%c_%n.bsr
#  Priority = 6
}

Job {
  Name = filemonster
  Client = filemonster-fd
  FileSet = filemonster
  JobDefs = NrepNightlyFullGeneric
}

(all clients on that SD have same jobdef, just different client/name/filesets)

Client {
  Name = filemonster-fd
  Address = 172.30.0.25
  FDPort = 9102
  Catalog = MyCatalog
  Password = REDACTED # password for FileDaemon 2
  File Retention = 30 days# 30 days
  Job Retention = 3 years# 3 years
  AutoPrune = yes # Prune expired Jobs/Files
}


bacula-sd.conf:
Storage { # definition of myself
  Name = nrepbak01-sd
  SDPort = 9103  # Director's port
  WorkingDirectory = /var/bacula/working
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
  Heartbeat Interval = 15 seconds
}

Autochanger {
  Name = Autochanger
  Device = DriveA
  Device = DriveB
  Changer Command = /etc/bacula/mtx-changer %c %o %S %a %d
  Changer Device = /dev/sg0
}

Device {
  Name = DriveA  #
  Drive Index = 0
  Media Type = LTO2
  Archive Device = /dev/nst0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  Spool Directory = /staging/backups/
  RemovableMedia = yes;
  RandomAccess = no;
  Changer Command = /etc/bacula/mtx-changer %c %o %S %a %d
  Changer Device = /dev/sg0
  Offline On Unmount = Yes
  AutoChanger = yes
  # Enable the Alert command only if you have the mtx package loaded
  Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
}

(driveB is the same as driveA except for dev/nst1)
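
For anyone comparing against their own setup: concurrency has to be allowed at 
every level of the chain, or jobs queue behind the lowest limit. A condensed 
sketch of the relevant directives; the figures are illustrative, and note that 
Maximum Concurrent Jobs in the Client resource defaults to 1:

# bacula-dir.conf
Director { Maximum Concurrent Jobs = 20 }  # global ceiling
Storage  { Maximum Concurrent Jobs = 5 }   # per-SD limit the jobs above hit
Client   { Maximum Concurrent Jobs = 5 }   # per-FD limit, defaults to 1
# bacula-sd.conf
Storage  { Maximum Concurrent Jobs = 20 }  # SD-side ceiling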



Re: [Bacula-users] Issue with concurrent jobs

2007-12-04 Thread Blake Dunlap
Yes, concurrency has been working for some time, and it does initially start 5 
concurrent jobs, but it does not start another job when one finishes; it waits 
until all 5 jobs have finished before starting 5 more simultaneous jobs.

And yes, they are all the same priority.

On a side note, I apologize for this top-post; I have not set up this Outlook 
client properly yet.

-Original Message-
From: Michel Meyers [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 04, 2007 6:34 PM
To: John Drescher
Cc: Blake Dunlap; bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Issue with concurrent jobs


John Drescher wrote:
 On Dec 4, 2007 2:43 PM, Blake Dunlap [EMAIL PROTECTED] wrote:
  After watching it run now for a few days, it definitely appears not to be
  starting new jobs until all of the jobs running on the SD finish. Is this
  how it is supposed to work?

 No. Can you post your configs? I am using bacula-2.3.6 for the
 director and storage, and several different versions for the clients,
 and I do not have this problem.
 John
  [...]

 I do not see anything that looks wrong. Are you using spooling? I use
 spooling with most clients except a few jobs that originate on the
 director or the storage machines.

Only guesses here:
- Did you restart all the respective daemons after setting up the
concurrency?
- Are all the jobs running at the same priority level? (Jobs with
different priorities will not run concurrently.)




[Bacula-users] Issue with concurrent jobs

2007-11-30 Thread Blake Dunlap
I seem to be having some scheduling quirks on my director right now. It doesn't 
seem to be correctly queuing new jobs to the store as running jobs drop off. I 
am currently waiting on the last running job on the store to finish, to see if 
it even correctly adds more jobs once that one does drop.

Example:

Running Jobs:
 JobId Level   Name   Status
==
  5574 FullWeeklyOffsite-filemonster.2007-11-30_19.05.00 has terminated
  5580 FullWeeklyOffsite-bnawsvmsx04.bna01.isdn.net.2007-11-30_19.05.06 has 
terminated
  5587 FullWeeklyOffsite-nrepwsvfs01.bna01.isdn.net.2007-11-30_19.05.13 is 
running
  5588 FullWeeklyOffsite-bnaw2ksql01.bna01.isdn.net.2007-11-30_19.05.14 is 
waiting on max Storage jobs
  5589 FullWeeklyOffsite-rex2.2007-11-30_19.05.15 is waiting on max Storage 
jobs
  5590 FullWeeklyOffsite-web05.2007-11-30_19.05.16 is waiting on max 
Storage jobs

Running Jobs:
 JobId Level   Name   Status
==
  5587 FullWeeklyOffsite-nrepwsvfs01.bna01.isdn.net.2007-11-30_19.05.13 is 
running
  5588 FullWeeklyOffsite-bnaw2ksql01.bna01.isdn.net.2007-11-30_19.05.14 is 
waiting on max Storage jobs
  5589 FullWeeklyOffsite-rex2.2007-11-30_19.05.15 is waiting on max Storage 
jobs
  5590 FullWeeklyOffsite-web05.2007-11-30_19.05.16 is waiting on max 
Storage jobs
  5591 FullWeeklyOffsite-bnaw2ksql02.2007-11-30_19.05.17 is waiting on max 
Storage jobs
  5592 FullWeeklyOffsite-nashlog01.2007-11-30_19.05.18 is waiting on max 
Storage jobs


But the director correctly sees that 5 concurrent jobs should be allowed by the 
store, and when it first starts backups it does indeed start 5; this has worked 
fine in the past. Perhaps it is something on my end, or a bug introduced 
somewhere around the last two minor releases (I'm running 2.2.6)?

*show storage
Storage: name=nrepbak-sd address=172.30.0.1 SDport=9103 MaxJobs=5
  DeviceName=Autochanger MediaType=LTO2 StorageId=2


-Blake




Re: [Bacula-users] Bacula-SD hang using IBM Ultrium 3581 Library

2007-10-23 Thread Blake Dunlap
If you run a "status storage" when this is occurring, I believe you will see 
that the drive has been unmounted and marked offline by the user. Mounting a 
tape, or using the release command instead of unmount, should solve your 
problem if this is the case.
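
In bconsole terms, the two commands involved are (storage name is an example):

*status storage=Autochanger
*release storage=Autochanger

That is, finish with release rather than unmount; release rewinds and frees 
the drive without leaving it marked offline to the SD.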

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of jay
Sent: Tuesday, October 23, 2007 11:16 AM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Bacula-SD hang using IBM Ultrium 3581 Library

Hello,

I'm having a peculiar problem with an IBM Ultrium 3581 autochanger and the SD 
daemon.  I have to restart the SD daemon after I unmount a tape, or the next 
job hangs and never starts.

We currently have a single LTO2 drive in the library, and are running Bacula 
2.2.5 on a Redhat Enterprise Linux 4 update 5 IBM server.  I ran all the btape 
tests, including the multiple tape test for autochangers, and they all worked 
flawlessly.  Not a single problem or error reported.  I am able to use the mtx 
script and mt successfully as well.  Mtx reports all the proper tapes in the 
library, and I can mount a tape and WEOF and REWIND it using mt.  Things look 
great at this point.  Next, I ran label barcodes and it labeled all the tapes 
in the library properly.  Again, no problems.  I decided at this point to 
modify the Full OS Set backup for our environment and run some tests.  I can 
successfully run the backup the first time.  It pulls a tape from the proper 
pool inside the library, and the backup works.  But if I unmount the tape in 
bconsole and try to run the same backup again, it hangs.  Nothing happens.  I have 
waited several hours and nothing ever times out.  I finally have to cancel the 
job manually.  I discovered that by restarting bacula-sd that I can then re-run 
the job immediately without any problems.

I'm not sure what could be causing this, or how to even troubleshoot.  Things 
seem to work great during all tests and backups; Bacula just doesn't 
like it when I unmount a tape manually.

Could someone offer some advice or debugging steps I could take to figure this 
out?  Below is a copy of my bacula-sd.conf file.  I can post other config files 
if I need to.  Thanks

Jay


 bacula-sd.conf ---

Storage {
  Name = server1-sd
  SDPort = 9103
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
  SDAddress = 192.168.1.20
}

Director {
  Name = server1-dir
  Password = 1234
}

Director {
  Name = server1-mon
  Password = 1234
  Monitor = yes
}

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /tmp
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
}

Autochanger {
  Name = Autochanger
  Device = Drive-0
  Changer Device = /dev/sg1
  Changer Command = /etc/bacula/mtx-changer %c %o %S %a %d
}

Device {
  Name = Drive-0
  Drive Index = 0
  Media Type = LTO2
  Archive Device = /dev/nst0
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no
  AutoChanger = yes
}

Messages {
  Name = Standard
  director = server1-dir = all
}


[Bacula-users] Schedules and start time delay

2007-09-05 Thread Blake Dunlap
Is there any way to have the Max Start Time Delay overridable per schedule, 
other than defining multiple jobs at the moment?

The reason I ask is that backups during the week need to start no later than a 
certain time, but weekends are not a problem.
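
The multiple-job workaround looks roughly like this (names and times are 
examples; the delay directive applies only to the weekday job):

Schedule {
  Name = Weekdays
  Run = Level=Incremental mon-fri at 19:05
}
Schedule {
  Name = Weekends
  Run = Level=Full sat at 19:05
}
Job {
  Name = backup-weekday
  JobDefs = Defaults
  Schedule = Weekdays
  Max Start Delay = 4 hours   # weekday cap only
}
Job {
  Name = backup-weekend
  JobDefs = Defaults
  Schedule = Weekends         # no start-delay cap on weekends
}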



Blake Dunlap
Network Operations
ISDN-Net, Inc.
