[Bacula-users] How Label Format count volumes?

2007-11-15 Thread Alejandro Alfonso
Hello!

I'm upgrading my old Bacula installations to Bacula 2.2.x.

After a few attempts, I have a question about automatic labeling of File devices.

Using LabelMedia = yes in the Device resource and Label Format = Server- in
the Pool resource, the label suffix starts counting from the sum of existing
Volumes across ALL Pools (example: with 2 volumes in Pool_A and 3 volumes in
Pool_B, Bacula creates Server-0006, Server-0007, and so on for Pool_C).

I think that in prior versions Bacula counted the volumes ONLY in the
associated Pool (in an empty Pool, the first Volume would be Server-0001).

Should I start using something like 'Label Format =
Server-${NumVols:p/2/0/r}'?
If I have to use Bacula variables... how can I start counting at 1 in
an empty Pool?
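For reference, a minimal Pool sketch of the kind of variable-expansion label being asked about. The pool name and the 4-digit padding are illustrative assumptions; check the variable-expansion chapter of the manual for the exact ${NumVols} semantics in your Bacula version:

```
Pool {
  Name = Pool_A                               # illustrative pool name
  Pool Type = Backup
  # p/4/0/r : pad NumVols to a width of 4 with '0', right-aligned,
  # so the first volume in an empty pool would label as Server-0000
  # (whether counting starts at 0 or 1 depends on the version).
  Label Format = "Server-${NumVols:p/4/0/r}"
}
```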

Thanks in advance!
Best regards!
-- 
Alejandro Alfonso Fernandez
[EMAIL PROTECTED]
http://www.telecyl.com/


-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backup catalog not running.

2007-11-15 Thread Cedric Devillers
Cedric Devillers wrote:
 Arno Lehmann wrote:
 Hi,

 13.11.2007 12:54, Cedric Devillers wrote:
 Hello,

 I have a little problem with one of our bacula installation.

 Let me explain the setup first.

 There are two servers: the first holds the data and the storage daemon
 (meia); the second is the director/DB server (lucita). There are also two
 client-only servers (hr-accentv2, a Windows client, and darla).
 Ok. The catalog database is on lucita, right?
 
 
 That's right.
 
 
 All the jobs are running fine, except the Catalog backup. The strange
 thing here is that I have nothing in the logs about it. If I run it
 manually, it is fine.

 Director and storage version: 1.38.11 (can't upgrade right now).
 You should plan for that, though :-)
 
 
 It is planned, yes; I have to backport the packages :)
 
 
 I suppose I have messed something up with scheduling or concurrency, but I
 can't find what.
 I hope I can...

 ...
 Here is the relevant part of my config :

 ### Jobs definitions :

 JobDefs {
   Name = DefaultJob
   Type = Backup
   Level = Incremental
   Client = lucita-fd
   FileSet = Full Set
   Schedule = WeeklyCycle
   Storage = meia-sd
   Messages = Standard
   Pool = Default
   Priority = 10
 }

 JobDefs {
   Name = Daily
   Type = Backup
   Level = Differential
   Client = meia-fd
   FileSet = Full Set
   Schedule = DailyCycle
   Storage = meia-sd
   Messages = Standard
   Pool = Default        # overridden by the schedule config, but needed to start Bacula
   Max Wait Time = 1 hours
   Max Start Delay = 4 hours
   RunBeforeJob = /etc/bacula/before.sh
   Priority = 10
 }

 JobDefs {
   Name = Weekly
   Type = Backup
   Level = Full
   Client = meia-fd
   FileSet = Full Set
   Schedule = WeeklyCycle
   Storage = meia-sd
   Messages = Standard
   Pool = Default        # overridden by the schedule config, but needed to start Bacula
   Max Wait Time = 1 hours
   Max Start Delay = 4 hours
   RunBeforeJob = /etc/bacula/before.sh
   Priority = 10
 }



 Job {
   Name = Daily-meia
   JobDefs = Daily
   Write Bootstrap = /var/bacula/incremental.bsr
 }

 Job {
   Name = Weekly-meia
   JobDefs = Weekly
   Write Bootstrap = /var/bacula/full.bsr
 }

 Job {
   Name = DARLABackup
   JobDefs = Weekly
   Client = darla-fd
   FileSet=DARLA
   Schedule = DARLACycle
   Max Wait Time = 1 hours
   Max Start Delay = 4 hours
   RunBeforeJob = /etc/bacula/before.sh
   Write Bootstrap = /var/bacula/darla.bsr
 }


 Job {
   Name = HRBackup
   Client = hr-accentv2-fd
   JobDefs = Daily
   Level = Full
   FileSet = HRSet
   Schedule = HRSchedule
   Max Wait Time = 1 hours
   Max Start Delay = 4 hours
   Write Bootstrap = /var/bacula/hraccent.bsr
   Priority = 11   # run after main backup
 }

 #
 # Backup the catalog database (after the nightly save)
 Job {
   Name = BackupCatalog
   JobDefs = Weekly
   Level = Full
   FileSet=Catalog
   Client = lucita-fd
 Ok. This is looking right.

   Schedule = WeeklyCycleAfterBackup
   # This creates an ASCII copy of the catalog
   RunBeforeJob = /etc/bacula/scripts/make_catalog_backup bacula bacula
 Ya2AhGho
   RunBeforeJob = /etc/bacula/before.sh
   # This deletes the copy of the catalog
   RunAfterJob  = /etc/bacula/scripts/delete_catalog_backup
   RunAfterJob = /etc/bacula/after.sh
   RunAfterJob = ssh -i /etc/bacula/Bacula_key [EMAIL PROTECTED]
 I *believe* that 1.38 could only handle one Run After Job and Run
 Before Job option per job. See below how to verify this.

   Write Bootstrap = /var/lib/bacula/BackupCatalog.bsr
   Priority = 11   # run after main backup
 }
 In bconsole, use the command show jobs=BackupCatalog. Search for the
 lines with the Run before/after Job commands.

 If you only see one each, you'll have to put the commands you need to
 execute into one script, and then reference that script.
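The single-script approach above could look something like this sketch. The wrapper script names are illustrative assumptions; each one would simply run the original commands in order and exit non-zero on the first failure, so Bacula fails the job rather than backing up a stale catalog dump:

```
Job {
  Name = BackupCatalog
  ...
  # One wrapper per hook: catalog_before.sh would run
  # make_catalog_backup and before.sh in sequence; catalog_after.sh
  # would run delete_catalog_backup, after.sh, and the ssh copy.
  RunBeforeJob = /etc/bacula/scripts/catalog_before.sh
  RunAfterJob  = /etc/bacula/scripts/catalog_after.sh
}
```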
 
 You made the point here: multiple RunBefore and RunAfter directives are
 not supported in this version.
 
 I hope it is supported in 2.2.5, because all my other setups use this :)
 (I checked, of course; it is supported).
 
 I've made the changes and will see if tonight's scheduled backup runs fine.
 
 But one thing I don't understand is why I don't have anything about this
 job in my logs, and also the fact that manually running the job was
 working fine. Of course, the various RunBefore and RunAfter scripts
 were not running, but the job was executed without issuing any errors.
 
 I'm wondering if there is not a problem with my Max Wait Time and Max
 Start Delay directives. But as far as I understand them, they should be good.
 


Ok, the RunBefore/RunAfter scripts are working fine now.

But I still have the exact same problem as before. The job is shown as
canceled in bconsole, but there is absolutely nothing in the logs about it.

I have turned trace on and set debug level 200 (maybe a little high?)
and I'll see if I can catch some information.


 By the way: If you posted the real password to the catalog above
 you'll want to change that soon :-)
 
 
 I've noticed 

[Bacula-users] SunStorageTek C2

2007-11-15 Thread Kazon, Krzysztof Maciej
Hi everybody,

Has anybody had experience using the following setup?

Autoloader:
http://www.sun.com/storagetek/tape_storage/tape_libraries/c2/

OS: Solaris 10

Tape: LTO3

 

Does anybody know whether this autoloader will work? I need to create a
backup of a file system spanning three tapes.

Any idea?



[Bacula-users] bconsole Director authorization problem and tapes put into read error (LTO4)

2007-11-15 Thread Win Htin
Hi Folks,

Just as I thought I'm ready for prime time, I'm back to square one with TWO
unexpected issues that popped up.

# bconsole Director authorization problem:

After upgrading to 2.2.6 I noticed I get failures from time to time when I
issue the bconsole command. Following is what I see from my CLI:

Connecting to Director nfsserver:9101
Director authorization problem.
Most likely the passwords do not agree.
If you are using TLS, there may have been a certificate validation error
during the TLS handshake.

Following is what I got in my email alert:

15-Nov 07:53 nfsserver-dir: ERROR in authenticate.c:380 Unable to
authenticate console *UserAgent* at client:192.168.15.34:36131.

The IP 192.168.15.34 is where I have my Bacula DIR and SD services running.

After Bacula dir, fd and sd daemons are restarted, bconsole will work a
number of times and then all of a sudden the connection fails. Please note I
didn't modify ANY entries in any of the DIR, FD or SD conf files.

# Tapes put into read error:

The backups will work perfectly fine a number of times on the same
tape/Volume, from multiple clients and across multiple Bacula daemon
start/stops, and then all of a sudden the working tape/Volume is slammed
into Error status.
Pool: AllDifferentials
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName | VolStatus | Enabled | VolBytes       | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|       7 | 400AAL     | Error     |       1 |    513,321,984 |        5 |   31,536,000 |       1 |    1 |         1 | LTO4      | 2007-11-08 23:04:13 |
|       8 | 401AAL     | Error     |       1 | 20,237,478,912 |       51 |   31,536,000 |       1 |    2 |         1 | LTO4      | 2007-11-08 23:02:10 |
|      12 | 402AAL     | Error     |       1 | 33,390,572,544 |       79 |   31,536,000 |       1 |    3 |         1 | LTO4      | 2007-11-12 08:18:44 |
|      13 | 403AAL     | Error     |       1 | 51,140,210,688 |      115 |   31,536,000 |       1 |    4 |         1 | LTO4      | 2007-11-12 22:12:35 |
|      14 | 404AAL     | Error     |       1 | 83,328,408,576 |      189 |   31,536,000 |       1 |    5 |         1 | LTO4      | 2007-11-13 22:23:25 |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------------------+

Anyone out there have LTO4 working properly? I'm getting pretty frustrated
and I am on the verge of calling it a day. Any help is much appreciated.

Cheers,

Win


[Bacula-users] full/incremental and predefined changer slots?

2007-11-15 Thread Kristian Rink

Folks;

I'm currently reorganizing the tape backup of some of our systems using
Bacula, and I have a situation like this:

- Backups go to tapes, full backup needs to span across two tapes,
incrementals fit on one.

- We use a Tandberg LTO-2 tape changer; the tapes for the weekend full
backup are always located in slots 1 and 2, those for the rest of the
week always in slots 3..7. There are two weeks in rotation.


So far my idea has been to create two different tape pools (Full and
Incremental), adding the right tapes (and the appropriate slots) to the
right pool and hoping for the best. What I can't quite get straight yet
is how to choose between different pools and manage the
full/incremental setup correctly:

- Can I have a job defined as Level = Incremental without ever running a
Full dump for that job? Or is every job forced into a Full the first time
it is run? Can I somehow link two job definitions, making one
responsible for Full and one for Incremental?

- Can I set up one job but choose a different pool to be used based upon
the schedule in use?

- Is there a smarter (easier) way to set up a scenario like that?
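For what it's worth, the pool-per-schedule idea in the second question can be expressed with per-Run overrides in a Schedule resource. A minimal sketch, where the pool names and run times are illustrative assumptions:

```
Schedule {
  Name = WeeklyCycle
  # Each Run line may override Level and Pool for the same Job,
  # so one job definition can write fulls and incrementals to
  # different pools depending on which Run line fires.
  Run = Level=Full Pool=FullPool 1st-5th sat at 23:05
  Run = Level=Incremental Pool=IncPool mon-fri at 23:05
}
```

The Job resource then keeps a single definition; the pool actually used follows the schedule.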

Thanks in advance for any responses (given that these questions are
possibly rather basic/stupid), best regards.
Kristian


-- 
Dipl.-Ing.(BA) Kristian Rink * Software- und Systemingenieur
planConnect GmbH  * Könneritzstr. 33 * 01067 Dresden
fon: 0351 4657770 * cell: 0176 2447 2771 * mail: [EMAIL PROTECTED]
Amtsgericht Dresden HRB: 20 015 * St.-Nr. FA DD III 203 / 116 / 04105
Geschäftsführer: Stefan Voß, Karl Stierstorfer




Re: [Bacula-users] bconsole Director authorization problem and tapes put into read error (LTO4)

2007-11-15 Thread John Drescher
 Just as I thought I'm ready for prime time, I'm back to square one with TWO
 unexpected issues that popped up.

 #  bconsole Director authorization problem:
 After upgrading to 2.2.6 I noticed I get failures time to time when I issue
 the bconsole command. Following is what I see from my CLI.
 Connecting to Director nfsserver:9101
 Director authorization problem.
 Most likely the passwords do not agree.
 If you are using TLS, there may have been a certificate validation error
 during the TLS handshake.
 Following is what I got in my email alert.
 15-Nov 07:53 nfsserver-dir: ERROR in authenticate.c:380 Unable to
 authenticate console *UserAgent* at client: 192.168.15.34:36131.
 The ip 192.168.15.34 is where I have my Bacula DIR  SD services running.

 After Bacula dir, fd and sd daemons are restarted, bconsole will work a
 number of times and then all of a sudden the connection fails. Please note I
 didn't modify ANY entries in any of the DIR, FD or SD conf files.

What version of bacula did you upgrade from?

Are you sure you do not have network communication problems (faulty
NIC/switch)? I ask because both problems can be caused by a bad
network connection.

 # Tapes put into read error:
 The backups will work perfectly fine for a number of times on the same
 tape/Volume from multiple clients through multiple Bacula daemon start/stops
 and then all of a sudden slams the working tape/Volume into Error.

You need to post the log or console messages from when this happens;
otherwise it will be impossible to help.

John



[Bacula-users] Fwd: bconsole Director authorization problem and tapes put into read error (LTO4)

2007-11-15 Thread John Drescher
-- Forwarded message --
From: John Drescher [EMAIL PROTECTED]
Date: Nov 15, 2007 11:57 AM
Subject: Re: [Bacula-users] bconsole Director authorization problem
and tapes put into read error (LTO4)
To: Win Htin [EMAIL PROTECTED]


On Nov 15, 2007 11:43 AM, Win Htin [EMAIL PROTECTED] wrote:
 Hi John,

 Thanks for the response.

 1. I upgraded from 2.2.5.

 2. I have TWO gigabit ethernet interfaces using Linux bonding. Found no
 network related error messages in either the log files or from the
 monitoring software. The hardware is enterprise class Blades and less than 6
 months old.

 3. For the bconsole connection issue, what I posted was everything I got
 through Bacula daemon message. For the tape error message, the rest are
 pretty irrelevant but am reposting it for your perusal.
  14-Nov 10:08 nfsserver-dir JobId 74: Start Backup JobId 74,
 Job=NFSSERVER.2007-11-14_10.08.03
 14-Nov 10:08 nfsserver-dir JobId 74: Using Device LTO4_1
 14-Nov 10:08 nfsserver-sd JobId 74: 3301 Issuing autochanger loaded? drive
 0 command.
 14-Nov 10:08 nfsserver-sd JobId 74: 3302 Autochanger loaded? drive 0,
 result is Slot 5.
 14-Nov 10:08 nfsserver-sd JobId 74: Volume 404AAL previously written,
 moving to end of data.
 14-Nov 10:11 nfsserver-sd JobId 74: Error: Unable to position to end of data
 on device LTO4_1 (/dev/IBMtape0): ERR= dev.c:1326 read error on LTO4_1
 (/dev/IBMtape0). ERR=Input/output error.

This looks like the same problem you had a few weeks back when running
the btape tests. Do you still have the same bacula-sd.conf file that
you used when the problem appeared to be fixed? Specifically, do you
have Two EOF set to yes?
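For context, the directive in question lives in the Device resource of bacula-sd.conf. A minimal sketch, with the device name and archive path taken from the log above and other directives omitted; whether this setting is actually needed for this drive is exactly what is being asked:

```
Device {
  Name = LTO4_1
  Archive Device = /dev/IBMtape0
  Media Type = LTO4
  # Write two EOF marks at end of data; some drives reportedly
  # need this for Bacula to reposition to end of data correctly.
  Two EOF = yes
}
```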

John



-- 
John M. Drescher



Re: [Bacula-users] 2.2.6 rpm release

2007-11-15 Thread Jeff Dickens
Any idea when the source rpm will be back?  I saw that the bad one had 
been taken down from SourceForge, but there has been no word about its replacement.

Scott Barninger wrote:
 Hello,

 bacula-2.2.6 has been released to sourceforge. This release should
 address the RedHat (and clone) issues discussed recently as well as
 introducing support for SuSE 10.3.

 The following issues have been corrected:

 * Sun Nov 11 2007 
 - add new files required by rescue makefile
 * Sat Nov 10 2007 
 - add su103 build target
 * Sun Nov 04 2007 
 - fix dist defines for rhel5 and clones
 - fix rhel broken 64 bit QT4 paths
 - rh qt4 packages don't provide qt so fix that too
 * Mon Oct 29 2007 
 - correct ownership when creating sqlite db file in post script

 The full release notes follow.

 Bacula-2.2 RPM Release Notes
 11 November 2007
 D. Scott Barninger
 barninger at fairfieldcomputers dot com

 Release 2.2.6-1

 This release incorporates a number of significant changes since 1.38.

 These release notes refer to the rpm packaging only.
 Please refer to the release notes and changelog in the
 tarball or on sourceforge for complete information on all changes.


 *
 * Miscellaneous *
 *

 Added missing files now required by the rescue configure script.

 Corrected dist target for rhel5.

 Corrected QT dependency name for RedHat flavors.

 Added fix for RHEL5 64 bit QT paths.

 Added build target for su103.

 Added build targets for rhel5 and clones.

 Build target added for Scientific Linux (RHEL clone) thanks to Jon
 Peatfield.

 Merged Otto Mueller's patch changing some directory locations for FHS
 compatibility
 but retaining the script directory as /etc/bacula.

 gnome-console and wxconsole have been renamed to bgnome-console and
 bwx-console 
 respectively.

 ***
 * bat (Bacula Admin Tool) *
 ***

 A new subpackage has been added for the new bat QT-based GUI
 administration tool.
 This requires QT >= 4.2

 --define build_bat 1

 ***
 * Gnome console dropped on some platforms *
 ***

 The gconsole package has been dropped on older gnome platforms (gtk+ <
 2.4).
 Changes in the gnome api and code produced by Glade no longer build. 
 In addition, the tray monitor now fails to build on < 2.10 platforms.
 gconsole is now not built on the following platforms:
 rh7, rh8, rh9, rhel3 (and clones), fc1, fc3, fc4, mdk, su9, su10

 
 * Third party packager support *
 

 A new build tag has been added to allow third-party packagers to replace
 the information in the Packager: tag in the rpm. Invoking
 --define contrib_packager Your Name [EMAIL PROTECTED]
 will substitute your packager identification.

 Users interested in building packages for platforms not normally
 published
 should examine the platforms/contrib-rpm directory in the source code.

 ***
 * Option to build client only *
 ***

 A build define has been added to build the client rpm package only. This
 turns off all database builds, the gnome console, and wxconsole.
 --define build_client_only 1

 
 * Python support added *
 

 Support for compiling with Python scripting has been added. This is off 
 by default but can be enabled with:
 --define build_python 1
 Released binary packages are built with python support.

 ***
 * Database update *
 ***

 The 2.x release requires an update to the bacula database structure
 from version 9 to version 10. A pre-install routine has been added to
 check for databases older than 9. In that event the install will exit
 with an error message indicating that the database must be updated to
 version 9 before installing this upgrade. Scripts for updating older
 database formats are available for download in the bacula-updatedb
 rpm package. In the event a version 9 database is detected a
 post-install 
 routine will update the database after creating a backup file in the 
 bacula working directory.

 **
 * Platform Notes *
 **

 The spec file currently supports building on the following platforms:

 # RedHat builds
 --define build_rh7 1
 --define build_rh8 1
 --define build_rh9 1

 # Fedora Core build
 --define build_fc1 1
 --define build_fc3 1
 --define build_fc4 1
 --define build_fc5 1
 --define build_fc6 1

 # Whitebox Enterprise build
 --define build_wb3 1

 # RedHat Enterprise builds
 --define build_rhel3 1
 --define build_rhel4 1
 --define build_rhel5 1

 # CentOS build
 --define build_centos3 1
 --define build_centos4 1
 --define build_centos5 1

 # Scientific Linux build
 --define build_sl3 1
 --define build_sl4 1
 --define build_sl5 1

 # SuSE build
 --define build_su9 1
 --define build_su10 1
 --define build_su102 1
 --define build_su103 1

 # Mandrake 10.x build
 

Re: [Bacula-users] Fwd: [Bacula-devel] Broken 2.2.6 source rpm in Sourceforge

2007-11-15 Thread Scott Barninger
Hello,

Sorry for the delay but I have been out of town until this evening. Not
sure what went wrong with the release but I have re-uploaded the srpm
now. It should be available now and the file size looks correct.

On Tue, 2007-11-13 at 14:10 +0100, Kern Sibbald wrote:
 Hello Scott,
 
 Just to let you know, I downloaded the 2.2.6-1.src.rpm package, and it does 
 look like it is truncated since it is less than half the size of the 2.2.5 
 release, and the signature is NOT OK.
 
 As a consequence, I have deleted the package from Source Forge to avoid more 
 users from having problems.  I have a copy of the broken package here, if 
 by any chance you should need it.
 
 Best regards,
 
 Kern
 
 --  Forwarded Message  --
 
 Subject: [Bacula-devel] Broken 2.2.6 source rpm in Sourceforge
 Date: Tuesday 13 November 2007 13:51
 From: Timo Neuvonen [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 
   bacula-2.2.6 has been released to sourceforge. This release should
   address the RedHat (and clone) issues discussed recently as well as
   introducing support for SuSE 10.3.
 
  Could someone pls. check if the bacula-2.2.6-1.src.rpm is ok?
  Size shown by Sourceforge is less than 8 megs, which is less than
  one half of 2.2.5 src rpm package...
 
 For me, the package on Sourceforge definitely seems to be broken. I've tried
 to download it several times; every time I get only appr. 8 megs, which
 does not build to binaries... someone else mentioned this in the user list
 too.
 
 Regards,
 Timo
 
 
 
 
 ---
 




Re: [Bacula-users] Fwd: [Bacula-devel] Broken 2.2.6 source rpm in Sourceforge

2007-11-15 Thread Jeff Dickens

Thanks.

Scott Barninger wrote:

Hello,

Sorry for the delay but I have been out of town until this evening. Not
sure what went wrong with the release but I have re-uploaded the srpm
now. It should be available now and the file size looks correct.

On Tue, 2007-11-13 at 14:10 +0100, Kern Sibbald wrote:
  

Hello Scott,

Just to let you know, I downloaded the 2.2.6-1.src.rpm package, and it does 
look like it is truncated since it is less than half the size of the 2.2.5 
release, and the signature is NOT OK.


As a consequence, I have deleted the package from Source Forge to avoid more 
users from having problems.  I have a copy of the broken package here, if 
by any chance you should need it.


Best regards,

Kern

--  Forwarded Message  --

Subject: [Bacula-devel] Broken 2.2.6 source rpm in Sourceforge
Date: Tuesday 13 November 2007 13:51
From: Timo Neuvonen [EMAIL PROTECTED]
To: [EMAIL PROTECTED]



bacula-2.2.6 has been released to sourceforge. This release should
address the RedHat (and clone) issues discussed recently as well as
introducing support for SuSE 10.3.


Could someone pls. check if the bacula-2.2.6-1.src.rpm is ok?
Size shown by Sourceforge is less than 8 megs, which is less than
one half of 2.2.5 src rpm package...
  

For me, the package in Sourceforge definetely seems to be broken. I've tried
to download it several times, every time I get only the appr. 8 megs, which
does not build to binaries... someone else mentioned this in the user list
too.

Regards,
Timo




---




  


Re: [Bacula-users] Fwd: [Bacula-devel] Broken 2.2.6 source rpm in Sourceforge

2007-11-15 Thread Scott Barninger
No problem. Good thing I don't delete that trusty srpm until I build the
next release :-) I hope all is well now.

Coming Event Highlights:

Per me: I will be switching the sqlite package to sqlite3.
Per Alan Brown: I will be adding a build switch for the mtx package
(default off).

Hopefully this will be done with the next major release.

On Thu, 2007-11-15 at 17:24 -0500, Jeff Dickens wrote:
 Thanks.
 
 Scott Barninger wrote: 
  Hello,
  
  Sorry for the delay but I have been out of town until this evening. Not
  sure what went wrong with the release but I have re-uploaded the srpm
  now. It should be available now and the file size looks correct.
  
  On Tue, 2007-11-13 at 14:10 +0100, Kern Sibbald wrote:

   Hello Scott,
   
   Just to let you know, I downloaded the 2.2.6-1.src.rpm package, and it 
   does 
   look like it is truncated since it is less than half the size of the 
   2.2.5 
   release, and the signature is NOT OK.
   
   As a consequence, I have deleted the package from Source Forge to avoid 
   more 
   users from having problems.  I have a copy of the broken package here, 
   if 
   by any chance you should need it.
   
   Best regards,
   
   Kern
   
   --  Forwarded Message  --
   
   Subject: [Bacula-devel] Broken 2.2.6 source rpm in Sourceforge
   Date: Tuesday 13 November 2007 13:51
   From: Timo Neuvonen [EMAIL PROTECTED]
   To: [EMAIL PROTECTED]
   
   
 bacula-2.2.6 has been released to sourceforge. This release should
 address the RedHat (and clone) issues discussed recently as well as
 introducing support for SuSE 10.3.
 
Could someone pls. check if the bacula-2.2.6-1.src.rpm is ok?
Size shown by Sourceforge is less than 8 megs, which is less than
one half of 2.2.5 src rpm package...
  
   For me, the package on Sourceforge definitely seems to be broken. I've
   tried to download it several times; every time I get only appr. 8 megs,
   which does not build to binaries... someone else mentioned this in the
   user list too.
   
   Regards,
   Timo
   
   
   
   
   ---
   
   
  





Re: [Bacula-users] backup issue with batch table

2007-11-15 Thread Jason Martin
MySQL has its own size limits on files. See:
http://wiki.bacula.org/doku.php?id=faq#why_does_mysql_say_my_file_table_is_full
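Beyond the per-table file limits covered in that FAQ entry, it may also be worth checking the in-memory table ceilings, since the batch table in 2.2.x is created as a temporary table (assumption: MySQL's temporary-table size limits apply here; the values below are purely illustrative, not a recommendation):

```
# my.cnf fragment - illustrative values only.
# If the batch table is being materialized in memory, these two
# ceilings bound its size before "table is full" is raised; if it
# lands on disk, the MyISAM limits in the FAQ entry apply instead.
[mysqld]
max_heap_table_size = 512M
tmp_table_size      = 512M
```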

-Jason Martin

On Thu, Nov 15, 2007 at 05:44:44PM -0600, Nick Jones wrote:
 Hello,
 
 I was hoping someone could help me identify what is going wrong with
 my backup job?
 
 I recently updated from 2.0.3 to 2.2.5 so that building of directory
 trees for restores would be faster (and I am quite pleased).  After I
 updated, everything seemed fine; I was able to run several incremental
 backups of the same identical job, except on a different / identical
 tapeset that is now offsite.
 
 I am trying to create a new backup on the secondary set of tapes and I
 keep running into this error after a day and a half: Table 'batch' is
 full. I'm using a large my.cnf config.
 
 Another error is: Attribute create error. sql_find.c:333 Request for
 Volume item 1 greater than max 0 or less than 1. I may have read
 somewhere that this is caused by a disk-space issue, so I suspect I'm
 running out of space.
 
 The fileset is roughly 27,000,000 files consuming 2.5 TB of
 space.  I have 16GB free on the root partition where mysql lives;
 however, the bacula sql tables and working directory are symbolically
 linked to a RAID with 80GB of free space.  I had hoped this would be
 enough.  Is it not?
 
 Thanks for any hints on identifying the problem.
 
 Nick
 
 
 
 -- Forwarded message --
 From: Bacula [EMAIL PROTECTED]
 Date: Nov 15, 2007 5:05 PM
 Subject: Bacula: Backup Fatal Error of lcn-fd Full
 To: [EMAIL PROTECTED]
 
 
 14-Nov 09:29 lcn-dir JobId 375: No prior Full backup Job record found.
 14-Nov 09:29 lcn-dir JobId 375: No prior or suitable Full backup found
 in catalog. Doing FULL backup.
 14-Nov 09:29 lcn-dir JobId 375: Start Backup JobId 375,
 Job=Job1.2007-11-14_09.29.05
 14-Nov 09:29 lcn-dir JobId 375: Recycled current volume tape1
 14-Nov 09:29 lcn-dir JobId 375: Using Device Ultrium
[Bacula-users] backup issue with batch table

2007-11-15 Thread Nick Jones
Hello,

I was hoping someone could help me identify what is going wrong with
my backup job?

I recently updated from 2.0.3 to 2.2.5 so that building directory
trees for restores would be faster (and I am quite pleased).  After I
updated, everything seemed fine: I was able to run several incremental
backups of the same job, just on a different but identical tape set
that is now offsite.

I am trying to create a new backup on the secondary set of tapes and I
keep running into this error after a day and a half: "Table 'batch' is
full".  I'm using a large my.cnf configuration.

Another error is: "Attribute create error. sql_find.c:333 Request for
Volume item 1 greater than max 0 or less than 1".  I may have read
somewhere that this is caused by a disk-space issue, so I suspect I'm
running out of space.
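To see whether a table is actually hitting MySQL's per-table cap rather than the filesystem, something like the following should help (a sketch, assuming the stock 'bacula' database; the batch table is temporary and per-job, so the persistent File table is checked instead — Data_length vs. Max_data_length shows how close it is to the limit):

```sql
-- Compare current size to the MyISAM maximum for the File table
SHOW TABLE STATUS LIKE 'File';
```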

The fileset is roughly 27,000,000 (27 million) files consuming 2.5 TB of
space.  I have 16 GB free on the root partition where MySQL lives;
however, the Bacula SQL tables and working directory are symbolically
linked to a RAID with 80 GB of free space.  I had hoped this would be
enough.  Is it not?
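A back-of-the-envelope estimate suggests disk space may not be the problem at all: with 27 million files, the attribute rows alone land near MyISAM's old 4GB default table cap. The ~150 bytes/row figure below is an assumption (typical for path + filename + attribute rows), not a measured value:

```python
# Rough size estimate for the per-job 'batch' table with 27M files.
files = 27_000_000
bytes_per_row = 150  # assumed average row length (path + name + attrs)
table_bytes = files * bytes_per_row
print(table_bytes / 2**30)  # about 3.77 GiB, close to MyISAM's 4GB default cap
```

If the estimate is in the right ballpark, the fix is raising the MyISAM table size limit rather than freeing disk space.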

Thanks for any hints on identifying the problem.

Nick



-- Forwarded message --
From: Bacula [EMAIL PROTECTED]
Date: Nov 15, 2007 5:05 PM
Subject: Bacula: Backup Fatal Error of lcn-fd Full
To: [EMAIL PROTECTED]


14-Nov 09:29 lcn-dir JobId 375: No prior Full backup Job record found.
14-Nov 09:29 lcn-dir JobId 375: No prior or suitable Full backup found
in catalog. Doing FULL backup.
14-Nov 09:29 lcn-dir JobId 375: Start Backup JobId 375,
Job=Job1.2007-11-14_09.29.05
14-Nov 09:29 lcn-dir JobId 375: Recycled current volume tape1
14-Nov 09:29 lcn-dir JobId 375: Using Device Ultrium
14-Nov 09:29 lcn-sd JobId 375: 3301 Issuing autochanger loaded? drive
0 command.
14-Nov 09:29 lcn-sd JobId 375: 3302 Autochanger loaded? drive 0,
result is Slot 1.
14-Nov 09:29 lcn-sd JobId 375: Recycled volume tape1 on device
Ultrium (/dev/tape), all previous data lost.
14-Nov 23:46 lcn-sd JobId 375: End of Volume tape1 at 742:11802 on
device Ultrium (/dev/tape). Write of 64512 bytes got -1.
14-Nov 23:46 lcn-sd JobId 375: Re-read of last block succeeded.
14-Nov 23:46 lcn-sd JobId 375: End of medium on Volume tape1
Bytes=742,713,882,624 Blocks=11,512,801 at 14-Nov-2007 23:46.
14-Nov 23:46 lcn-dir JobId 375: Recycled volume tape4
14-Nov 23:46 lcn-sd JobId 375: 3307 Issuing autochanger unload slot
1, drive 0 command.
14-Nov 23:47 lcn-sd JobId 375: 3304 Issuing autochanger load slot 4,
drive 0 command.
14-Nov 23:47 lcn-sd JobId 375: 3305 Autochanger load slot 4, drive
0, status is OK.
14-Nov 23:47 lcn-sd JobId 375: 3301 Issuing autochanger loaded? drive
0 command.
14-Nov 23:47 lcn-sd JobId 375: 3302 Autochanger loaded? drive 0,
result is Slot 4.
14-Nov 23:47 lcn-sd JobId 375: Recycled volume tape4 on device
Ultrium (/dev/tape), all previous data lost.
14-Nov 23:47 lcn-sd JobId 375: New volume tape4 mounted on device
Ultrium (/dev/tape) at 14-Nov-2007 23:47.
15-Nov 15:53 lcn-sd JobId 375: End of Volume tape4 at 808:12641 on
device Ultrium (/dev/tape). Write of 64512 bytes got -1.
15-Nov 15:53 lcn-sd JobId 375: Re-read of last block succeeded.
15-Nov 15:53 lcn-sd JobId 375: End of medium on Volume tape4
Bytes=808,763,784,192 Blocks=12,536,640 at 15-Nov-2007 15:53.
15-Nov 15:53 lcn-dir JobId 375: Recycled volume tape3
15-Nov 15:53 lcn-sd JobId 375: 3307 Issuing autochanger unload slot
4, drive 0 command.
15-Nov 15:54 lcn-sd JobId 375: 3304 Issuing autochanger load slot 3,
drive 0 command.
15-Nov 15:54 lcn-sd JobId 375: 3305 Autochanger load slot 3, drive
0, status is OK.
15-Nov 15:54 lcn-sd JobId 375: 3301 Issuing autochanger loaded? drive
0 command.
15-Nov 15:54 lcn-sd JobId 375: 3302 Autochanger loaded? drive 0,
result is Slot 3.
15-Nov 15:54 lcn-sd JobId 375: Recycled volume tape3 on device
Ultrium (/dev/tape), all previous data lost.
15-Nov 15:54 lcn-sd JobId 375: New volume tape3 mounted on device
Ultrium (/dev/tape) at 15-Nov-2007 15:54.
15-Nov 17:04 lcn-dir JobId 375: Fatal error: sql_create.c:732
sql_create.c:732 insert INSERT INTO batch VALUES
(20976597,375,'/mnt/right/ppg/dropbox/for_jessica.dir/lesion_vol.dir/2117/','2117_lesionnot_fruit_004.flt.gz','gg
Bgn/f IGw B Ru U A FO BAA I BHOj5q BE7Ot4 BG2Gxr A A
C','xWoEMzfHWuvIxoZu2vxP0A') failed:
The table 'batch' is full
15-Nov 17:04 lcn-dir JobId 375: sql_create.c:732 INSERT INTO batch
VALUES 
(20976597,375,'/mnt/right/ppg/dropbox/for_jessica.dir/lesion_vol.dir/2117/','2117_lesionnot_fruit_004.flt.gz','gg
Bgn/f IGw B Ru U A FO BAA I BHOj5q BE7Ot4 BG2Gxr A A
C','xWoEMzfHWuvIxoZu2vxP0A')
15-Nov 17:04 lcn-dir JobId 375: Fatal error: catreq.c:478 Attribute
create error. sql_find.c:333 Request for Volume item 1 greater than
max 0 or less than 1
15-Nov 17:04 lcn-sd JobId 375: Job Job1.2007-11-14_09.29.05 marked to
be canceled.
15-Nov 17:04 lcn-sd JobId 375: Job write elapsed time = 31:31:59,
Transfer rate = 14.30 M bytes/second
15-Nov 17:04 lcn-sd JobId 375: Job Job1.2007-11-14_09.29.05 

[Bacula-users] Volume Size Mismatch: please, help me understand this

2007-11-15 Thread Martin Schmid
Hello Everybody

I've been running Bacula for quite a while now, with quite a high
volume, on NAS devices. Most of it runs smoothly, but I do not
understand _that_ volume recycling issue:

The file volumes are generally limited to 2G in size.
From previous backups, the volume Vol_0058 has been filled up to
136660 bytes.
Now, the volume is recycled:

--snip--
14-Nov 23:06 tormore-dir: There are no more Jobs associated with Volume 
Vol_0058. Marking it purged.
14-Nov 23:06 tormore-dir: All records pruned from Volume Vol_0058; 
marking it Purged
14-Nov 23:06 tormore-dir: Recycled volume Vol_0058
14-Nov 23:06 tormore-sd: Recycled volume Vol_0058 on device 
FileStorage (/mnt/storage/bacula), all previous data lost.
--snip--

I expected the volume to be reused with its size left as it is. The job
runs through and terminates with 522,632,654 bytes written to that
volume, which is still 136660 bytes on disk.

--snip--
14-Nov 23:11 tormore-sd: Job write elapsed time = 00:04:50, Transfer 
rate = 1.800 M bytes/second
14-Nov 23:11 tormore-dir: Bacula tormore-dir 2.2.4 (14Sep07): 
14-Nov-2007 23:11:36
  Build OS:   i686-pc-linux-gnu debian 4.0
  JobId:  929
  Job:srv_to_File.2007-11-14_23.05.02
  Backup Level:   Incremental, since=2007-11-13 23:11:20
  Client: tormore-fd 2.2.4 (14Sep07) 
i686-pc-linux-gnu,debian,4.0
  FileSet:srv 2007-06-12 20:32:19
  Pool:   Default (From Job resource)
  Storage:File (From Job resource)
  Scheduled time: 14-Nov-2007 23:05:01
  Start time: 14-Nov-2007 23:06:43
  End time:   14-Nov-2007 23:11:36
  Elapsed time:   4 mins 53 secs
  Priority:   10
  FD Files Written:   407
  SD Files Written:   407
  FD Bytes Written:   522,170,750 (522.1 MB)
  SD Bytes Written:   522,232,428 (522.2 MB)
  Rate:   1782.2 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Volume name(s): Vol_0058
  Volume Session Id:  181
  Volume Session Time:1191482842
  Last Volume Bytes:  522,632,654 (522.6 MB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

14-Nov 23:11 tormore-dir: Begin pruning Jobs.
14-Nov 23:11 tormore-dir: No Jobs found to prune.
14-Nov 23:11 tormore-dir: Begin pruning Files.
14-Nov 23:11 tormore-dir: No Files found to prune.
14-Nov 23:11 tormore-dir: End auto prune.
--snip--

Now, the next job starts which should continue writing to Vol_0058. But 
look at this:

--snip--
14-Nov 23:11 tormore-dir: BeforeJob: run command 
/etc/bacula/scripts/make_catalog_backup bacula bacula bacula
14-Nov 23:11 tormore-dir: Start Backup JobId 930, 
Job=BackupCatalog.2007-11-14_23.10.00
14-Nov 23:11 tormore-dir: Using Device FileStorage
14-Nov 23:11 tormore-sd: Volume Vol_0058 previously written, moving to 
end of data.
14-Nov 23:11 tormore-sd: BackupCatalog.2007-11-14_23.10.00 Error: Bacula 
cannot write on disk Volume Vol_0058 because: The sizes do not match! 
Volume=136660 Catalog=522632654
14-Nov 23:11 tormore-sd: Marking Volume Vol_0058 in Error in Catalog.
--snip--
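As a side note, once a volume is marked in Error like this, it will not be selected again until its status is reset. A sketch of the recovery step in bconsole (assuming Bacula 2.2.x console syntax; verify against your version before relying on it):

```
*update volume=Vol_0058 volstatus=Recycle
```

This only makes the volume eligible again; it does not explain the size mismatch itself.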

So why doesn't Bacula delete the file and recreate it when recycling,
if it can't handle a volume that keeps its old size?
  or
Why does it write a volume size into the catalog that does not reflect
the true file size on disk?


I've already reported this as a bug, but I was told this behaviour is
correct, which I do not understand. Veritas, which I'm also using,
doesn't provoke such mismatches...

Can someone explain to me why Bacula is correct here?


Regards

Martin

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] backup issue with batch table

2007-11-15 Thread Michael Lewinger
Yup. You could switch to Postgres?
Michael

On Nov 16, 2007 1:49 AM, Jason Martin [EMAIL PROTECTED] wrote:
 MySQL has its own size limits on files. See:
 http://wiki.bacula.org/doku.php?id=faq#why_does_mysql_say_my_file_table_is_full

 -Jason Martin


 On Thu, Nov 15, 2007 at 05:44:44PM -0600, Nick Jones wrote:
  Hello,
 
  I was hoping someone could help me identify what is going wrong with
  my backup job?
 
  I recently updated from 2.0.3 to 2.2.5 so that building of directory
  trees for restores were faster (and I am quite pleased).  After I
  updated, everything seemed fine, I was able to run several incremental
  backups of the same identical job except on a different / identical
  tapeset that is now offsite.
 
  I am trying to create a new backup on the secondary set of tapes and I
  keep running into this error after a day and a half.  Table 'batch' is
  full.  I'm using a large my.cnf config
 
  Another error is:   Attribute create error. sql_find.c:333 Request for
  Volume item 1 greater than max 0 or less than 1 I may have read
  somewhere that this is caused by a disk space issue so I suspect I'm
  running out of space.
 
  The fileset is roughly 27,000,000 (million) files consuming 2.5 TB of
  space.  I have 16GB free on the root partition where mysql lives,
  however the bacula sql tables and working directory are symbolically
  linked to a RAID with 80GB of free space.  I had hoped this would be
  enough.  Is it not?
 
  Thanks for any hints on identifying the problem.
 
  Nick
 
 
 

Re: [Bacula-users] backup issue with batch table

2007-11-15 Thread Michael Lewinger
Sorry, my mistake:

"You are using a MyISAM table and the space required for the table
exceeds what is allowed by the internal pointer size. MyISAM creates
tables to allow up to 4GB by default (256TB as of MySQL 5.0.6), but
this limit can be changed up to the maximum allowable size of 65,536TB
(256^7 - 1 bytes).

If you need a MyISAM table that is larger than the default limit and
your operating system supports large files, the CREATE TABLE statement
supports AVG_ROW_LENGTH and MAX_ROWS options. See Section 12.1.5,
'CREATE TABLE Syntax'. The server uses these options to determine how
large a table to allow."
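Since Bacula's batch table is created fresh for each job, an ALTER TABLE on it won't persist; the server-wide default is the knob that applies. A sketch for my.cnf (the variable name is real; the exact value to pick depends on your table sizes — 6 is the bytes-per-pointer setting that allows tables up to roughly 256TB):

```ini
# [mysqld] section of my.cnf: enlarge the default MyISAM table cap so
# per-job temporary tables such as 'batch' can grow past 4GB.
# myisam_data_pointer_size takes a value in bytes, from 2 to 7.
[mysqld]
myisam_data_pointer_size = 6
```

For the persistent File table, the MAX_ROWS/AVG_ROW_LENGTH options quoted above can be applied with ALTER TABLE instead.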

Michael

On Nov 16, 2007 8:38 AM, Michael Lewinger [EMAIL PROTECTED] wrote:
 Yups. You could switch to postgres ?
 Michael


 On Nov 16, 2007 1:49 AM, Jason Martin [EMAIL PROTECTED] wrote:
  MySQL has its own size limits on files. See:
  http://wiki.bacula.org/doku.php?id=faq#why_does_mysql_say_my_file_table_is_full
 
  -Jason Martin
 
 