[Bacula-users] Set start time elsewhere than Schedule?
Hi everybody! Is it possible to set a client's start time somewhere other than in the Schedule configuration? That way we could use the same Schedule but with a different start time per client. I didn't find anything like this in the docs :( Example:

    Schedule {
      Name = FSS
      Run = Level=Full sun
    }

    JobDefs {
      Name = client001.jd
      Type = Backup
      Level = Incremental
      Client = client001
      FileSet = all-unix
      Schedule = FSS
      Storage = Filedisk
      Messages = Standard
      Pool = StandardPool
      Priority = 10
      Start Time = 23:00
    }

    JobDefs {
      Name = client002.jd
      Type = Backup
      Level = Incremental
      Client = client002
      FileSet = all-unix
      Schedule = FSS
      Storage = Filedisk
      Messages = Standard
      Pool = StandardPool
      Priority = 10
      Start Time = 15:00
    }

Thx, Fabio

--
OpenSolaris 2009.06 is a cutting edge operating system for enterprises looking to deploy the next generation of Solaris that includes the latest innovations from Sun and the OpenSource community. Download a copy and enjoy capabilities such as Networking, Storage and Virtualization. Go to: http://p.sf.net/sfu/opensolaris-get
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
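[As far as I can tell there is no "Start Time" directive in JobDefs. The usual approach is one Schedule per desired start time, using the "at" keyword of the Run directive, and pointing each JobDefs at its own Schedule. A minimal sketch; the schedule names here are made up:

    # Hypothetical sketch: one Schedule per start time
    Schedule {
      Name = FSS-23
      Run = Level=Full sun at 23:00   # Sunday fulls at 23:00
    }
    Schedule {
      Name = FSS-15
      Run = Level=Full sun at 15:00   # Sunday fulls at 15:00
    }

    JobDefs {
      Name = client001.jd
      Schedule = FSS-23               # client001 starts at 23:00
      # ...rest of the JobDefs unchanged...
    }

This duplicates the Schedule resource but keeps the JobDefs otherwise identical.]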
Re: [Bacula-users] The number of files mismatch! Marking volume in Error in Catalog
Bob Hetzel wrote:
> Greetings,
> I've been seeing an issue whereby a volume gets marked in error periodically. The last items logged about that volume are typically like this:
>
> 02-Jun 11:53 gyrus-sd JobId 83311: Volume LTO224L2 previously written, moving to end of data.
> 02-Jun 11:53 gyrus-sd JobId 83311: Error: Bacula cannot write on tape Volume LTO224L2 because: The number of files mismatch! Volume=46 Catalog=45
> 02-Jun 11:53 gyrus-sd JobId 83311: Marking Volume LTO224L2 in Error in Catalog.
>
> I don't think I have any SCSI errors; instead the problem seems to be that Bacula is not properly keeping track of the volume files in some rare case. This time the problem happened not long after the volume was recycled, and I noted one thing about how the tape was used: a backup started on another volume and then spanned onto it. Could that be a source of these problems?
> Here's the pertinent part of the Bacula log file. Debugging is not turned on right now, but I'm hoping enough got logged to help. If not, I'll have to turn debugging back on, but what level would be good for determining the source of that error?
> http://casemed.case.edu/admin_computing/bacula/bacula-2009-06-01.log.txt
> Bob

To me this looks like an issue reported a couple of times on this list, once by me and once by another user, whereby Bacula isn't updating the Volume Files count when running concurrent jobs. So far nobody has seemed interested in it. For me and the other user, setting the maximum concurrent jobs to 1 on the device worked around it. Yes, you will have jobs piling up for hours until they get worked off. I first witnessed this after upgrading from 2.4.4 to 3.0.0, but I have not been able to track it down myself, or I would have filed a proper bug report for it.

Hope that helps a little
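[For reference, the "concurrent jobs = 1" workaround is a configuration change; one place it can be set is the Director's Storage resource. A sketch only; the address, password and device names below are placeholders, and depending on your Bacula version the limit may instead belong in the storage daemon's Device resource:

    # bacula-dir.conf -- hypothetical Storage resource
    Storage {
      Name = Filedisk
      Address = backup.example.com    # placeholder
      SDPort = 9103
      Password = "secret"             # placeholder
      Device = FileStorage            # placeholder
      Media Type = File
      Maximum Concurrent Jobs = 1     # serialize jobs on this storage
    }

With this in place, jobs queue up instead of interleaving on the volume.]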
Re: [Bacula-users] estimate and actual job differ
Silver Salonen wrote:
> Hi. I'm trying to run an incremental job of a restored fileset (having mtimeonly=yes). When I check its estimate, it correctly shows only the new files that have been created/modified since the restoration. But when I run the actual job, all the files are included in the backup. The server is 3.0.0 on FreeBSD, the client is 3.0.1 on Windows XP.
> Might it be because all the folders' (but not files') mtime is the date of the restoration? And I wonder what the latter is caused by?

Since you modified the fileset to add mtimeonly=yes, did you also add Ignore FileSet Changes=yes? If not, your next backup will default to a Full, because the fileset doesn't match the one your last Full was made with.
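[For context, the two directives discussed above live in different places within the FileSet resource. A minimal sketch; the fileset name and path are assumptions:

    FileSet {
      Name = all-unix
      Ignore FileSet Changes = yes   # editing the FileSet no longer forces a Full
      Include {
        Options {
          mtimeonly = yes            # compare mtime only, ignore ctime changes
        }
        File = /data                 # placeholder path
      }
    }

Note that Ignore FileSet Changes sits at the FileSet level, while mtimeonly goes inside an Options block.]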
Re: [Bacula-users] estimate and actual job differ
On Thursday 04 June 2009 10:34:36 Christian Gaul wrote:
> Since you modified the fileset to add mtimeonly=yes, did you also add Ignore FileSet Changes=yes? If not, your next backup will default to a Full, because the fileset doesn't match the one your last Full was made with.

Yes, I also have Ignore FileSet Changes=yes, sorry I didn't mention it. And if I didn't have it, the estimate would show a full too, wouldn't it?

--
Silver
Re: [Bacula-users] estimate and actual job differ
Silver Salonen wrote:
> Yes, I also have Ignore FileSet Changes=yes, sorry I didn't mention it. And if I didn't have it, the estimate would show a full too, wouldn't it?

I don't know if estimate honors that or just takes what you give it. I personally only use estimate when making new filesets (to see if my excludes work correctly). And since estimate doesn't work with LVM snapshots anyway, because they are not mounted for an estimate job, most of my filesets would show 0 files. Sorry I can't help with that.

--
Christian Gaul
otop AG
D-55116 Mainz
Rheinstraße 105-107
Fon: 06131.5763.310
Fax: 06131.5763.500
E-Mail: christian.g...@otop.de
Internet: www.otop.de
Chairman of the Supervisory Board: Christof Glasmacher
Executive Board: Dirk Flug
Register court: Amtsgericht Mainz
Commercial register: HRB 7647
Re: [Bacula-users] The number of files mismatch! Marking volume in Error in Catalog
On Thu, Jun 04, 2009 at 09:20:34AM +0200, Christian Gaul wrote:
> Bob Hetzel wrote:
> > [...]
> > 02-Jun 11:53 gyrus-sd JobId 83311: Error: Bacula cannot write on tape Volume LTO224L2 because: The number of files mismatch! Volume=46 Catalog=45
> > [...]
>
> To me this looks like an issue reported a couple of times on this list, once by me and once by another user, whereby Bacula isn't updating the Volume Files count when running concurrent jobs. So far nobody has seemed interested in it. For me and the other user, setting the maximum concurrent jobs to 1 on the device worked around it.
>
> Hope that helps a little

Hi, we're running Bacula 2.2.8, using concurrent jobs = 2 on a disk-based set of volumes. I've done several restores from those volumes without any errors, and I haven't seen the error you mention in a good 3 months or so since switching from concurrent jobs = 1 to 2, so I'd consider this a positive report that the feature actually does work. The bug may have been introduced in a later version of Bacula.

All the best, Uwe

--
uwe.schuerk...@nionex.net phone: [+49] 5242.91 - 4740, fax: -69 72
Hauptsitz: Avenwedder Str. 55, D-33311 Guetersloh, Germany
Register court Guetersloh HRB 4196, Managing Director: Horst Gosewehr
NIONEX is a company of DirectGroup Germany www.directgroupgermany.de
[Bacula-users] don't store file information
Hi, is it possible to configure a backup job to _not_ store information about the backed-up files/paths etc. in the database? I recently tried to back up my BackupPC pool with Bacula. BackupPC makes excessive use of hardlinks, and I ended up with 50 GB of data in my Postgres database for this single backup job alone! I don't need the file information; the Bacula backup will only be used if I have to restore the whole BackupPC pool for disaster recovery. I couldn't find an option for this in the FileSet, Client or Job resources. A very short file retention won't help, because my disk space for the database is not even enough to store this information in the first place (the job ended with a database error during the batch inserts...).

Ralf
[Bacula-users] bacula management questions
hi guys, bacula kicks arse lol

How can I find out how long it's been since each client was backed up, from bconsole or whatever? I just want to quickly find the ones with the longest time since their last backup, to see what's going on with them.

Btw, every once in a while employees leave the company and new ones come in, so my machine names change with them. How can I move a client's name from old-fd to new-fd? I currently delete the old configuration, create a new one and voila: duplicated information. I'm building up a bunch of unused names in the client list, and old backup files are taking up space :s

thanks
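[One way to answer the "how long since each client was backed up" question is to query the catalog directly, e.g. via bconsole's sqlquery command. The sketch below expresses the idea in Python with an in-memory SQLite stand-in for the Client and Job catalog tables (the real catalog is MySQL/PostgreSQL, and the sample client names and dates are made up); the SELECT itself is the interesting part:

```python
# Sketch: most recent successful backup per client, oldest first.
# SQLite here only stands in for the Bacula catalog schema; run the
# SELECT against the real catalog (e.g. via bconsole "sqlquery").
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Client (ClientId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Job (JobId INTEGER PRIMARY KEY, ClientId INTEGER,
                  Type TEXT, JobStatus TEXT, EndTime TEXT);
INSERT INTO Client VALUES (1, 'client001-fd'), (2, 'client002-fd');
INSERT INTO Job VALUES
  (10, 1, 'B', 'T', '2009-06-01 23:10:00'),
  (11, 1, 'B', 'T', '2009-06-03 23:09:00'),
  (12, 2, 'B', 'T', '2009-05-20 15:05:00');
""")

QUERY = """
SELECT Client.Name, MAX(Job.EndTime) AS LastBackup
FROM Client
LEFT JOIN Job ON Job.ClientId = Client.ClientId
             AND Job.Type = 'B'          -- backup jobs only
             AND Job.JobStatus = 'T'     -- terminated successfully
GROUP BY Client.Name
ORDER BY LastBackup;                     -- stalest clients first
"""

rows = conn.execute(QUERY).fetchall()
for name, last in rows:
    print(name, last)
```

Clients that have never been backed up show up with a NULL LastBackup and sort first, which is exactly what you want when hunting for stale ones.]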
[Bacula-users] Error writing to tape
Greetings,

I'm wondering if anyone has any ideas on how to prevent this error. I've been using Bacula for a few years in one location and would get an error like the one below approximately once every three weeks. It has not been too big a problem, since I am able to just umount/mount, umount/mount and solve the issue from home. So not too big an issue for me. However, I'm getting the opportunity to use Bacula at another location and we have the exact same tape drive there as well. They are both Certance LTO2 drives, single drives with no changers. If anyone has any experience solving this kind of issue, I would certainly appreciate the help.

Log records for job 445:
2009-06-03 17:58:18 centos6-sd Volume MAIL_MOWE_2 previously written, moving to end of data.
2009-06-03 17:59:04 centos6-sd Marking Volume MAIL_MOWE_2 in Error in Catalog. Error: Unable to position to end of data on device LTO (/dev/nst0): ERR=dev.c:946 ioctl MTIOCGET error on LTO (/dev/nst0). ERR=Input/output error.
2009-06-03 17:59:47 centos6-sd Job centos2etc.2009-06-03_18.00.00_42 waiting. Cannot find any appendable volumes. Please use the label command to create a new Volume for:

Thank you kindly
Dirk Bartley