[Bacula-users] dir and sd hang
Hi. I use Bacula 3.0.0 on FreeBSD-6.3. The problem I have is that the DIR and SD tend to hang often, and it seems one causes the other, because they mostly do it together. Sometimes it happens between jobs, sometimes it happens before all the jobs. The only way to unlock the processes is to kill them with -9 and start again. After that, backups usually run for a few days and then the processes hang again. Anyone else experiencing this?

--
Silver

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables unlimited royalty-free distribution of the report engine for externally facing server and web deployment.
http://p.sf.net/sfu/businessobjects
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
Re: [Bacula-users] suppress these warnings?
So is this impossible? Just to be a little more clear, my FileSet includes drives A-Z in order to back up whatever drives are present on a system. I may just change it to drive C only, especially if there is no way to suppress these error messages, but ideally email report contents shouldn't influence FileSet definitions. :)

From: Jeff Shanholtz [mailto:jeffs...@shanholtz.com]
Sent: Tuesday, June 09, 2009 2:48 PM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] suppress these warnings?

Is it possible to suppress warnings that relate to drives that don't exist in the email reports? Examples (which occur for every drive letter from H-Z on my system):

09-Jun 12:38 jeff-fd JobId 1: Warning: Generate VSS snapshot of drive H:\ failed. VSS support is disabled on this drive.
09-Jun 14:20 jeff-fd JobId 1: Could not stat H:/: ERR=The system cannot find the path specified.
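[Editor's note: if falling back to backing up only drive C, a minimal FileSet along these lines would avoid the warnings entirely, since drives that are never listed can't generate stat/VSS errors. This is a sketch; the resource name is illustrative.]

```conf
FileSet {
  Name = "Windows-C-Only"
  Enable VSS = yes
  Include {
    Options {
      signature = MD5
    }
    # only list the drive that actually exists on every machine
    File = "C:/"
  }
}
```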
Re: [Bacula-users] Restoring Fileset from one Computer to Different Computer
You shouldn't have to rename anything. Just add a new Client resource for the new computer and install the file daemon on it. I don't know wx-console, but as I see it, you selected XPcomp-fd as the restore Client? If so, this is probably wrong, as that is the first client.

Chris

Industry Standard Computers sale...@iscnetwork.com wrote on 16.06.2009 03:46:

I created the backup on the server from computer xpcomp :-) Went to computer server2 and attempted to restore. NOTTA! :-( Renamed server2 and made sure all the passwords, user, and directory structures were the same as xpcomp, and tried a second time to restore xpcomp-fd. NOTTA :-( When I shut down the fake xpcomp and turned on the real xpcomp, both backups restored. =-O If I was into drinking I would be drunk right now. :-)

==
Open wx-console
Click Restore tab
Click Enter restore mode
Do the drop-down to XPcomp-fd
Other boxes self-adjust
Click OK
A 2-pane window opens up for picking what to restore
I browse down the tree and just pick Favorites so nothing gets erased from this other computer.
Click the restore button. (Here is where the old wx-console has lots of issues. If I change anything not already filled in the boxes, I am done for.)

Console tab readout:
==
4 files selected to be restored.
Run Restore job
JobName:    Restore
Bootstrap:  /var/bacula/Server-dir.restore.2.bsr
Where:      *None*
Replace:    always
FileSet:    Catalog
Client:     xpcomp-fd
Storage:    File
When:       2009-06-15 21:08:12
Catalog:    MyCatalog
Priority:   10
OK to run? (yes/mod/no): yes
Job queued. JobId=42
#Failed to retrieve jobid.
15-Jun 21:08 Server-dir: Start Restore Job Restore.2009-06-15_21.08.16
=
"Failed to retrieve jobid" always comes up, even on good backups. Nothing happens... the 4 files never show up.
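[Editor's note: for what Chris describes, the new machine just needs its own Client resource in bacula-dir.conf. This is a sketch; the name, address, and password are placeholders.]

```conf
Client {
  Name = server2-fd
  Address = server2.example.com   # placeholder address
  FDPort = 9102
  Catalog = MyCatalog
  Password = "changeme"           # must match the Director password in server2's bacula-fd.conf
}
```

When restoring, the client the files came from and the client being restored to can then differ: browse the backup of xpcomp-fd, but point the restore at the new client (at the run prompt via "mod"; recent versions also accept a restoreclient= keyword on the restore command, if memory serves).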
Re: [Bacula-users] How do I restore directories with spaces in the name?
Kevin Thorpe ke...@pibenchmark.com wrote on 15.06.2009 16:50:08:

> Hi all, I've just hit a bit of a snag with restore. I can't seem to cd into a directory with a space in the name. I've tried quoting the name and \escaping the spaces and neither works. How do I do this? I can't reeducate (baseball bat) all our Windows users not to use spaces. thanks

Works for me both ways: escaping My\ Directory and double-quoting "My Directory". Did you compile with readline support? You might then be able to tab-complete the directory (I have never done that myself).

Chris.
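[Editor's note: for reference, both forms Chris mentions look roughly like this at the restore-tree prompt. The directory name is illustrative.]

```text
cwd is: /
$ cd "Documents and Settings"
cwd is: /Documents and Settings/
$ cd ..
$ cd Documents\ and\ Settings
cwd is: /Documents and Settings/
```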
[Bacula-users] premature EOT under Windows
I've been using bacula-sd and bacula-dir under Windows for a bit and so far it's worked well except for two occasions. Just now, and maybe a week ago, Bacula decided that it had reached the end of tape (I think there was an error there). This happened again shortly after, but the problem appeared to go away after restarting bacula-sd. When it happened a week ago, I put it down to a problem with Windows updates which were installing or had just finished installing when the backup job was running. So to restate what I just said:

- bacula-sd says end of tape due to write error (about 83G into the job)
- I cancel the job, purge the volume, and restart using the same volume
- bacula-sd does the same again, but not at the same point
- I cancel the job, purge the volume, restart bacula-sd, and restart the job using the same volume again
- the backup works fine, and subsequent backups worked fine for a week or so.

I'm using Bacula 3.0.1 (x64 version of bacula-fd, i386 version of everything else). The drive is an HP LTO3 drive (400/800G capacity). The normal job size is about 220G. Windows reports no errors in the logs (e.g. no SCSI errors). I have run the HP LTT tools and they declare that everything is fine, although I haven't run them since the most recent failure.

Is anyone else using bacula-sd under Windows who can tell me what their experience is? Given the unsupported disclaimer for -sd and -dir under Windows, I'm not ruling out a bug in -sd...

Thanks
James
Re: [Bacula-users] dir and sd hang
Silver Salonen wrote:
> Hi. I use Bacula 3.0.0 on FreeBSD-6.3. The problem I have is that DIR and SD tend to hang often ... Anyone else experiencing this?

I fixed thread-related Bacula 2.2.8 stability issues by moving from 6.3-RELEASE to 6.3-STABLE. It has a threadlib patch which 6.3-RELEASE does not. The issue was that concurrent backups would after a while result in a locked dir and sd. So your problem sounds related, and you might give 6.3-STABLE a try.

I would appreciate any feedback, since we are planning to update to 3.0.x. Stability issues would of course change our modus operandi.

HTH
Attila
[Bacula-users] Performance with MySQL queries since 3.0.0 (Dir inserting attributes hang)
Hi, I have a somewhat pressing problem with the performance of my Bacula installation. My MySQL database currently holds 247,342,127 records (36GB) in the File table, and 78,576,199 records (10GB) in the Filename table. Since 3.0.0, but even more since 3.0.1, I have a problem with queries being really slow. Basically, when doing a full backup of a server (a mailserver, LOTS of small files), I can have MySQL hanging for up to 24+ hours on queries like this:

INSERT INTO Filename (Name)
  SELECT a.Name
  FROM (SELECT DISTINCT Name FROM batch) AS a
  WHERE NOT EXISTS (SELECT Name FROM Filename AS f WHERE f.Name = a.Name)

with the status "Sending data", and a lot of other similar queries in the queue with the status "Locked". One of these queries takes approx. 10,000 seconds to execute, but it is just followed by another similar (identical) query with the same duration. This is a problem mostly because it prevents me doing restores while backups are "Dir inserting attributes".

Obviously I would think it's a MySQL performance issue, but I was wondering if anything had been done to the queries? They seem to be a LOT slower and a LOT heavier. I've just put more RAM into the server, but it's done little to improve the duration of the queries. My server now has 4GB RAM (I will update it to 6GB) - but again, it's a recent issue, because Bacula had been running perfectly for many months on 2GB RAM, until I updated to 3.0.1. I've done REPAIR TABLE and OPTIMIZE TABLE and seen no improvement.

Finally, if anyone has any specific ideas to improve performance on my huge SQL database, please share :)

Thanks
--
Tom
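[Editor's note: one thing worth checking before blaming the query itself is whether the probe column is still indexed. The NOT EXISTS above does one lookup on Filename.Name per distinct name in the batch table; on a 78-million-row Filename table it degenerates into repeated scans if that index is missing. This is a sketch; the stock Bacula MySQL schema indexes a 255-byte prefix of Name, if memory serves, and the index name below is illustrative.]

```sql
SHOW INDEX FROM Filename;

-- if no index on Name shows up, recreate it (index name illustrative):
CREATE INDEX Name_idx ON Filename (Name(255));
```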
Re: [Bacula-users] premature EOT under Windows
[my earlier report snipped]

Actually, even after restarting bacula-sd, it happened again. I will run some more extensive tests on the hardware.

James
[Bacula-users] MySQL errors
We keep getting the following errors when running backups and any time we query:

16-Jun 13:56 ceres-dir JobId 8703: Fatal error: sql_get.c:359 sql_get.c:359 query SELECT VolumeName,MAX(VolIndex) FROM JobMedia,Media WHERE JobMedia.JobId=8703 AND JobMedia.MediaId=Media.MediaId GROUP BY VolumeName ORDER BY 2 ASC failed: Can't create/write to file '/tmp/#sql_555_0.MYI' (Errcode: 30)

I cannot find any problems with MySQL, and /tmp is not full, nor are there any permission issues. I am relatively new to Bacula and know a bit about MySQL (just short of Jr DBA level).

Thanks,
John

The information in this message is intended solely for the addressee and should be considered confidential. Publishing Technology does not accept legal responsibility for the contents of this message and any statements contained herein which do not relate to the official business of Publishing Technology are neither given nor endorsed by Publishing Technology and are those of the individual and not of Publishing Technology. This message has been scanned for viruses using the most current and reliable tools available and Publishing Technology excludes all liability related to any viruses that might exist in any attachment or which may have been acquired in transit.
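[Editor's note: the "Errcode: 30" in that message is the operating-system errno from the failed write, not a MySQL-specific code. On Linux, errno 30 is EROFS ("Read-only file system"), which would fit a /tmp that has free space and sane permissions yet still rejects the temp-table write - e.g. a filesystem remounted read-only after a disk error. A quick way to decode it:]

```python
import errno
import os

# MySQL appends the OS errno as "Errcode: NN"; decode 30 on this host.
# On Linux this prints EROFS / "Read-only file system".
print(errno.errorcode[30])
print(os.strerror(30))
```

(The MySQL utility `perror 30` reports the same thing. If the filesystem really is read-only, dmesg should show the underlying disk error.)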
[Bacula-users] [Fwd: MySQL errors] (Additional Info)
[error report and disclaimer from the previous message snipped]

Forgot to add that we are using Bacula 2.0.3 on Debian 3.

Thanks,
John
Re: [Bacula-users] premature EOT under Windows
James,

Interesting. I have been experiencing exactly the same problem the last two nights. However, my bacula-sd and -dir are running on an x64 CentOS 5.2 box. The bacula-fd involved in the error runs on x64 Windows 2k8. I also run bacula-fds on other Linux and FreeBSD hosts, which don't show the same issues. However, the size of the Windows fileset is the largest, at approx. 110 GB. I am using Bacula 3.0.1 together with an HP LTO3 drive.

Thanks
Matthias

-----Original Message-----
From: James Harper james.har...@bendigoit.com.au
Sent: Tuesday, 16 June 2009 11:27 PM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] premature EOT under Windows

[original report snipped]
[Bacula-users] What does this error mean?
Hi everyone: Today I checked the logs for Bacula because none of my jobs ran OK. I see this message in many of the job reports:

16-Jun 02:53 salvasprod_f6_bioinformatica- JobId 37: Fatal error: job.c:1246 Unknown backup level: accurate_incremental
16-Jun 02:53 salvasprod_f6_bioinformatica- JobId 37: Fatal error: job.c:1246 Unknown backup level: accurate_incremental
16-Jun 02:53 salvasprod_f6_bioinformatica- JobId 37: Fatal error: job.c:1246 Unknown backup level: accurate_incremental
16-Jun 02:53 salvasprod_f6_bioinformatica- JobId 37: Fatal error: job.c:1246 Unknown backup level: accurate_incremental

What does this mean? This is the complete report:

  Build OS:             i686-pc-linux-gnu ubuntu 9.04
  JobId:                37
  Job:                  SP_F06_POLO_BIOINFORMATICA-FD.2009-06-16_02.00.00_07
  Backup Level:         Incremental, since=2009-06-10 21:49:27
  Client:               salvasprod_f06_polo_bioinformatica-fd 2.4.4 (28Dec08) i486-pc-linux-gnu,debian,5.0
  FileSet:              SP_F06_POLO_BIOINFORMATICA-FS 2009-06-10 15:54:07
  Pool:                 SP_F06_POLO_BIOINFORMATICA_Pool (From Job resource)
  Catalog:              MyCatalog (From Client resource)
  Storage:              FileSAN (From Job resource)
  Scheduled time:       16-Jun-2009 02:00:00
  Start time:           16-Jun-2009 02:05:02
  End time:             16-Jun-2009 02:05:03
  Elapsed time:         1 sec
  Priority:             5
  FD Files Written:     0
  SD Files Written:     0
  FD Bytes Written:     0 (0 B)
  SD Bytes Written:     0 (0 B)
  Rate:                 0.0 KB/s
  Software Compression: None
  VSS:                  no
  Encryption:           no
  Accurate:             yes
  Volume name(s):
  Volume Session Id:    3
  Volume Session Time:  1245073554
  Last Volume Bytes:    0 (0 B)
  Non-fatal FD errors:  0
  SD Errors:            0
  FD termination status: Error
  SD termination status: Waiting on FD
  Termination:          *** Backup Error ***
Re: [Bacula-users] What does this error mean?
2009/6/16 Reynier Pérez Mira rper...@uci.cu:
[full job report quoted in the previous message snipped]

Please post your bacula-dir.conf

--
John M. Drescher
Re: [Bacula-users] What does this error mean?
> Hi everyone: Today I checked the logs for Bacula because none of my jobs ran OK. I see this message in many of the job reports:
> job.c:1246 Unknown backup level: accurate_incremental
> ...
> Client: salvasprod_f06_polo_bioinformatica-fd 2.4.4 (28Dec08) i486-pc-linux-gnu,debian,5.0

I wonder if the Bacula 2.4 client (fd) supports accurate backup?
Re: [Bacula-users] What does this error mean?
On Tue, Jun 16, 2009 at 1:44 PM, Jari Fredriksson ja...@iki.fi wrote:
> I wonder if the Bacula 2.4 client (fd) supports accurate backup?

No. It does not.

John
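[Editor's note: given that, a likely workaround until the 2.4.4 file daemon is upgraded to 3.x is to disable accurate mode for that job in bacula-dir.conf. A sketch - the job name is taken from the report above, and only the Accurate line changes:]

```conf
Job {
  Name = "SP_F06_POLO_BIOINFORMATICA-FD"
  # ... existing directives unchanged ...
  Accurate = no   # 2.4.x file daemons do not understand accurate_incremental
}
```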
[Bacula-users] how to recycle volumes ahead of time?
hi!

i don't want to wait until my filesystem is filled up on my storage cluster, and want to start to purge and recycle volumes now. i want to recycle all volumes that have the status Purged. i am not afraid to enter the database (postgresql here) and run sql queries. what query should i run? will this make bacula reuse the space of old files/volumes?

/andreas
[Bacula-users] Copy jobs between two different SDs uses wrong source SD
I'm trying to set up my first Copy job, and running into a problem. I don't know whether it's a configuration issue, a documentation shortfall, a Bacula limitation, or a combination of the three.

I have two SDs on two different machines. Altogether, four pools exist: three disk pools on one machine, one tape pool on the other. The disk SD is on a NAS box with a multi-terabyte SAS/SATA array, and owns the three disk pools on the array. The tape SD is on a separate machine, and owns a single pool and an LTO1 drive. The disk array cannot be connected to the tape SD because the tape SD's machine has no SAS controllers. The tape drive cannot be connected to the disk SD's machine because the disk machine has no SCSI controllers and is in an insufficiently controlled environment for the tape drive. Backups have been running to the disk pools without incident for about two months, and I've verified that I can run backup jobs directly to the tape drive.

The relevant config sections are as follows:

Storage {
  Name = babylon4-sd
  Address = babylon4.babcom.com
  Maximum Concurrent Jobs = 20
  SDPort = 9103
  Password = XXX
  Device = FileStorage
  Media Type = File
}

Storage {
  Name = babylon5-sd
  Address = babylon5.babcom.com
  SDPort = 9103
  Password = XXX
  Device = Ultrium-LTO1
  Media Type = LTO1
  Maximum Concurrent Jobs = 10
}

Pool {
  Name = Full-Disk
  Storage = babylon4-sd
  Pool Type = Backup
  Next Pool = Full-Tape
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 6 months
  Maximum Volume Jobs = 0
  Volume Use Duration = 23h
  Label Format = FULL-$Year${Month:p/2/0/r}${Day:p/2/0/r}-${Hour:p/2/0/r}:${Minute:p/2/0/r}
  RecyclePool = Scratch
}

Pool {
  Name = Full-Tape
  Storage = babylon5-sd
  Pool Type = Backup
  Recycle = yes
  Autoprune = yes
  Volume Retention = 365d
  Recycle Oldest Volume = yes
  Recycle Current Volume = yes
  Label Format = ARCH-
  Maximum Volumes = 9
}

# Dummy client and fileset for the copy job
Client {
  Name = ALL
  Address = localhost
  Password = NONE
  Catalog = Catalog
}

Fileset {
  Name = DUMMY
  Include {
    Options { signature = MD5 }
  }
}

JobDefs {
  Name = TapeArchive
  Type = Copy
  Pool = Full-Tape
  Level = Full
  Client = ALL
  Fileset = DUMMY
  Selection Type = PoolUncopiedJobs
  Selection Pattern = Babylon5.*   # this seems to be being ignored
  SpoolData = no
  Allow Duplicate Jobs = no
  Schedule = MonthlyCopy
  Messages = Daemon
  Priority = 20
}

Job {
  Name = CopyToTape
  Enabled = Yes
  Pool = Full-Disk
  JobDefs = TapeArchive
  Storage = babylon4-sd
}

When I go to run the Copy master job, I get this output:

Select Job resource (1-9): 1
Run Copy job
JobName:       CopyToTape
Bootstrap:     *None*
Client:        ALL
FileSet:       DUMMY
Pool:          Full-Disk (From Job resource)
Read Storage:  babylon4-sd (From Pool resource)
Write Storage: babylon5-sd (From Storage from Pool's NextPool resource)
JobId:         *None*
When:          2009-06-16 16:34:24
Catalog:       Catalog
Priority:      20
OK to run? (yes/mod/no):

The read and write storage appear to be correct here. More or less the correct JobIds get queued, except that the selection pattern is being ignored:

Job queued. JobId=134
16-Jun 17:00 babylon4-dir JobId 134: The following 9 JobIds were chosen to be copied: 1,4,3,2,5,6,92,93,94

The selection pattern above should theoretically have matched only jobs 92, 93 and 94, which are small test jobs. Here's what happens when one of those queued jobs actually tries to execute, though:

Copying JobId 134, Job=CopyToTape.2009-06-16_17.00.57_41
16-Jun 17:01 babylon5-sd JobId 134: Failed command:
16-Jun 17:01 babylon5-sd JobId 134: Fatal error: Device FileStorage with MediaType File requested by DIR not found in SD Device resources.
16-Jun 17:01 babylon4-dir JobId 134: Fatal error: Storage daemon didn't accept Device FileStorage because: 3924 Device FileStorage not in SD Device resources.
16-Jun 17:01 babylon4-dir JobId 134: Error: Bacula babylon4-dir 3.0.1 (30Apr09): 16-Jun-2009 17:01:04
  Build OS:          i386-pc-solaris2.10 solaris 5.10
  Prev Backup JobId: 94
  Prev Backup Job:   Babylon5_Backup.2009-06-16_14.57.25_03
  New Backup JobId:  151
  Current JobId:     134
  Current Job:       CopyToTape.2009-06-16_17.00.57_41
  Backup Level:      Full
  Client:            ALL
  FileSet:           DUMMY 2009-06-16 16:16:31
  Read Pool:         Full-Disk (From Job resource)
  Read Storage:      babylon4-sd (From Pool resource)
  Write Pool:        Full-Tape (From Job Pool's NextPool resource)
  Write Storage:     babylon5-sd (From Storage from Pool's NextPool resource)
  Catalog:           Catalog (From Client resource)
  Start time:        16-Jun-2009 17:01:04
  End time:          16-Jun-2009 17:01:04
  Elapsed time:      0 secs
  Priority:          20
  SD Files Written:  0
  SD Bytes
Re: [Bacula-users] how to recycle volumes ahead of time?
Maybe this can help you:

http://www.bacula.org/en/dev-manual/Automatic_Volume_Recycling.html#SECTION00118

2009/6/16 Andreas Schuldei schuldei+bacula-us...@spotify.com:
[original question snipped]

--
_
Francisco Javier Funes Nieto [esen...@gmail.com]
CANONIGOS
Servicios Informáticos para PYMES.
Cl. Cruz 2, 1º Oficina 7
Tlf: 958.536759 / 661134556
Fax: 958.521354
GRANADA - 18002
Re: [Bacula-users] how to recycle volumes ahead of time?
On Wed, Jun 17, 2009 at 12:48 AM, francisco javier funes nieto esen...@gmail.com wrote:
> Maybe this can help you: http://www.bacula.org/en/dev-manual/Automatic_Volume_Recycling.html#SECTION00118

Should that even work if we don't use tape but back up to hard disk? We don't see it happening: Bacula keeps creating new files, and the old ones seem to be kept around forever.

Perhaps the retention times are to blame? How should the file retention, job retention, and volume retention times relate to each other? All of those retention times have long since expired for the earlier files.
Re: [Bacula-users] how to recycle volumes ahead of time?
2009/6/16 Andreas Schuldei schuldei+bacula-us...@spotify.com:
> On Wed, Jun 17, 2009 at 12:48 AM, francisco javier funes nieto esen...@gmail.com wrote:
>> Maybe this can help you: http://www.bacula.org/en/dev-manual/Automatic_Volume_Recycling.html#SECTION00118
>
> Should that even work if we don't use tape but back up to hard disk? We
> don't see it happening: Bacula keeps creating new files, and the old
> ones seem to be kept around forever.

Yes. All media is treated the same as far as recycling is concerned.

> Perhaps the retention times are to blame? How should the file retention,
> job retention, and volume retention times relate to each other? All of
> those retention times have long since expired for the earlier files.

If a retention period was changed in the configuration files after a volume was labeled, Bacula does not automatically apply the change to existing volumes. Use the "update pool" command in bconsole to force the .conf retention periods onto the existing volumes. "list media" should also be of use.

John
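John's point about retention can be illustrated with a small sketch. This is an assumption about the mechanism for illustration only, not Bacula's actual code: a volume becomes a prune/recycle candidate only once its last-written time plus the pool's Volume Retention lies in the past, and retention values changed in the .conf afterwards are not applied to already-labeled volumes until "update pool" is run.

```shell
# Hypothetical retention check; the dates and the 365-day retention
# are illustrative values, not taken from anyone's real setup.
last_written=$(date -d '2009-01-01' +%s)   # when the volume was last written
retention=$((365 * 24 * 3600))             # Volume Retention = 365 days
now=$(date -d '2009-06-17' +%s)

if [ $((last_written + retention)) -le "$now" ]; then
    echo "volume retention expired: prune candidate"
else
    echo "volume still within retention"
fi
```

With these sample dates the volume is still inside its 365-day retention window, which is one way a volume can sit unrecycled even though the administrator expects it to be reused.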
Re: [Bacula-users] how to recycle volumes ahead of time?
On Wed, Jun 17, 2009 at 1:28 AM, John Drescher dresche...@gmail.com wrote:
> 2009/6/16 Andreas Schuldei schuldei+bacula-us...@spotify.com:
>> Should that even work if we don't use tape but back up to hard disk? We
>> don't see it happening: Bacula keeps creating new files, and the old
>> ones seem to be kept around forever.
>
> Yes. All media is treated the same as far as recycling is concerned.

Then the system does not behave as expected in my case, right? The files SHOULD have been reused, even though there is space left on the device. How can I debug this?
Re: [Bacula-users] how to recycle volumes ahead of time?
Greetings,

My understanding, and I've been wrong before, is that deleting a volume (which is a file) from the catalog does not delete the file from the storage. My recollection is that this may have been one of the projects: http://bacula.svn.sourceforge.net/viewvc/bacula/trunk/bacula/projects?view=markup
On a skim I could not find it. Oh well.

So I've scripted my way out of the problem. One script searches the filestorage directory for volumes and selects from the catalog; any files that are not cataloged volumes get deleted. Another searches for the oldest job on those volumes and deletes it (it has already been copied to tape). Yet another loops through doing this until 200GB of space is available. I use Postgres as well. If you want to see the scripts, just let me know; they use php from a bash prompt.

Dirk

On Wed, 2009-06-17 at 01:49 +0200, Andreas Schuldei wrote:
> Then the system does not behave as expected in my case, right? The files
> SHOULD have been reused, even though there is space left on the device.
> How can I debug this?
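The first of Dirk's scripts (find volume files on disk that the catalog no longer knows about) can be sketched in shell. Everything below is an assumption for illustration, not Dirk's actual code: in a real setup the two lists would come from an ls of the filestorage directory and a psql query against the catalog, not from printf.

```shell
# Build the two sorted lists. In a real setup these would come from, e.g.:
#   ls /path/to/filestorage | sort > /tmp/disk_volumes.txt
#   psql -Atc "SELECT volumename FROM media ORDER BY volumename" bacula > /tmp/catalog_volumes.txt
# Sample data stands in for both here.
printf 'FULL-0001\nFULL-0002\nFULL-0003\n' > /tmp/disk_volumes.txt    # files on disk
printf 'FULL-0001\nFULL-0003\n' > /tmp/catalog_volumes.txt            # volumes in the catalog

# Lines unique to the first list are on disk but not in the catalog,
# i.e. deletion candidates. comm(1) requires both inputs to be sorted.
comm -23 /tmp/disk_volumes.txt /tmp/catalog_volumes.txt
```

With the sample data this prints FULL-0002, the one volume file that has no catalog entry.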
[Bacula-users] Strange issue with backing up of a w2k3 server.
I've got a Bacula instance running (3.0.9 on the server, 3.0.1/3.0.0 on the clients) that is backing up a dozen or so FreeBSD machines without an issue. I'm also backing up a small amount of data from a local w2k8 machine with no problems. I recently added a w2k3 machine as a client, and intend on backing up about 50GB from that machine across a T1. (I understand how long that will take.) 2 hours and 20 minutes into the job (within a couple seconds of that mark) the job fails with the following output:

16-Jun 15:00 mothra JobId 4179: Fatal error: Network error with FD during Backup: ERR=Operation timed out
16-Jun 15:00 mothra-storage JobId 4179: Fatal error: fd_cmds.c:181 FD command not found: xœ
16-Jun 15:00 mothra-storage JobId 4179: Fatal error: fd_cmds.c:172 Command error with FD, hanging up.
16-Jun 15:00 mothra JobId 4179: Fatal error: No Job status returned from FD.
16-Jun 15:00 mothra JobId 4179: Error: Bacula mothra 3.0.0 (06Apr09): 16-Jun-2009 15:00:53
  Build OS:               i386-portbld-freebsd7.1 freebsd 7.1-RELEASE-p5
  JobId:                  4179
  Job:                    msmx.X.2009-06-16_12.40.51_52
  Backup Level:           Full (upgraded from Incremental)
  Client:                 msmx.XX 3.0.1 (28Apr09) Linux,Cross-compile,Win32
  FileSet:                MSMXFS 2009-06-16 12:40:51
  Pool:                   MSMX (From Job resource)
  Catalog:                MyCatalog (From Client resource)
  Storage:                FileMSMX (From Job resource)
  Scheduled time:         16-Jun-2009 12:40:50
  Start time:             16-Jun-2009 12:40:54
  End time:               16-Jun-2009 15:00:53
  Elapsed time:           2 hours 19 mins 59 secs
  Priority:               10
  FD Files Written:       0
  SD Files Written:       7,831
  FD Bytes Written:       0 (0 B)
  SD Bytes Written:       1,491,054,294 (1.491 GB)
  Rate:                   0.0 KB/s
  Software Compression:   None
  VSS:                    no
  Encryption:             no
  Accurate:               no
  Volume name(s):         SR-MSMX-0274
  Volume Session Id:      48
  Volume Session Time:    1244993298
  Last Volume Bytes:      1,492,457,944 (1.492 GB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  Error
  SD termination status:  Error
  Termination:            *** Backup Error ***

I've run it a number of times and it always fails within seconds of 2hrs 20min. I've run it against different data sets and the 2:20 failure occurs regardless. Smaller data sets run fine.
Re: [Bacula-users] Copy jobs between two different SDs uses wrong source SD
On Tue, 2009-06-16 at 17:37 -0400, Phil Stracchino wrote:
> I'm trying to set up my first Copy job, and running into a problem. I
> don't know whether it's a configuration issue, a documentation
> shortfall, a Bacula limitation, or a combination of the three. I have
> two SDs on two different machines.

Unfortunately: http://www.bacula.org/manuals/en/concepts/concepts/Migration_Copy.html

"Migration (also copy, because a copy is just a special case of migration) is only implemented for a single Storage daemon. You cannot read on one Storage daemon and write on another."

As a new feature, there are limitations. I'm using copy jobs. Love 'em.

Dirk

> Altogether, four pools exist: three disk pools on one machine, one tape
> pool on the other. The disk SD is on a NAS box with a multi-terabyte
> SAS/SATA array, and owns the three disk pools on the array. The tape SD
> is on a separate machine, and owns a single pool and an LTO1 drive. The
> disk array cannot be connected to the tape SD because the tape SD's
> machine has no SAS controllers. The tape drive cannot be connected to
> the disk SD's machine because the disk machine has no SCSI controllers
> and is in an insufficiently controlled environment for the tape drive.
> Backups have been running to the disk pools without incident for about
> two months, and I've verified that I can run backup jobs directly to
> the tape drive.
The relevant config sections are as follows:

Storage {
  Name = babylon4-sd
  Address = babylon4.babcom.com
  Maximum Concurrent Jobs = 20
  SDPort = 9103
  Password = XXX
  Device = FileStorage
  Media Type = File
}

Storage {
  Name = babylon5-sd
  Address = babylon5.babcom.com
  SDPort = 9103
  Password = XXX
  Device = Ultrium-LTO1
  Media Type = LTO1
  Maximum Concurrent Jobs = 10
}

Pool {
  Name = Full-Disk
  Storage = babylon4-sd
  Pool Type = Backup
  Next Pool = Full-Tape
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 6 months
  Maximum Volume Jobs = 0
  Volume Use Duration = 23h
  Label Format = FULL-$Year${Month:p/2/0/r}${Day:p/2/0/r}-${Hour:p/2/0/r}:${Minute:p/2/0/r}
  RecyclePool = Scratch
}

Pool {
  Name = Full-Tape
  Storage = babylon5-sd
  Pool Type = Backup
  Recycle = yes
  Autoprune = yes
  Volume Retention = 365d
  Recycle Oldest Volume = yes
  Recycle Current Volume = yes
  Label Format = ARCH-
  Maximum Volumes = 9
}

# Dummy client and fileset for the copy job
Client {
  Name = ALL
  Address = localhost
  Password = NONE
  Catalog = Catalog
}

Fileset {
  Name = DUMMY
  Include {
    Options {
      signature = MD5
    }
  }
}

JobDefs {
  Name = TapeArchive
  Type = Copy
  Pool = Full-Tape
  Level = Full
  Client = ALL
  Fileset = DUMMY
  Selection Type = PoolUncopiedJobs
  Selection Pattern = Babylon5.*   # this seems to be being ignored
  SpoolData = no
  Allow Duplicate Jobs = no
  Schedule = MonthlyCopy
  Messages = Daemon
  Priority = 20
}

Job {
  Name = CopyToTape
  Enabled = Yes
  Pool = Full-Disk
  JobDefs = TapeArchive
  Storage = babylon4-sd
}

When I go to run the Copy master job, I get this output:

Select Job resource (1-9): 1
Run Copy job
JobName:       CopyToTape
Bootstrap:     *None*
Client:        ALL
FileSet:       DUMMY
Pool:          Full-Disk (From Job resource)
Read Storage:  babylon4-sd (From Pool resource)
Write Storage: babylon5-sd (From Storage from Pool's NextPool resource)
JobId:         *None*
When:          2009-06-16 16:34:24
Catalog:       Catalog
Priority:      20
OK to run? (yes/mod/no):

The read and write storage appear to be correct here.
More or less the correct JobIds get queued, except that the selection pattern is being ignored:

Job queued. JobId=134
16-Jun 17:00 babylon4-dir JobId 134: The following 9 JobIds were chosen to be copied: 1,4,3,2,5,6,92,93,94

The selection pattern above should theoretically have matched only jobs 92, 93, and 94, which are small test jobs. Here's what happens when one of the queued jobs actually tries to execute, though:

Copying JobId 134, Job=CopyToTape.2009-06-16_17.00.57_41
16-Jun 17:01 babylon5-sd JobId 134: Failed command:
16-Jun 17:01 babylon5-sd JobId 134: Fatal error: Device FileStorage with MediaType File requested by DIR not found in SD Device resources.
16-Jun 17:01 babylon4-dir JobId 134: Fatal error: Storage daemon didn't accept Device FileStorage because: 3924 Device FileStorage not in SD Device resources.
16-Jun 17:01 babylon4-dir JobId 134: Error: Bacula babylon4-dir 3.0.1 (30Apr09): 16-Jun-2009 17:01:04
  Build OS:          i386-pc-solaris2.10 solaris 5.10
  Prev Backup JobId: 94
  Prev Backup Job:   Babylon5_Backup.2009-06-16_14.57.25_03
  New Backup JobId:  151
  Current JobId:     134
  Current Job:       CopyToTape.2009-06-16_17.00.57_41
  Backup Level:      Full
  Client:
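The expectation that Selection Pattern acts as a regex filter on job names can be illustrated with grep. The two job names below are taken from the log output in this thread; treating the pattern as a POSIX regular expression is an assumption about what the configuration was expected to do, not a statement about how the director actually combines it with PoolUncopiedJobs.

```shell
# Of these two job names, only the Babylon5 one matches 'Babylon5.*',
# so a name-based filter would have selected only the Babylon5 jobs.
printf 'CopyToTape.2009-06-16_17.00.57_41\nBabylon5_Backup.2009-06-16_14.57.25_03\n' \
    | grep -c 'Babylon5.*'
```

grep -c counts 1 matching line out of the 2, which is the filtering behavior the poster expected for jobs 92-94.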
Re: [Bacula-users] Strange issue with backing up of a w2k3 server.
That 3.0.9 should be 3.0.0.

Matthew Komar wrote:
> I've got a Bacula instance running (3.0.9 on the server, 3.0.1/3.0.0 on
> the clients) that is backing up a dozen or so FreeBSD machines without
> an issue. [...] I've run it a number of times and it always fails within
> seconds of 2hrs 20min. I've run it against different data sets and the
> 2:20 failure occurs regardless. Smaller data sets run fine.
Re: [Bacula-users] Restoring Fileset from one Computer to Different Computer
c.kesch...@internet-mit-iq.de wrote:
> You shouldn't have to rename anything. Just add a new Client resource
> for the new computer and install the file daemon on it. I don't know
> wx-console, but as I see it, you selected XPcomp-fd as the restore
> client? If so, this is probably wrong, as this is the first client.
> Chris

Didn't work. I installed the Bacula client on my W2k3 server and tried; the job was queued for when my XPCOMP computer turns back on. Something somewhere tells Bacula the difference between the computers (MAC address???). I have been messing with this for days.

How can I drop the databases and try fresh? I know where to delete the other files. I am starting to think I might have to wait until I get a newer server with a newer Bacula. :-(

Thanks... back to the "drawing board".

B