Re: [Bacula-users] question about multiple file pools
Hello, 17.11.2009 02:48, Jerome Alet wrote: Hi again, On Tue, Nov 17, 2009 at 02:09:03AM +0100, Arno Lehmann wrote: 17.11.2009 00:49, Jerome Alet wrote: On Mon, Nov 16, 2009 at 02:35:12PM +1100, Jerome Alet wrote: ... But during restore, what happens is that only volumes configured as being part of the pool defined for the RestoreFiles job are automatically mounted, and Bacula waits for us to manually mount the other volumes, which doesn't seem to be possible since they are file volumes (i.e. always 'mounted'). OK, I've fixed this particular problem by creating multiple Media Types. Great... I was about to suggest that :-) This was written in the documentation, but not about this particular subject - about multiple concurrent accesses instead. At least you found it - but where did you initially look? (Just so we can, perhaps, improve the manual.) I think I searched for this mostly in the resource definitions, especially pools, but the answer is (somewhat hidden) there: http://www.bacula.org/en/rel-manual/Basic_Volume_Management.html#ConcurrentDiskJobs Maybe this could be clarified, but I don't really know where or how, since I'm a Bacula newbie (our existing setup was put in place before I landed here). Hmm... well, then, perhaps simply mentioning this in the description of the Pool Resource would help. As you can see above, I want to have very small volumes, but LOTS of them. When labelling them automatically, Bacula only uses 4 digits - what will happen when it reaches the last label? Does it start over at full-0001 and overwrite an existing volume, or does it fail miserably because full-0001 should not be overwritten (if it's still full, as it probably will be)? But - why do you want so many volumes that you expect to have 10,000 in a few months? Sounds a bit hard to manage... I knew the question would come! :-) I'm currently doing some testing and I like to play with things to see if the solution is robust. Excellent approach!
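On the 4-digit question: if Bacula's automatic labelling uses printf-style zero-padding for its volume counter (an assumption; check the LabelFormat documentation for your version), the counter simply widens past 9999 rather than wrapping back to full-0001. A quick illustration of that formatting behaviour:

```python
# printf-style zero-padding only pads *up to* the field width; larger
# counters widen the field instead of wrapping. If Bacula's LabelFormat
# counter behaves this way (an assumption), full-9999 is followed by
# full-10000, not by a reused full-0001.
for n in (1, 9999, 10000):
    print("full-%04d" % n)
```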
When in production, my volumes will certainly be bigger than the 32 MB they are now. Ok, now I don't worry any more :-) Arno -- Arno Lehmann IT-Service Lehmann Sandstr. 6, 49080 Osnabrück www.its-lehmann.de -- Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day trial. Simplify your report design, integration and deployment - and focus on what you do best, core application coding. Discover what's new with Crystal Reports now. http://p.sf.net/sfu/bobj-july ___ Bacula-users mailing list Bacula-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bacula-users
Re: [Bacula-users] after building bat, location of binary varies
On Mon, 16 Nov 2009 22:32:56 -0500, Dan Langille said: Martin Simmons wrote: On Sun, 15 Nov 2009 23:45:43 -0500, Dan Langille said: Folks, I am finding that the location of the executable binary varies from one system to another. I am trying to find out why. The answer will help to improve the build and install process. Sometimes the binary is at: src/qt-console/bat If not there, it is at: src/qt-console/.libs/bat Within a given system, the location is always consistent. It is one of the above. Why the location varies, I do not know. The .libs directory is the default location when building with libtool (for Bacula shared libraries). This is interesting. Please, can you elaborate? The libtool utility is a wrapper around compiling/linking/install to deal with portability for shared library naming. In the build tree, it puts all shared libraries and executables into subdirectories which are called .libs by default. It also creates a shell script for each executable, which sets LD_LIBRARY_PATH appropriately to make it work in the build tree. Note that this is only in the build tree. During make install, it installs the real libraries and binaries. My guess is that some are not being linked with shared libraries for some reason, so Bacula is not using libtool and the real executable is built in src/qt-console/bat. It isn't clear to me why the location of the binaries matters, unless the Makefile is broken. __Martin
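For what it's worth, a packaging script can sidestep the ambiguity by probing both locations. A minimal sketch (the helper name and the idea of probing are mine, not part of Bacula's build system):

```python
import os

# With libtool, the real executable lives under .libs/ and the plain path
# holds a wrapper script; without libtool, the binary is built in place.
# Probe both candidate locations and return whichever exists.
def find_bat(build_root):
    for rel in ("src/qt-console/bat", "src/qt-console/.libs/bat"):
        path = os.path.join(build_root, rel)
        if os.path.isfile(path):
            return path
    return None
```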
[Bacula-users] Upgrade to 3.0.3 Job is waiting for execution
Hi All, Since the upgrade to 3.0.3 our test server has this strange behavior: the first job after Bacula starts up runs fine, while all the others get in line as 'job is waiting execution' and nothing happens; restarting Bacula resets the job list, and again only the first job runs fine. The log shows nothing useful to debug the problem; the job e-mail is empty, and it fires off only when I manually restart the Bacula service... After noticing the issue I tried rewriting the config files from scratch, and removing and recreating the backup volume (I use file storage), but to no avail. Any guess on what could cause this problem? I found this post http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/bacula-25/job-is-waiting-for-execuition-101508/ where this guy is having the same problem, but no one has answered him yet. Thank you in advance for your patience. Sincerely
Re: [Bacula-users] Upgrade to 3.0.3 Job is waiting for execution
On Tue, 17 Nov 2009, Giuseppe De Nicolo' wrote: Hi All, Since the upgrade to 3.0.3 our test server have this strange behavior , the first job after bacula start up runs fine all others get in line as job is waiting execution and nothing happens, restarting bacula reset job lists and again only the first job run fine. Have your concurrency settings been reset? Remember it needs to be checked in multiple places.
Re: [Bacula-users] Upgrade to 3.0.3 Job is waiting for execution
Giuseppe De Nicolo' wrote: Alan Brown wrote: On Tue, 17 Nov 2009, Giuseppe De Nicolo' wrote: Hi All, Since the upgrade to 3.0.3 our test server have this strange behavior , the first job after bacula start up runs fine all others get in line as job is waiting execution and nothing happens, restarting bacula reset job lists and again only the first job run fine. Have your concurrency settings been reset? Remember it needs to be checked in multiple places. Hi, and thank you for your answer. Actually you were right: I had Maximum Concurrent Jobs set to 1 in the Director configuration file. Though I had it that way in 3.0.1 too and it worked like a charm, which is why I overlooked it - has something changed since 3.0.1? And anyway, even with a concurrency limit of 1 (which I might want), why are jobs not firing up once the first one has completed? In any case I may consider my problem solved; thank you very much. Sincerely Well, evidently I was wrong: my problem is not solved, it just changed behavior. Subsequent jobs no longer sit in the queue saying 'job is waiting execution'; instead they claim to be running, which in fact is not true, since the backup device is dismounted and nothing happens. Practically, the system hangs until I restart the Bacula service; then again only the first job in line runs, the others hang the system, and so on... Any clue??
[Bacula-users] Maximum volume size
Hello, I'm new to Bacula and this is my first message. I have a question: when I backup to disk, what is the recommended maximum volume size? The official doc says that with versions up to and including 1.39.28 the recommended maximum volume size is 5 GB. I am using Bacula 2.4.4 on Debian Lenny; does this info apply to my installation? Other, non-official info provides this tip (http://www.lucasmanual.com/mywiki/Bacula#VolumeManagement): Limit your volume size to about 10%-15% of the HD capacity. I.e., for a 1 TB drive, volume size 100 GB. And then set your max volumes on the pool to ((HD space / volume space) - 1) so you don't need to worry about a full HD. Is there an official recommendation about volume size? What is your experience on this topic? Thanks, and sorry for my poor English.
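The arithmetic behind that wiki tip is easy to sanity-check. A small sketch of the rule of thumb (purely illustrative; the 10-15% figure and the "minus one" headroom are the wiki's suggestion, not an official Bacula recommendation):

```python
# Wiki rule of thumb: pick a volume size around 10-15% of the disk,
# then cap the pool at (disk / volume) - 1 volumes so one volume's
# worth of space is always left free.
def pool_limits(disk_gb, volume_gb):
    return disk_gb // volume_gb - 1

print(pool_limits(1000, 100))  # 1 TB disk, 100 GB volumes -> 9 volumes
```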
[Bacula-users] Limitations displaying large file sizes in bconsole when browsing for a restore?
Are there any odd limitations in bconsole (both the 'restore' and 'estimate listing' sections) when browsing with regard to large files, such as VMware disk files? I've just stumbled onto this oddity... # ls -l pituitary-flat.vmdk -rw-r- 1 root root 21474836480 Nov 17 15:27 pituitary-flat.vmdk So that file size is over 21 GB. When I go into bconsole restore and browse that directory, doing an estimate listing (chopping out only the file I'm talking about) I get... -rw-r- 1 root root 2147483648 2009-11-17 15:21:37 /fs1/pituitary/pituitary-flat.vmdk Doing a restore, then browsing to that file inside bconsole, I also get -rw-r- 1 root root 2147483648 2009-11-17 10:24:44 /fs1/pituitary/pituitary-flat.vmdk I don't think it's a sparse file, and Bacula seems to be backing it up and restoring it properly as a 21 GB file; it just seems to display as exactly 1/10 the size, or perhaps it's just dropping trailing zeros when a file is bigger than 10 GB?
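A side note on the numbers: the displayed size is not just the real size with a digit dropped. 2147483648 is exactly 2^31, which happens to also be one tenth of 21474836480. That makes a 32-bit integer limitation in the display path a plausible suspect (an assumption; the actual cause would need checking in the bconsole source):

```python
true_size = 21474836480   # what ls reports
shown_size = 2147483648   # what bconsole displays

# The displayed value is exactly 2**31, the boundary of a signed 32-bit
# integer, and only coincidentally one tenth of the true size.
assert shown_size == 2**31
assert shown_size == true_size // 10
print("shown value is 2**31")
```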
[Bacula-users] Strange problems with Bacula which mostly works
Hi, I'm using Bacula 2.4.4 to save to file volumes, using gzip compression, with both Windows and GNU/Linux clients. The database backend is PostgreSQL 8.3.8. Backups and restores both seem to work just fine. However, in the Media panel in bat, the Files column always contains 0, even when I use volumes that hold a lot of files. If I use 'list files jobid=##' all files are correctly listed, and in the Joblist panel I can see the number of files saved for each job. Now if I try to purge files for a particular client, the answer is always 'No Files found for client myclient.example.org to purge from MyCatalog catalog'. Finally, the Version Browser panel always comes up empty, and doesn't even contain any of the buttons, drop-down lists, or the four sub-windows. What could be wrong? I've named my clients, jobs and filesets after each client's FQDN; could this be the problem? Thanks for your time. -- Jérôme Alet - jerome.a...@univ-nc.nc - Centre de Ressources Informatiques Université de la Nouvelle-Calédonie - BPR4 - 98851 NOUMEA CEDEX Tél : +687 266754 Fax : +687 254829
Re: [Bacula-users] Maximum volume size
Federico Alberto Sayd wrote: Hello, I'm new to Bacula and this is my first message. I have a question: When I backup to disk, what is the recommended maximum volume size? Official doc says that with versions up to and including 1.39.28 the recommended maximun volumen size is 5 GB. I am using Bacula 2.4.4 on Debian Lenny, this info apply for my installation? Other non official info provides the next tip (http://www.lucasmanual.com/mywiki/Bacula#VolumeManagement): Limit your volume size to about 10%-15% of the HD capacity. IE, 1TB drive, volume size 100GB. And then set your max volume on the pool to ((HD space/volume space) - 1) so you don't need to worry about a full HD. There is an official recommendation about volume size? What is your experience on this topic? I only have experience with tape volumes, but you should keep in mind that you cannot recycle (free the space of) a volume before the last job sitting on the volume has expired. That is probably the main argument for keeping the size down. What that translates into in your setup is hard to guess. I suggest you go for something like one job per volume, then decide what the max file size should be. If your largest backup is 20 GB, then 5 GB would be fair... if your largest is 8 TB, then... -- Jesper
Re: [Bacula-users] virtuafull and verify
Is no one using verify jobs for virtual fulls? -Ursprüngliche Nachricht- Von: Fahrer, Julian [mailto:jul...@fahrer.net] Gesendet: Freitag, 13. November 2009 09:02 An: bacula-users@lists.sourceforge.net Betreff: [Bacula-users] virtuafull and verify Hey guys, I am currently trying to implement verify jobs at a customer's site. At that site I am running fulls and incrementals to disk and virtual fulls to tape. I want to verify that the data on tape is ok, so I tried a verify job after the virtual full has finished. But instead of using the virtual full (which is the last backup for that job), the last backup to disk is chosen. Here is an example: 13-Nov 08:23 backup01_dir JobId 3039: Verifying against JobId=3017 Job=server2_KHK.2009-11-12_21.00.00_06 13-Nov 08:23 backup01_dir JobId 3039: Bootstrap records written to /var/bacula/working/backup01_dir.restore.1.bsr 13-Nov 08:23 backup01_dir JobId 3039: Start Verify JobId=3039 Level=VolumeToCatalog Job=server2_KHK_verify.2009-11-13_08.23.43_03 13-Nov 08:23 backup01_dir JobId 3039: Using Device LTO2 13-Nov 08:23 backup01_sd JobId 3039: acquire.c:116 Changing read device. Want Media Type=File have=LTO2 device=LTO2 (/dev/nst0) 13-Nov 08:23 backup01_sd JobId 3039: Media Type change. New read device FileStorage_data2 (/data2/b2d_2) chosen. 13-Nov 08:23 backup01_sd JobId 3039: Ready to read from volume KHK_0030 on device FileStorage_data2 (/data2/b2d_2). Also there is another job: | 3,017 | server2_KHK | 2009-11-12 21:00:01 | B | F | 19,243 | 14,399,178,034 | T | | 3,033 | server2_KHK | 2009-11-12 21:00:01 | B | F | 19,243 | 14,402,307,154 | T | JobId 3033 is the virtual full. Can the same value in the date column cause this problem? Let me know if you need any parts of the config. Kind regards Julian -Ursprüngliche Nachricht- Von: r...@backup01.hettenbach.local [mailto:r...@backup01.hettenbach.local] Im Auftrag von backup-adm...@hettenbach.de Gesendet: Freitag, 13.
November 2009 08:29 An: backup-adm...@hettenbach.de Betreff: Bacula: Verify OK - Job: server2_KHK_verify.2009-11-13_08.23.43_03 (backup01_fd - Verify Volume to Catalog) 13-Nov 08:23 backup01_dir JobId 3039: Verifying against JobId=3017 Job=server2_KHK.2009-11-12_21.00.00_06 13-Nov 08:23 backup01_dir JobId 3039: Bootstrap records written to /var/bacula/working/backup01_dir.restore.1.bsr 13-Nov 08:23 backup01_dir JobId 3039: Start Verify JobId=3039 Level=VolumeToCatalog Job=server2_KHK_verify.2009-11-13_08.23.43_03 13-Nov 08:23 backup01_dir JobId 3039: Using Device LTO2 13-Nov 08:23 backup01_sd JobId 3039: acquire.c:116 Changing read device. Want Media Type=File have=LTO2 device=LTO2 (/dev/nst0) 13-Nov 08:23 backup01_sd JobId 3039: Media Type change. New read device FileStorage_data2 (/data2/b2d_2) chosen. 13-Nov 08:23 backup01_sd JobId 3039: Ready to read from volume KHK_0030 on device FileStorage_data2 (/data2/b2d_2). 13-Nov 08:23 backup01_sd JobId 3039: Forward spacing Volume KHK_0030 to file:block 0:197. 13-Nov 08:28 backup01_sd JobId 3039: End of file 3 on device FileStorage_data2 (/data2/b2d_2), Volume KHK_0030 13-Nov 08:28 backup01_sd JobId 3039: End of Volume at file 3 on device FileStorage_data2 (/data2/b2d_2), Volume KHK_0030 13-Nov 08:28 backup01_sd JobId 3039: End of all volumes. 13-Nov 08:28 backup01_dir JobId 3039: Bacula backup01_dir 3.0.1 (30Apr09): 13-Nov-2009 08:28:52 Build OS: i686-pc-linux-gnu ubuntu 8.04 JobId: 3039 Job:server2_KHK_verify.2009-11-13_08.23.43_03 FileSet:server2_KHK Verify Level: VolumeToCatalog Client: backup01_fd Verify JobId: 3017 Verify Job: server2_KHK Start time: 13-Nov-2009 08:23:45 End time: 13-Nov-2009 08:28:52 Files Expected: 19,243 Files Examined: 19,243 Non-fatal FD errors:0 FD termination status: OK SD termination status: OK Termination:Verify OK 13-Nov 08:28 backup01_dir JobId 3039: Begin pruning Jobs. 13-Nov 08:28 backup01_dir JobId 3039: No Jobs found to prune. 
13-Nov 08:28 backup01_dir JobId 3039: Begin pruning Files. 13-Nov 08:28 backup01_dir JobId 3039: No Files found to prune. 13-Nov 08:28 backup01_dir JobId 3039: End auto prune.
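If the date-collision theory is right, the selection problem can be shown in miniature: when candidate jobs are ordered by start time alone, two jobs with identical timestamps are ambiguous, and adding JobId as a tie-breaker would make the virtual full win deterministically. This is only a sketch of the suspected logic, not the Director's actual query:

```python
# JobIds 3017 (full to disk) and 3033 (virtual full) share the same
# start time, so "most recent by time" is ambiguous between them.
jobs = [(3017, "2009-11-12 21:00:01"), (3033, "2009-11-12 21:00:01")]

# Breaking the tie on JobId picks the most recently created job, 3033.
latest = max(jobs, key=lambda j: (j[1], j[0]))
print(latest[0])  # 3033
```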
Re: [Bacula-users] Error with DR backups
On Mon, 16 Nov 2009 09:29:07 -0500, DAve said: 16-Nov 08:25 director-dir: Allied-ex3.2009-11-16_01.00.02 Warning: Error updating job record. sql_update.c:194 Update problem: affected_rows=0 16-Nov 08:25 director-dir: Allied-ex3.2009-11-16_01.00.02 Warning: Error getting job record for stats: sql_get.c:293 No Job found for JobId 20947 I am at a loss to understand why. The volumes can be pruned almost immediately as the backup is only for DR purposes and each volume will be recycled each night. The only problem I see is that the client is paying for 60GB and the backups have begun using more than that amount, so volumes are being reused within the current backup. That seems a very likely reason, especially if you have set Purge Oldest Volume = yes. When Bacula purges a volume, it removes whole jobs, not just the info for that volume. __Martin
Re: [Bacula-users] virtuafull and verify
Fahrer, Julian jul...@fahrer.net writes: Is no one using verify jobs for virtual fulls? FWIW I do, and I have the same problem. I’m using version 3.0.2. What version are you using? Have you tested the newest version (3.0.3), yet? I don’t have much time at the moment but if no one knows a solution we should perhaps raise this issue on the devel list or file a bug. HTH, Tobias
Re: [Bacula-users] after building bat, location of binary varies
Martin Simmons wrote: On Mon, 16 Nov 2009 22:32:56 -0500, Dan Langille said: Martin Simmons wrote: On Sun, 15 Nov 2009 23:45:43 -0500, Dan Langille said: Folks, I am finding that the location of the executable binary varies from one system to another. I am trying to find out why. The answer will help to improve the build and install process. Sometimes the binary is at: src/qt-console/bat If not there, it is at: src/qt-console/.libs/bat Within a given system, the location is always consistent. It is one of the above. Why the location varies, I do not know. The .libs directory is the default location when building with libtool (for Bacula shared libraries). This is interesting. Please, can you elaborate? The libtool utility is a wrapper around compiling/linking/install to deal with portability for shared library naming. In the build tree, it puts all shared libraries and executables into subdirectories which are called .libs by default. It also creates a shell script for each executable, which sets LD_LIBRARY_PATH appropriately to make it work in the build tree. Note that this is only in the build tree. During make install, it installs the real libraries and binaries. My guess is that some are not being linked with shared libraries for some reason, so Bacula is not using libtool and the real executable is built in src/qt-console/bat. It isn't clear to me why the location of the binaries matters, unless the Makefile is broken. It matters because building the FreeBSD port/packages needs to know where the binary is. Without knowing, you can't install it or build it into a package.
[Bacula-users] PATCH: Allow Copy Jobs to work within one tape library.
Hi. A while ago I tried to set up a backup strategy where I defined three pools: an incremental pool, a full backup pool, and a copy pool. The idea was to run incremental backups forever (except for the first one, which would be promoted to a full). Then, at the end of each week, consolidate the incremental backups into a full backup using a VirtualFull job, and then take a copy of the full backup for off-site storage. When using a tape library, I could achieve incremental and virtual full backups okay. But I could not run the Copy job: it refused to run, complaining that the read storage is the same as the write storage. I looked at the code for migrate.c and compared it to vbackup.c, since both have similar concepts; I wanted to see why the virtual backup works while the copy won't. I found identical code in both, except that in vbackup.c the particular check that fails in migrate.c has been wrapped in an #ifdef to remove it. Also, a FIXME comment is there saying that instead it should just verify that the pools are different. Below is a patch to migrate.c to do the same thing as vbackup.c does. Is this a feasible patch? Would there be any chance of this working its way into the official Bacula source? Or will it cause problems?
--- bacula-3.0.3.orig/src/dird/migrate.c
+++ bacula-3.0.3/src/dird/migrate.c
@@ -350,11 +350,14 @@
    Dmsg2(dbglevel, "Read store=%s, write store=%s\n",
       ((STORE *)jcr->rstorage->first())->name(),
       ((STORE *)jcr->wstorage->first())->name());
+   /* ***FIXME*** we really should simply verify that the pools are different */
+#ifdef xxx
    if (((STORE *)jcr->rstorage->first())->name() == ((STORE *)jcr->wstorage->first())->name()) {
       Jmsg(jcr, M_FATAL, 0, _("Read storage \"%s\" same as write storage.\n"),
          ((STORE *)jcr->rstorage->first())->name());
       return false;
    }
+#endif
    if (!start_storage_daemon_job(jcr, jcr->rstorage, jcr->wstorage, /*send_bsr*/true)) {
       return false;
    }
At the moment I have a really badly hacked up configuration to try and achieve what I want by using each drive in the library independently. It is complicated and messy with lots of workarounds for various scenarios. If the above patch is okay then things become much simpler. Regards, -- -- Jim Barber DDI Health
[Bacula-users] Feature Request: Allow schedule to override Next Pool.
Hi. When defining backup strategies, I've wanted to be able to define the 'Next Pool' in the Schedule, to override the value defined against a pool. An example of one usage of such a feature follows: I have an incremental pool that each week gets consolidated into a full pool via a VirtualFull job. The 'Next Pool' directive of the incremental pool defines the location of the full pool. The following week, the next VirtualFull backup will run. It will read the previous full backups and the incremental backups since then, to create new full backups. It is important that the VirtualFull backup does not try to write to the same tape that the previous week's full backup wrote to and left in Append status; otherwise you could end up with the one tape trying to be both read and written, and deadlock. At the moment I have a hack to get around this: an admin job calls an external command that runs a SQL update to find any tapes in the full pool with an APPEND status and change them to USED. This runs after the full backups have been done. Instead, I'd like to create two full pools: one for even weeks and one for odd weeks of the year. That way, even-week virtual full backups could consolidate the odd-week virtual full backups with the latest incremental backups, and the odd-week virtual full backups could consolidate the even-week full backups with the latest incremental backups. The trouble is that the incremental pool can only define one Next Pool. I can't have it toggle the Next Pool directive from odd to even, week to week, unless I could override it from the schedule. Doing that would mean I could ditch my SQL hack to manipulate the tape status. It would also be less wasteful of tapes, since I won't have partially filled USED tapes throughout my library. Regards, -- -- Jim Barber DDI Health
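The odd/even-week scheme can be made concrete with a tiny helper (illustrative only; the pool names and the use of ISO week numbers are my assumptions, and this is exactly the kind of selection a Schedule cannot currently express):

```python
import datetime

# Alternate virtual full backups between two pools by ISO week parity,
# so each week's consolidation reads last week's pool and writes to the
# other one, never appending to the tape it needs to read.
def next_full_pool(day):
    week = day.isocalendar()[1]
    return "Full-Even" if week % 2 == 0 else "Full-Odd"

print(next_full_pool(datetime.date(2009, 11, 17)))
```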
[Bacula-users] Help for a special query : Find all double same file in a job
Hi all. At a customer site, we suspect that the same file (identical MD5 checksum) is stored multiple times in several places inside the data. This is confirmed by the fact that we have ~2.5 million files for a job and only ~1.3 million distinct MD5 checksums. I've built a query which retrieves all the information (md5, filename, path) from the db (we use a MySQL 5.0 server, MyISAM tables): SELECT File.MD5, Filename.Name, Path.Path FROM File, Filename, Path WHERE File.JobId=3797 AND Filename.FilenameId=File.FilenameId AND Path.PathId=File.PathId ORDER BY File.MD5, Filename.Name, Path.Path This works, but I'm facing trouble exporting it to a csv file with correct encoding (utf-8 filesystem) to allow us to insert these records into another db/worksheet... Does someone have a trick for that? PS: finally, I would love to have the file size extracted from the LStat field, if possible directly in the SQL. If I remember correctly, someone has developed this, but only for PostgreSQL. Regards -- Bruno Friedmann
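I can't speak for your exact MySQL setup, but the general shape of the task (group rows by checksum, keep only checksums that occur more than once, and write the result as UTF-8 CSV) can be sketched with Python's stdlib against a throwaway SQLite stand-in. Table and column contents here are toy data; only the MD5/name/path shape follows your query:

```python
import csv
import sqlite3

# Toy stand-in for the catalog: two files sharing one MD5, one unique file.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (md5 TEXT, name TEXT, path TEXT)")
con.executemany("INSERT INTO files VALUES (?, ?, ?)", [
    ("abc", "report.pdf", "/home/a/"),
    ("abc", "report-copy.pdf", "/home/b/"),
    ("xyz", "unique.txt", "/home/a/"),
])

# Keep only checksums that occur more than once, ordered like the original
# query, then dump the duplicate rows as UTF-8 CSV.
rows = con.execute("""
    SELECT md5, name, path FROM files
    WHERE md5 IN (SELECT md5 FROM files GROUP BY md5 HAVING COUNT(*) > 1)
    ORDER BY md5, name, path
""").fetchall()

with open("duplicates.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)

print(len(rows))  # number of rows involved in duplicates
```

On a real MySQL server, SELECT ... INTO OUTFILE with an explicit CHARACTER SET clause is the usual server-side route to the same CSV, but check its behaviour on your 5.0 install before relying on it.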