Re: [Bacula-users] director won't start
Hold on... You said the original Bacula director box died, Puppet restored the configs, and you restored the DB from the source skel files? Perhaps restoring a copy of your Bacula database from before the crash will help with the missing-catalog mystery :)

--eddie

On 03/27/2015 09:08 AM, Tim Dunphy wrote:

Hi Josip,

  I wonder if your Bacula director is reading the correct config file.
  Could you check that your bacula-dir is down and try to start bacula-dir
  manually with debugging enabled? For example:

    bacula-dir -c /etc/bacula/bacula-dir.conf -d 200 -f

  That should show some debug output, including the lines where Bacula is
  trying to connect to the database. It might be helpful to see the last
  25 lines of the output, but note that it will print the database
  password, so you might want to replace it with something else.

OK, I've verified that the director is down and only the storage and file daemons are running:

  [root@ops:~] # ps -ef | grep bacula | grep -v grep
  root     12915     1  0 Mar26 ?  00:00:00 bacula-fd -c /etc/bacula/bacula-fd.conf -u root -g root
  bacula   26664     1  0 00:21 ?  00:00:00 bacula-sd -c /etc/bacula/bacula-sd.conf -u bacula -g disk

And this is what I get when I run that command:

  [root@ops:~] # bacula-dir -c /etc/bacula/bacula-dir.conf -d 200 -f
  bacula-dir: dird.c:194-0 Debug level = 200
  bacula-dir: address_conf.c:264-0 Initaddr 0.0.0.0:9101
  bacula-dir: runscript.c:284-0 runscript: debug
  bacula-dir: runscript.c:285-0 -- RunScript
  bacula-dir: runscript.c:286-0 -- Command=/etc/bacula/make_catalog_backup.pl JokefireCatalog
  bacula-dir: runscript.c:287-0 -- Target=
  bacula-dir: runscript.c:288-0 -- RunOnSuccess=1
  bacula-dir: runscript.c:289-0 -- RunOnFailure=0
  bacula-dir: runscript.c:290-0 -- FailJobOnError=1
  bacula-dir: runscript.c:291-0 -- RunWhen=2
  bacula-dir: runscript.c:284-0 runscript: debug
  bacula-dir: runscript.c:285-0 -- RunScript
  bacula-dir: runscript.c:286-0 -- Command=/etc/bacula/delete_catalog_backup
  bacula-dir: runscript.c:287-0 -- Target=
  bacula-dir: runscript.c:288-0 -- RunOnSuccess=1
  bacula-dir: runscript.c:289-0 -- RunOnFailure=0
  bacula-dir: runscript.c:290-0 -- FailJobOnError=1
  bacula-dir: runscript.c:291-0 -- RunWhen=1
  bacula-dir: jcr.c:128-0 read_last_jobs seek to 192
  bacula-dir: jcr.c:135-0 Read num_items=10
  bacula-dir: dir_plugins.c:148-0 Load dir plugins
  bacula-dir: dir_plugins.c:150-0 No dir plugin dir!
  bacula-dir: dird.c:972-0 Could not open Catalog JokefireCatalog, database bacula.
  bacula-dir: dird.c:977-0 Query failed: SELECT VersionId FROM Version: ERR=no such table: Version
  27-Mar 12:04 bacula-dir ERROR TERMINATION
  Please correct configuration file: /etc/bacula/bacula-dir.conf

It looks like it can't find a catalog by the name JokefireCatalog, but it used to be there before some data got deleted. So is this a physical file on the file system that it's not finding, or an entry in the database that it's not finding? Any ideas how to correct this?

Thanks!!
Tim

On Fri, Mar 27, 2015 at 11:47 AM, Josip Deanovic <djosip+n...@linuxpages.net> wrote:

Quoting message written on Friday 2015-03-27 11:34:29:

  Yep! That works!

    [root@ops:~] # mysql -ubacula -p -h localhost
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 32
    Server version: 5.5.42 MySQL Community Server (GPL) by Remi

    Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql>

  Still need some ideas, unfortunately! :(

I wonder if your Bacula director is reading the correct config file. Could you check that your bacula-dir is down and try to start bacula-dir manually with debugging enabled? For example:

  bacula-dir -c /etc/bacula/bacula-dir.conf -d 200 -f

That should show some debug output, including the lines where Bacula is trying to connect to the database. It might be helpful to see the last 25 lines of the output, but note that it will print the database password, so you might want to replace it with something else.

--
Josip Deanovic
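Since the error above is a missing Version table rather than a failed connection, one quick check is whether the Bacula schema exists in the catalog database at all. A minimal sketch, assuming a MySQL catalog database named `bacula` and the stock `make_mysql_tables` helper; the function name, credentials, and paths are placeholders, so adjust to your site:

```shell
# Sketch: report whether the Bacula schema exists in the catalog database.
# Credentials, database name, and helper path are assumptions.
catalog_has_schema() {
    # -N skips the column header; the Version table is created by Bacula's
    # make_mysql_tables script, so its absence means the schema is gone.
    mysql -ubacula -p"$1" -h localhost bacula -N -e 'SHOW TABLES;' \
        | grep -qx 'Version'
}

# Example (commented out): recreate the schema only if it is missing.
# catalog_has_schema "$DB_PASS" || /usr/libexec/bacula/make_mysql_tables -u bacula -p"$DB_PASS"
```

Recreating the schema gives you an empty catalog, so restoring a pre-crash database dump (as suggested above) is still the better fix if one exists.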
Re: [Bacula-users] Bacula Catalog Server - MySQL - 1 Core Used Max ?
And there is always the my.cnf configurator from the nice folks at Percona. No, that isn't a plug for them; I don't work for them, I just use the tool.

https://tools.percona.com/wizard

On 03/25/2015 04:21 AM, Phil Stracchino wrote:

  On 03/25/15 02:46, Kern Sibbald wrote:

    For best performance the DB must be tuned, as the default MySQL values
    are not optimal for Bacula.

  Honestly, compiled-in MySQL defaults are far from optimal for almost
  anything.

--
Dive into the World of Parallel Programming! The Go Parallel Website, sponsored by Intel and developed in partnership with Slashdot Media, is your hub for all things parallel software development, from weekly thought leadership blogs to news, videos, case studies, tutorials and more. Take a look and join the conversation now. http://goparallel.sourceforge.net/

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
[Bacula-users] Bacula 7.0.5 config issues multiple autochangers
Hi list,

I have been having a rough time over the last year with this Bacula installation; hopefully someone can show me the noob failures I've made thus far. My setup is rather basic: a single director which mounts remote filesystems and backs up ~40 TB/week with a single file/storage daemon. I recently added an additional autochanger to the mix, to speed up write times and prevent deadlocks on writers when running quarterly restores.

My problem: Bacula confuses the tape slots between the changers. In changer A the tape is in the expected pool, but Bacula will request that the tape be mounted from the same-numbered slot in changer B, where the tape is not. I use two Quantum devices, a SuperLoader3/LTO5 and a Scalar i40/LTO5. The relevant config sections can be found at: http://pastebin.com/QVSwbrN0

Thanks in advance.
Re: [Bacula-users] Simultaneously backup to local HDD and Amazon S3
On 5 October 2012 02:35, John Drescher <dresche...@gmail.com> wrote:

  On Thu, Oct 4, 2012 at 8:59 PM, Pubudu Perera <suharsha...@gmail.com> wrote:

    Hello everyone,

    I'm a newbie to Bacula and want to verify whether my requirement can be
    met using Bacula. I want to know whether it's possible to simultaneously
    make backups to a local HDD and Amazon S3 from the same source in
    Bacula. Can someone please help me with this? Thanks in advance.

  I would back up to the local drive, then mirror that with rsync to S3 via
  the s3fs FUSE filesystem.

  John

From my experience, s3cmd is much more reliable than s3fs. There's a sync command in s3cmd that acts like rsync; it's probably more efficient with bandwidth as well. You could run s3cmd in a post-job script and set the job to error if the s3cmd sync command fails.

Ed
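The post-job s3cmd sync wired to fail the job, as suggested above, can be sketched roughly as below. The wrapper function, paths, bucket name, and the RunScript wiring in the comment are all illustrative assumptions, not tested config:

```shell
# Sketch of a post-job wrapper: mirror the local volume directory to S3 and
# propagate s3cmd's exit status so Bacula can fail the job on error.
# The source path and bucket name are placeholders.
sync_volumes_to_s3() {
    local src="$1" bucket="$2"
    # --no-delete-removed keeps remote copies of locally pruned volumes;
    # drop it if you want a strict mirror.
    s3cmd sync --no-delete-removed "$src/" "s3://$bucket/" || return 1
}

# In the Job resource this could be wired up along these lines (assumption):
#   RunScript {
#     RunsWhen = After
#     RunsOnFailure = No
#     FailJobOnError = Yes
#     Command = "/etc/bacula/scripts/sync_to_s3.sh /backupstore/volumes my-bucket"
#   }
```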
Re: [Bacula-users] Amazon Glacier
I've been using Bacula with S3 via s3fs for about six months. To say it has been flaky is an understatement, and I plan to make changes over the coming months, although still using S3/Glacier. s3fs is a filesystem for S3; I believe it is the only free one (it's GPLv2).

Backups: Small backups are fine, but large backups taking ~2 days inevitably have problems (i.e. connection drops), and neither s3fs nor Bacula is good at resuming transfers/backups. Large backups need to be stored locally and then transferred to S3 post-job. It's also extra important to back up the database straight after the initial full backup, as if something goes wrong you'll have to spend a long time re-uploading/downloading full backups.

Restores: Restoring directly using s3fs is only possible if the volume is completely cached locally by s3fs. I've never managed a restore that hasn't had the entire volume manually downloaded (either filling the cache or just cheating and remounting), so using a small volume size is essential. I look at the volumes required when using the restore command and manually get them to cache.

My future plan: Ditch s3fs; it's not reliable enough. Instead I plan to use a combination of s3cmd and a simple post-job script to verify correct transfer. s3cmd is just a cp/mv/rsync kind of tool and so doesn't suffer from the issues s3fs does. My only concern is correctly verifying that transfers have been successful.

I've never used the AWS Storage Gateway; the economics don't work in my case, since I run multiple small sites and it's $125 a head. This might be an option for some, but I still think Bacula's inability to resume backups would be a problem.

Perhaps if Bacula had a built-in cache, copy, and verify mechanism, use of such offsite backup services would be easier and safer. I.e., do the backup to local disk (to avoid large jobs taking weeks/months and potentially bombing halfway through), copy these volumes to another location, and then verify the copy. A similar procedure in reverse would also remove some headaches for restorations.

Edward

On 21 August 2012 15:39, Mr IT Guru <misteritg...@gmx.com> wrote:

  Good afternoon all,

  On 21 Aug 2012, at 13:38, eric santelices <ericsa...@gmail.com> wrote:

    With Amazon's announcement of Glacier, it seems like a natural fit for
    Bacula. Does anyone know if/when we would see support for this storage
    resource?

  If you wanted to utilize this, wouldn't it be better to have the OS mount
  this service as a file system, and then just direct Bacula to use the
  file system? Rather than coding specifically for Bacula, that is. I'm
  sure Bacula is not going to be the only project that can benefit from
  this, so I'm betting that support for this will appear in the underlying
  OS first.
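On the "verifying transfers" concern raised above, one cheap post-upload check is to compare the local file size against what `s3cmd ls` reports for the uploaded key. This is only a sketch: the `s3cmd ls` column layout (date, time, size, URI) is an assumption about the s3cmd version in use, and an MD5 comparison would be a stronger check:

```shell
# Sketch: compare local vs remote size after an s3cmd upload. The output
# format assumed for `s3cmd ls` is: DATE TIME SIZE s3://bucket/key
verify_upload_size() {
    local file="$1" uri="$2"
    local local_size remote_size
    # Arithmetic expansion strips any padding wc adds around the count.
    local_size=$(( $(wc -c < "$file") ))
    remote_size=$(s3cmd ls "$uri" | awk '{print $3; exit}')
    [ "$local_size" -eq "$remote_size" ]
}
```

A post-job script could call this for each volume written and exit non-zero on mismatch, so the job is flagged for retry.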
Re: [Bacula-users] Tried to recover from recently purged Volume by bscan
Ah sorry, I didn't see your reference to the GUI app; I've not used that. If you're using bconsole you must add files using `mark` and then, when done, issue `done`.

I've never recovered using bscan before, but according to the manual you can:
http://www.bacula.org/manuals/en/utility/utility/Volume_Utility_Tools.html#bscan

As suggested in the manual, any modification to the database should be preceded by a database backup, e.g. with mysqldump.

On 14 August 2012 23:39, CDuv <bacula-fo...@backupcentral.com> wrote:

  I did drag'n'drop a whole directory into the bottom area of the brestore
  panel (in the Bat software).

  Can bscan insert files that aren't available anymore into the catalog?

  +--
  |This was sent by c.duvergier@online.fr via Backup Central.
  |Forward SPAM to ab...@backupcentral.com.
  +--
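The dump-then-bscan sequence recommended above can be sketched as a small wrapper. The database credentials, SD config path, and the "FileStorage" device name are placeholders, and the bscan flags (-s to create File records, -m to update media info, -v for verbose) are worth double-checking against your version's manual:

```shell
# Sketch: snapshot the catalog, then let bscan rebuild records from a
# volume. DB_PASS, the config path, and the device name are assumptions.
rescan_volume() {
    local volume="$1"
    # Never let bscan touch the catalog without a fresh dump to roll back to.
    mysqldump -ubacula -p"$DB_PASS" bacula \
        > "${TMPDIR:-/tmp}/bacula-pre-bscan.sql" || return 1
    bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V "$volume" FileStorage
}
```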
Re: [Bacula-users] Accurate mode problem, multiple jobs on one client
On 08.08.2012 00:30, Edward wrote:

  Dir/SD: openSUSE 11.4 (x86_64) - v5.0.3 (from rpm)
  Client: Solaris 10 x86 - v5.0.3

  I am having problems with an incremental backup using the accurate
  option.

The accurate option fails with your setup. It was not designed for that; it must fail if it honors its purpose. If the file set changes, it must consider the missing files deleted and mark them as such. If they suddenly appear again, it must back them all up again.

--jarif

*Sorry, I've been having problems with the mailing list, hence the delay.*

Indeed, what you describe is how I expect and want accurate mode to work. What I should have made clearer is that the files which are suddenly backed up again have ALWAYS been supplied by the script. I know this for certain from looking at the output log of the script, as well as from thoroughly testing the script. The script selects files based on includes and excludes within directories, which have not changed recently. The job runs fine until another job on that client, with a separately defined fileset, is run. After that, files which were included with the previous jobs and are still included with this job are backed up again.

Thanks,
Ed

Quoting the original message:

  I have two jobs for the client, both of which use a script to generate a
  file list of individual files (no recursion). The scripts can give
  different lists for each job and are specified in separate filesets. One
  job is for offsite backup, with around 70 GB for a full backup, and one
  is a local backup of around 600 GB. This setup has been working happily
  for months. No tapes are involved, files only.

  All had been working well until suddenly I noticed the offsite backup job
  was backing up over 1 GB one day, the normal being around 10-20 MB.
  Naturally I went on the hunt for the misplaced large file, but I didn't
  find anything. Looking at the logs I noticed that the jobs had backed up
  files that had been there since the original full backup.

  Eventually I ended up rolling back the database to check if this solved
  the problem, perhaps a corruption. This seemed to work; the manually
  triggered offsite job ran as expected. The next morning, same problem. I
  eventually discovered, after rolling back the database quite a few more
  times, that after the other job for that client ran (the local job) the
  offsite job would then go wrong and I needed to roll back.

  I have also tested creating a dummy job for that client which doesn't use
  accurate and whose fileset is a single file. Again, running this causes
  the specified behaviour. Running a job for another client doesn't cause
  any problems. To clarify, the jobs are still showing up as incremental;
  this isn't a retention issue.

  I've set the file daemon to debug and seen that, for the files it's
  backing up which shouldn't be backed up, accurate.c reports them as
  (not found). It seems as though the accurate list is disappearing or
  being altered by running another job on the client.

  Here are the job and fileset definitions:

    # Offsite
    Job {
      Name = OffsiteBackup
      Type = Backup
      Level = Incremental
      Client = elephant-fd
      FileSet = elephant_offsite
      Storage = File-offsite
      Messages = Standard
      Pool = File-offsite-pool
      Priority = 10
      # Bootstrap file (use network directory)
      Write Bootstrap = /backupstore/bootstraps/%c_%d_%n_%i_%v.bsr
      # Use accurate mode
      Accurate = yes
      Schedule = daily-night_inc
    }

    # Offsite
    FileSet {
      Name = elephant_offsite
      Include {
        File = "\\|/usr/bin/perl /export/sysadmin/configuration_files_and_scripts/bacula/bin/backup_list.pl /export/sysadmin/configuration_files_and_scripts/bacula/config/backup_list_offsite.config"
        Options {
          signature = MD5
          # When explicitly expressing directories use recurse=no
          # Include ACL support
          aclsupport = yes
          # on storage daemon at the filesystem level
          compression = GZIP6
          # Use better accurate mode: size, times (mod, change).
          Accurate = smc
        }
      }
    }

  Thanks,
  Ed
Re: [Bacula-users] Tried to recover from recently purged Volume by bscan
Did you select any files using `mark` during the restore?

On 14 August 2012 21:18, CDuv <bacula-fo...@backupcentral.com> wrote:

  Hi,

  I'm having an issue for which I couldn't find answers. A Full backup job
  was pruned by Bacula (obviously during yesterday night, because it was
  there yesterday and gone today) and I need it (for the Incrementals to
  work, actually).

  After some research I decided to run bscan on the volumes listed in the
  .bsr files of the relevant fd client. It did find things and added them
  to my Catalog: I can tell because the missing Full backup is now listed
  in brestore's Bat panel, and bconsole's restore is not complaining about
  "No Full backup found prior to..." anymore.

  But the problem is that when trying to restore I get "No files selected
  to be restored", as if the volume were empty.

  Am I doomed? Is there a way for me to get my backup back?
[Bacula-users] Accurate mode problem, multiple jobs on one client
Dir/SD: openSUSE 11.4 (x86_64) - v5.0.3 (from rpm)
Client: Solaris 10 x86 - v5.0.3

I am having problems with an incremental backup using the accurate option.

I have two jobs for the client, both of which use a script to generate a file list of individual files (no recursion). The scripts can give different lists for each job and are specified in separate filesets. One job is for offsite backup, with around 70 GB for a full backup, and one is a local backup of around 600 GB. This setup has been working happily for months. No tapes are involved, files only.

All had been working well until suddenly I noticed the offsite backup job was backing up over 1 GB one day, the normal being around 10-20 MB. Naturally I went on the hunt for the misplaced large file, but I didn't find anything. Looking at the logs I noticed that the jobs had backed up files that had been there since the original full backup.

Eventually I ended up rolling back the database to check if this solved the problem, perhaps a corruption. This seemed to work; the manually triggered offsite job ran as expected. The next morning, same problem. I eventually discovered, after rolling back the database quite a few more times, that after the other job for that client ran (the local job) the offsite job would then go wrong and I needed to roll back.

I have also tested creating a dummy job for that client which doesn't use accurate and whose fileset is a single file. Again, running this causes the specified behaviour. Running a job for another client doesn't cause any problems. To clarify, the jobs are still showing up as incremental; this isn't a retention issue.

I've set the file daemon to debug and seen that, for the files it's backing up which shouldn't be backed up, accurate.c reports them as (not found). It seems as though the accurate list is disappearing or being altered by running another job on the client.

Here are the job and fileset definitions:

  # Offsite
  Job {
    Name = OffsiteBackup
    Type = Backup
    Level = Incremental
    Client = elephant-fd
    FileSet = elephant_offsite
    Storage = File-offsite
    Messages = Standard
    Pool = File-offsite-pool
    Priority = 10
    # Bootstrap file (use network directory)
    Write Bootstrap = /backupstore/bootstraps/%c_%d_%n_%i_%v.bsr
    # Use accurate mode
    Accurate = yes
    Schedule = daily-night_inc
  }

  # Offsite
  FileSet {
    Name = elephant_offsite
    Include {
      File = "\\|/usr/bin/perl /export/sysadmin/configuration_files_and_scripts/bacula/bin/backup_list.pl /export/sysadmin/configuration_files_and_scripts/bacula/config/backup_list_offsite.config"
      Options {
        signature = MD5
        # When explicitly expressing directories use recurse=no
        # Include ACL support
        aclsupport = yes
        # on storage daemon at the filesystem level
        compression = GZIP6
        # Use better accurate mode: size, times (mod, change).
        Accurate = smc
      }
    }
  }

Thanks,
Ed
Re: [Bacula-users] Unusually high compression ratio/capacity for LTO-4 tapes?
I also used to think that the upper number in the tape rating was a hard limit; it is not. You just happen to have data in your file system that is HIGHLY compressible. Congrats.

On Wed, 8 Feb 2012, Josh Nielsen wrote:

  Hello,

  I am relatively new to tape backups in general, have recently become
  accustomed to using Bacula, and have a quick question about compression
  ratios/storage capacity on LTO tapes. I have an IBM 24-tape library with
  Sony Ultrium LTO-4 tapes (rated 800 GB native / 1,600 GB compressed). I
  recently set up a job for a full backup of one of our servers that has a
  little over 3 TB of disk capacity. A df -h on that server yields:

    Filesystem                       Size  Used Avail Use% Mounted on
    /dev/mapper/VolGroup00-LogVol00  3.3T  2.2T  983G  70% /

  My question relates to this: from a fresh install of Bacula (5.0.3) I
  created a pool for monthly backups and ran a full backup of the server
  above. All the tapes were marked empty, and the backup started by
  picking a tape from the designated pool and backed up successfully to
  it, but I fully expected it to span _two_ tapes, with 1.6 TB as the
  supposed maximum rated compressed capacity for LTO-4 tapes. However, it
  fit _all of it_ onto a single tape.
  Here is an excerpt from the job output:

    Storage:              IBM_Autochanger (From Job resource)
    Scheduled time:       07-Feb-2012 10:26:00
    Start time:           07-Feb-2012 10:26:02
    End time:             07-Feb-2012 21:08:39
    Elapsed time:         10 hours 42 mins 37 secs
    Priority:             10
    FD Files Written:     458,656
    SD Files Written:     458,656
    FD Bytes Written:     2,362,329,795,537 (2.362 TB)
    SD Bytes Written:     2,362,399,227,266 (2.362 TB)
    Rate:                 61268.5 KB/s
    Software Compression: None
    VSS:                  no
    Encryption:           no
    Accurate:             no
    Volume name(s):       ML1038L4
    Volume Session Id:    1
    Volume Session Time:  1328631897
    Last Volume Bytes:    2,364,166,103,040 (2.364 TB)

  And 'list media' showed the pool as follows:

    Pool: Monthly
    MediaId  VolumeName  VolStatus  Enabled  VolBytes           VolFiles  VolRetention  Recycle  Slot  InChanger  MediaType  LastWritten
    19       ML1037L4    Append     1        64,512             0         31,536,000    1        19    1          LTO-4      0000-00-00 00:00:00
    20       ML1047L4    Append     1        64,512             0         31,536,000    1        20    1          LTO-4      0000-00-00 00:00:00
    21       ML1044L4    Append     1        64,512             0         31,536,000    1        21    1          LTO-4      0000-00-00 00:00:00
    22       ML1041L4    Append     1        64,512             0         31,536,000    1        22    1          LTO-4      0000-00-00 00:00:00
    23       ML1038L4    Append     1        2,364,166,103,040  2,365     31,536,000    1        23    1          LTO-4      2012-02-07 21:08:17

  I actually calculated 2,364,166,103,040 bytes to be 2.15 TB, but either
  way this is much higher than the rated 1.6 TB with (theoretically)
  maximum compression, as I understand it. Not until I ran another
  relatively tiny backup afterward (around 90 GB) of something else did it
  fill the first tape and start to write the remaining 80 GB or so onto a
  second tape. The job output above says there was no software compression
  being used, and unless it is a default I have done nothing to enable (or
  disable) tape compression on the IBM library itself.
  Has anyone heard of getting more capacity out of an LTO-4 tape than it
  is rated for? Or are the byte amounts possibly inflated by artificially
  counting skipped-over file systems? I got several messages like "/boot
  is a different filesystem. Will not descend from / into /boot", but you
  would think that it wouldn't count those in the overall storage amount.
  I essentially just want to know if these figures are real, and whether
  I'm just getting an awesome compression ratio or something else is
  going on.

  Thanks!
  Josh
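A side note on the "2,364,166,103,040 bytes = 2.15 TB" calculation above: both figures are right, just in different units. Dividing by 10^12 gives the decimal terabytes Bacula (and tape vendors) quote, about 2.36 TB; dividing by 1024^4 gives binary tebibytes, about 2.15 TiB. Neither implies more than ~2.36 TB of data landed on the tape:

```shell
# Decimal TB (vendor/Bacula convention) vs binary TiB for the byte count
# in the job report above.
awk -v b=2364166103040 'BEGIN {
    printf "decimal TB : %.2f\n", b / 1e12
    printf "binary TiB : %.2f\n", b / 1024^4
}'
```

So the tape holds roughly 2.36 decimal TB here, which against the 800 GB native rating implies the drive's hardware compression is doing very well on this data set.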
Re: [Bacula-users] Hardware Encryption?
Here at work we do not use Bacula, so I can't speak to compatibility, but we use the following: we started with NetApp DataFort inline appliances. We have around 30 of them deployed; they work, mostly, but require more maintenance than I would like. Our second-generation solution for inline encryption uses LTO-4 and Sun T10000 native encryption.

Ed M.

On Wed, 5 Jan 2011, Mingus Dew wrote:

  Fellow Bacula users,

  I am in search of a hardware-based encryption solution that is
  OS/storage independent, whether it be a new drive or something else,
  even Bacula compatible. The encryption provided within Bacula itself is
  not sufficient for my needs, and during my last search I was unable to
  find an LTO-4 drive with encryption that was capable of encrypting tapes
  without interaction from the backup program (essentially only being
  supported by the software of the vendor itself). So some responses from
  fellow users on what you're doing for encryption of tapes outside of
  Bacula would be great.

  Thanks,
  Shon
Re: [Bacula-users] Unable to position to end of data on device
Is this happening with this one tape only, or with all tapes you try in this particular drive?

If the problem is only with this tape: is the tape old and has it been used a lot? If so, then the tape has reached/passed the end of its useful life. If this tape drive is in a library, check the library for events; it can provide more info. LTO tape drives track the number of errors, and I think tape usage, per cartridge, and will throw this error if the tape falls below a quality threshold. I think the usage counts are written into the RFID chip that is inside the cartridge. When we get these messages we retire the tape.

Hope this helps.
Ed M.

On Thu, 11 Nov 2010, Alexander Kosykh wrote:

  Hi. I run Bacula on FreeBSD 8.0-RELEASE (ProLiant ML370 G3) with an HP
  MSL6000 ULTRIUM960. My Bacula server was hard-reset while Bacula was
  writing to tape. When the server came back up and Bacula started
  working, it marked the tape in error with this log message:

    11-Nov 13:41 backup-sd JobId 401: Error: Unable to position to end of
    data on device ULTRIUM960 (/dev/sa0): ERR=dev.c:956 ioctl MTEOM error
    on ULTRIUM960 (/dev/sa0). ERR=Input/output error.

  I tested this tape with mt and got this:

    # mt -f /dev/sa0 eod
    mt: /dev/sa0: eod: Input/output error

  How can I fix that tape to keep the data that is already on it and be
  able to append data till the end of the tape?

  Regards,
  Alexander
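When judging whether a tape or drive is failing, the drive's own TapeAlert flags are worth reading; the `tapeinfo` utility from the mtx package can query them over the SCSI pass-through device. A sketch; the device path (e.g. /dev/pass2 on FreeBSD, /dev/sg1 on Linux) and the exact output format are assumptions that vary by platform:

```shell
# Sketch: print any TapeAlert lines the drive reports, or a note if none.
# The pass-through device path passed as $1 is an assumption.
drive_alerts() {
    tapeinfo -f "$1" | grep -i 'tapealert' || echo "no TapeAlert flags raised"
}
```

A cartridge that repeatedly raises media-related TapeAlert flags is a good candidate for retirement, per the advice above.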
[Bacula-users] How to run 1 backup job to tape with duplicate to a 2nd tape
I have a customer that wants to back up a filesystem that is currently a bit over 4 TB used and grows by up to 300 GB per week. The growth is new files; existing files do not change. Each backup run (full/diff/inc) is to go to two sets of tapes: one to keep in a vault onsite, one to go offsite. The expectation is for these tapes to never expire, and restores are to come from either side of what amounts to a tape mirror set, with the local set requested first. The data files will get pruned by age, so the disk requirement will not grow indefinitely, just the tape requirement. The customer wants to ship the offsite tape only when it's full; I'd prefer to ship it as it is used; I want an implementation that supports either shipping schedule. They currently have one (fairly idle) LTO-3 tape drive for this project, and the current single-tape backup of this filesystem has plenty of time to run. The existing backup of this filesystem was one full then incrementals. I read through the Bacula archives and did not find anything that quite fit this request. I looked at copy jobs (the easiest way to implement the request, IMO), but the implication was that they require two drives. I have considered two implementation methods. 1. Back up to disk volumes, then run one copy job to the onsite tape pool and one migrate job to the other tape pool. My questions about this: A. What to do about the existing data in the filesystem? We have enough spool disk to put the weekly growth onto disk and then do the copy and migrate, but not the existing data. B. How to ensure that any restore will first request the onsite tape? C. In the case of the onsite tape restore failing, how to specify the offsite tape (once it is delivered, of course)? 2. Modify the existing location that these files get copied into: have a script pick up the files from the new directory, move them into a backup dir, hard-link them to the space the users expect to find them in, and run two fulls on this new space.
After the two full jobs finish, remove the hard links in the backup dir and ship the offsite tape. Issues here: A. How do I ensure that both full backups are successful before the links are removed? B. On restores, how do I ensure the local tape is requested? See items B and C above. Am I setting the catalog up for disaster with either method? Any thoughts on which is the better/cleaner implementation, or suggestions for a better one? Thanks Ed M.
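For what the Copy half of method 1 might look like in the config, here is a sketch. All the names (DiskPool, OnsiteTape, CopyToOnsite, dummy-fd, DummySet, AfterBackup) are invented for illustration; only the directives themselves are standard Bacula 3.x. Note that Next Pool lives on the *source* pool, which is exactly why pointing a Copy job and a Migrate job at two different tape pools from one disk pool takes extra care:

```
# All resource names below are hypothetical -- adjust to your site.
Pool {
  Name = DiskPool
  Pool Type = Backup
  Storage = FileStorage
  Next Pool = OnsiteTape          # where copied jobs land
}
Job {
  Name = CopyToOnsite
  Type = Copy
  Selection Type = PoolUncopiedJobs   # pick up anything not yet copied
  Pool = DiskPool
  Client = dummy-fd                   # required by the parser, not used
  FileSet = DummySet
  Schedule = AfterBackup
  Messages = Standard
}
```

The migrate job to the offsite pool would be analogous (Type = Migrate), but since it needs a different Next Pool, you may end up with a second disk-pool definition or run-time overrides to steer it.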
Re: [Bacula-users] tape size speed LTO-1 drive
Hi and welcome. I'm also new to using Bacula, but hopefully I can offer some useful info :-) Tape capacity: as you have seen, LTO-1 tapes have an uncompressed capacity of 100 GB; allowing for Bacula overhead, you should see only slightly under 100 GB on each cartridge. I have seen list traffic indicating that any write error is treated as EOT. Question to the group: are there any failures from write(2) that do not show up in the system logs? How old are these cartridges; is it possible that they have deteriorated? Run the btape tests on a set of these cartridges, use the fill command in btape, do a multi-tape test, and record the output. btape will show both the number of bytes written and the speed. Backup throughput: while 6 MB/s is slow for LTO-1, one of the biggest things that I have seen affect throughput is the rate at which data can be pulled from the source disks. My own system can serve up data at about 15 MB/s, not even close to enough to keep the LTO-3 drive I am using happy. If you have not already done so, set up a spool disk; I use a dedicated mirrored pair of SATA drives and am able to see speeds to tape of about 45 MB/s. While a spool area will not reduce the total time of the backup job, and may even increase it, it will allow streaming data to the drive at the fastest possible speed, which reduces shoe-shining as much as possible and thus wear and tear on the tape and drive parts. Hope the above helps. Ed M. On Mon, 30 Nov 2009, Jens Froehlich wrote: Hi bacula-users, I have a problem with my Bacula (3.0.2) installation on openSUSE 11.1 (32-bit). The LTO-1 tapes are only written about half full, yet 100 GB should fit on them. I have already tried different values of the Minimum Block Size and Maximum Block Size parameters, unfortunately without success. If I write the tapes with tar, I get the full 100 GB. I also find the write speed of 6 MB/s a little slow.
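To put those rates in perspective, the arithmetic is simple enough to do in a one-liner. A back-of-envelope sketch (pure arithmetic, no assumptions about your hardware): 6 MB/s is the poster's observed speed, and 15 and 45 MB/s are the disk-source and spooled figures mentioned above.

```shell
# Hours needed to stream 100 GB (= 102400 MB) at a sustained rate.
hours="$(for rate in 6 15 45; do
  awk -v r="$rate" 'BEGIN { printf "%2d MB/s -> %.1f hours per 100 GB\n", r, 100*1024/(r*3600) }'
done)"
printf '%s\n' "$hours"
```

At 6 MB/s a full LTO-1 cartridge takes nearly five hours, which is why keeping the drive streaming (spooling) matters so much.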
Pool: DailyTAPE
 DailyTAPE-0001 | Full   | 1 | 54,780,100,608 | 1 | 518,400 | 1 |  1 | 1 | LTO-1
 DailyTAPE-0002 | Full   | 1 | 53,212,479,488 | 1 | 518,400 | 1 |  2 | 1 | LTO-1
 ...
 DailyTAPE-0010 | Append | 1 |        131,072 | 0 | 518,400 | 1 | 10 | 1 | LTO-1
What must I do so that Bacula writes at least 90-95 GB to the tapes? Can somebody help me here? Bye, Jens. Sorry for my bad English :-(
Re: [Bacula-users] SOT: When to update ...
On 10/25/2009 09:34 PM, ReynierPM wrote: Hi everyone: I have been watching Bacula for about a year now and I'm very grateful for this tool and the community behind the software. I have Bacula 3.0.2 working on a production server. Now Bacula 3.0.3 is out with bug fixes. My question is: when should I update? I mean, which guidelines do you (administrators or people who work with Bacula) follow to upgrade your Bacula server? Cheers and thanks in advance. I have seen three schools of thought on this... 1. If it ain't broke, don't fix it: if it is doing what you need and doing it well, don't mess with it. 2. Upgrade on every little patch. 3. Upgrade if it has a patch you are waiting on, or security fixes. I tend to be in either the 1 or 3 camp, but it's really each administrator's call within any greater policy of the organization they are supporting. As always, it is a bad idea to hold off upgrading for so long that you end up being unsupportable. Hope this helps Ed M.
Re: [Bacula-users] New autochanger adds L3 after tape barcode
Check the autochanger user doc; many changers have a setting that will cause them to stop reporting the cartridge-type field. On Mon, 19 Oct 2009, Adam Cécile wrote: Hello, I just replaced an autochanger and the new one adds L3 at the end of each tape label (i.e. XY becomes XYL3). What can I do to keep the same names as before? Thanks in advance, Regards, Adam. -- Adam Cécile Mandriva / Linbox 152, rue de Grigy - Technopole Metz 57070 METZ - FRANCE tel: +33 (0)3 87 50 87 90 http://mandriva.com
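If the changer can't be told to drop the type field, the suffix can also be stripped wherever labels pass through a script (e.g. a barcode-to-label wrapper). A small sketch; "XY47L3" is a made-up barcode of the pattern the poster describes, and the `L[0-9]` pattern assumes the suffix is always an L plus one digit:

```shell
# Strip a trailing LTO cartridge-type suffix (L1..L9) from a barcode.
label="XY47L3"
stripped="$(printf '%s\n' "$label" | sed 's/L[0-9]$//')"
printf '%s\n' "$stripped"   # -> XY47
```

Note this only helps for labels you haven't written yet; volumes already in the catalog keep whatever name they were labeled with.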
[Bacula-users] Bacula thinks empty tapes are full
Hey all, I've been using Bacula to back up 7 or 8 servers for a while now and have been fairly pleased. I back up to an external autochanger. I started with 20 empty tapes. Now, halfway through, Bacula is convinced that the rest of the tapes are full, but they aren't... they've never been written to. Does anyone have any suggestions on how to convince Bacula that these tapes aren't full?
*list volumes
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Pool: Default
| MediaId | VolumeName | VolStatus | Enabled | VolBytes        | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
|       1 | KHA000L1   | Full      |       1 | 247,597,894,656 |      548 |   31,536,000 |       1 |    1 |         1 | LTO-1     | 2008-11-03 01:07:34 |
|       2 | KHA001L1   | Full      |       1 | 326,155,640,832 |      736 |   31,536,000 |       1 |    2 |         1 | LTO-1     | 2009-01-05 00:10:10 |
|       3 | KHA002L1   | Full      |       1 | 257,168,572,416 |      491 |   31,536,000 |       1 |    3 |         1 | LTO-1     | 2009-02-06 23:12:42 |
|       4 | KHA003L1   | Full      |       1 | 293,170,913,280 |      531 |   31,536,000 |       1 |    4 |         1 | LTO-1     | 2009-03-15 23:15:33 |
|       5 | KHA004L1   | Error     |       1 | 136,982,974,464 |      256 |   31,536,000 |       1 |    5 |         1 | LTO-1     | 2009-04-02 23:14:37 |
|       6 | KHA005L1   | Error     |       1 |  93,291,061,248 |      138 |   31,536,000 |       1 |    6 |         1 | LTO-1     | 2009-04-10 23:43:20 |
|       7 | KHA006L1   | Full      |       1 | 271,544,232,960 |      474 |   31,536,000 |       1 |    7 |         1 | LTO-1     | 2009-05-17 23:49:44 |
|       8 | KHA007L1   | Full      |       1 | 278,454,435,840 |      472 |   31,536,000 |       1 |    8 |         1 | LTO-1     | 2009-06-22 23:13:12 |
|       9 | KHA008L1   | Full      |       1 | 249,486,999,552 |      402 |   31,536,000 |       1 |    9 |         1 | LTO-1     | 2009-07-16 23:21:56 |
|      10 | KHA009L1   | Full      |       1 |      42,706,944 |        0 |   31,536,000 |       1 |   10 |         1 | LTO-1     | 2009-07-16 23:34:23 |
|      11 | KHA010L1   | Full      |       1 |      59,157,504 |        0 |   31,536,000 |       1 |   11 |         1 | LTO-1     | 2009-07-16 23:39:32 |
|      12 | KHA011L1   | Full      |       1 |      66,963,456 |        0 |   31,536,000 |       1 |   12 |         1 | LTO-1     | 2009-07-16 23:44:47 |
|      13 | KHA012L1   | Full      |       1 |      61,608,960 |        0 |   31,536,000 |       1 |   13 |         1 | LTO-1     | 2009-07-16 23:49:32 |
|      14 | KHA013L1   | Full      |       1 |      58,447,872 |        0 |   31,536,000 |       1 |   14 |         1 | LTO-1     | 2009-07-16 23:54:16 |
|      15 | KHA014L1   | Full      |       1 |      57,028,608 |        0 |   31,536,000 |       1 |   15 |         1 | LTO-1     | 2009-07-16 23:59:38 |
|      16 | KHA015L1   | Full      |       1 |      56,383,488 |        0 |   31,536,000 |       1 |   16 |         1 | LTO-1     | 2009-07-17 00:04:49 |
|      17 | KHA016L1   | Full      |       1 |      58,705,920 |        0 |   31,536,000 |       1 |   17 |         1 | LTO-1     | 2009-07-17 00:10:00 |
|      18 | KHA017L1   | Full      |       1 |      62,447,616 |        0 |   31,536,000 |       1 |   18 |         1 | LTO-1     | 2009-07-17 00:14:29 |
|      19 | KHA018L1   | Full      |       1 |      64,963,584 |        0 |   31,536,000 |       1 |   19 |         1 | LTO-1     | 2009-07-17 00:19:49 |
|      20 | KHA019L1   | Full      |       1 |      65,544,192 |        0 |   31,536,000 |       1 |   20 |         1 | LTO-1     | 2009-07-17 00:25:11 |
|      21 | CLNS00L1   | Cleaning  |       1 |               0 |        0 |   31,536,000 |       1 |    0 |         0 |           | 0000-00-00 00:00:00 |
|      22 | CLNS01L1   | Cleaning  |       1 |               0 |        0 |   31,536,000 |       1 |    0 |         0 |           | 0000-00-00 00:00:00 |
Pool: Scratch
No results to list.
*version
backup.keeranhosting.com-dir Version: 3.0.2 (18 July 2009) i386-portbld-freebsd7.1 freebsd 7.1-RELEASE-p6
Thanks for your help! Ed Aronyk
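One way to hand the barely-written tapes (KHA009L1 through KHA019L1, each marked Full at well under 100 MB) back to Bacula is to reset their status in bconsole. A sketch that generates the commands rather than running them, so you can review first; whether Append is safe depends on why Bacula marked them Full, and a `purge` plus relabel is the heavier alternative:

```shell
# Print (do not run) the bconsole commands for the suspect volumes;
# review, then pipe the output to bconsole.
cmds="$(for n in $(seq 9 19); do
  printf 'update volume=KHA%03dL1 volstatus=Append\n' "$n"
done)"
printf '%s\n' "$cmds"
```

It would also be worth finding out *why* they were marked Full: drive write errors (which Bacula treats as end-of-tape) would point at the drive or media rather than the catalog.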
[Bacula-users] Unable to restore off tape after filled
| KHA004L1 | Append | 1 | 64,512 | 0 | 31,536,000 | 1 | 5 | 0 | LTO-1 | |
###
## END List Volumes
###

###
## BEGIN Snippets of bacula-sd.conf
###
Autochanger {
  Name = ADICScalar100
  Device = Drive0
  Device = Drive1
  Changer Command = /usr/local/etc/rc-chio-changer %c %o %S %a %d
  Changer Device = /dev/ch0
}
Device {
  Name = Drive0
  Drive Index = 0
  Media Type = LTO-1
  Archive Device = /dev/nsa0
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  # FreeBSD specific settings
  Hardware End of Medium = no
  BSF at EOM = yes
  Backward Space Record = no
  Backward Space File = no
  Fast Forward Space File = no
  Two EOF = yes
}
Device {
  Name = Drive1
  Drive Index = 1
  Media Type = LTO-1
  Archive Device = /dev/nsa1
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  # FreeBSD specific settings
  Hardware End of Medium = no
  BSF at EOM = yes
  Backward Space Record = no
  Backward Space File = no
  Fast Forward Space File = no
  Two EOF = yes
}
###
## END Snippets of bacula-sd.conf
###
I would appreciate any help anyone could offer. It's very unnerving to know that I am unable to restore these files if I need to. Happy holidays to all, Edward Aronyk