Re: [Bacula-users] SD on 2 IP addresses
On Wed, Jul 14, 2010 at 9:52 PM, James Harper <james.har...@bendigoit.com.au> wrote:

>> My Bacula server is multi-homed. I need port 9102 to answer on all of the IP addresses on the server in order to service all of the subnets attached to the machine. I looked at taking care of this at the network routing level, but it just would not be practical. Any help would be greatly appreciated.
>
> I just set the SD to listen on 0.0.0.0 in this case. Is that not working for you?

I just comment out the SDAddress line and it listens on all addresses. I've also done the 0.0.0.0 approach in the past, but it is not needed.

Robert LeBlanc

--
This SF.net email is sponsored by Sprint
What will you do first with EVO, the first 4G phone?
Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
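As a sketch of the two approaches described above (directive names per the Bacula SD configuration; the resource name and address values are illustrative, not from the thread):

```conf
# bacula-sd.conf -- Storage resource (illustrative values)
Storage {
  Name = example-sd
  SDPort = 9103
  # Approach 1: comment out SDAddress entirely; the SD then listens
  # on all interfaces by default.
  # SDAddress = 192.168.1.10
  # Approach 2: bind explicitly to the wildcard address.
  SDAddress = 0.0.0.0
}
```

Either way, every subnet attached to the multi-homed host can reach the daemon without per-interface routing tricks.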
[Bacula-users] New Features?
We are running 5.0.2, and my VirtualFulls are failing, saying that the job can't read from the destination pool. At http://www.bacula.org/5.1.x-manuals/en/main/main/New_Features_in_5_0_0.html (the 5.1.x manual link from the website), it says that this feature should be included in the development version. However, it also says that tab completion should be introduced in this version, yet I can do tab completion in 5.0.2, so I'm confused about what should be in which version. If a VirtualFull should be able to read from and write to the same pool, then I think there may be a bug. If it is indeed intended for 5.1.x and not 5.0.x, then I'll wait and not be concerned about it.

Thanks,

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
Re: [Bacula-users] New Features?
This is from Debian's repository, and it seems it was compiled with readline, because tab completion is working. The 5.0.x manual does not say the feature is there, but the 5.1.x manual does, hence the confusion.

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University

On Sat, Jul 10, 2010 at 6:09 PM, francisco javier funes nieto <esen...@gmail.com> wrote:

> Did you compile Bacula with readline support? Kern said in another mail that you need this for tab-completion support in bconsole.
>
> Cheers,
> J.
>
> 2010/7/11 Robert LeBlanc <rob...@leblancnet.us>:
>
>> We are running 5.0.2 and my VirtualFulls are failing saying that it can't read from the destination pool. At http://www.bacula.org/5.1.x-manuals/en/main/main/New_Features_in_5_0_0.html (the 5.1.x manual link from the website), it says that this should be included in the development version. However, it also says that tab completion should be introduced in this version, but I can do tab completion in 5.0.2. I'm confused as to what should be in which version. If when doing VirtualFull I should be able to read and write to the same pool, then I think there may be a bug. If it is indeed intended for 5.1.x and not 5.0.x, then I'll wait and not be concerned about it.
>>
>> Thanks,
>> Robert LeBlanc
>> Life Sciences Undergraduate Education Computer Support
>> Brigham Young University
>
> --
> Francisco Javier Funes Nieto [esen...@gmail.com]
> CANONIGOS - Servicios Informáticos para PYMES.
> Cl. Cruz 2, 1º Oficina 7
> Tlf: 958.536759 / 661134556
> Fax: 958.521354
> GRANADA - 18002
[Bacula-users] Accurate backups, D2D2T, concurrent jobs ... Oh my!
Ok, so I'm trying to figure out how to accomplish this, and I think I have a kink in my think! I've switched my backups from GFS to running Accurate jobs. I've created an Accurate pool where my Incrementals now go, and it has a Next Pool directive pointing at my Data Domain Monthly pool (see the config below). I have about 60 clients, and I don't want to configure separate jobs for the VirtualFulls: one, because I'm lazy, and two, it's not very manageable.

Using the Schedule option, I can get this working the way I want; however, concurrent jobs get in the way. Basically, 8 VirtualFull jobs try to start and they all fight for resources, so that none of them complete. If the Schedule stanza could accept a Maximum Concurrent Jobs override that would be fine, but the directive only fits in the Job resource. Setting concurrent jobs to 1 would make the VirtualFulls work, but standard backups would go from 1-2 hours to 5-6 hours, which is not acceptable.

Schedule {
  Name = AccurateJobs
  Run = Level=Incremental Pool=DD-Accurate at 20:04
  Run = Level=VirtualFull Pool=DD-Accurate 2nd fri at 16:00
}

Storage {
  Name = DD-Accurate
  Address = my.sd.com
  Password = password
  Media Type = DD-Accurate
  Device = DD-Accurate
  Maximum Concurrent Jobs = 8
}

Storage {
  Name = DD-Monthly
  Address = my.sd.com
  Password = password
  Media Type = DD-Month
  Device = DD-Monthly
  Maximum Concurrent Jobs = 8
}

Pool {
  Name = DD-Accurate
  Pool Type = Backup
  LabelFormat = Accurate-
  Recycle = yes
  AutoPrune = yes
  Storage = DD-Accurate
  Volume Retention = 4 month
  Maximum Volume Bytes = 100G
  Next Pool = DD-Monthly
  RecyclePool = DD-Accurate
  Action On Purge = Truncate
}

Pool {
  Name = DD-Monthly
  Pool Type = Backup
  LabelFormat = Monthly-
  Recycle = yes
  AutoPrune = yes
  Storage = DD-Monthly
  Volume Retention = 6 month
  Maximum Volume Bytes = 200G
  Next Pool = Monthly
  Migration Time = 5 month
  RecyclePool = DD-Monthly
  Action On Purge = Truncate
}

Thanks,
Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
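One possible workaround, not suggested in the thread but a sketch based on the config quoted above: the VirtualFulls write to the Next Pool (DD-Monthly), which has its own Storage resource, so throttling that one resource alone would queue the VirtualFulls one at a time while the nightly Incrementals to DD-Accurate keep their concurrency of 8. A hedged sketch (assuming the job scheduler queues, rather than fails, jobs that exceed the storage's limit):

```conf
# Hypothetical tweak: serialize only the VirtualFull destination.
Storage {
  Name = DD-Monthly
  Address = my.sd.com
  Password = password
  Media Type = DD-Month
  Device = DD-Monthly
  Maximum Concurrent Jobs = 1   # was 8; VirtualFulls now run one at a time
}
```

This keeps a single Job definition per client and needs no per-client VirtualFull jobs, at the cost of the eight VirtualFulls running back to back instead of in parallel.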
Re: [Bacula-users] [Bacula-devel] Storage Daemon crash backtrace
On Fri, Jul 2, 2010 at 9:17 AM, Kern Sibbald <k...@sibbald.com> wrote:

> Hello Robert,
>
> Eric and I finished Bacula Enterprise version 4.0.0 today, a bit faster than I expected, so I am not running all the final tests, which gave me some time to look at the problem.
>
> I downloaded the zlib source code, and I don't immediately see anything in the file that would cause problems -- of course, it is quite complicated code. I did look through the Bacula TLS code, and I noticed that the author did not properly set error conditions in Bacula when it finds an error on the comm line. This could cause Bacula to continue running, and might cause subsequent calls to OpenSSL subroutines when there is no valid data, and thus the seg fault. I still must test the changes I made.
>
> It is rather a long shot, but if you see that every time the SD crashes there is a disrupted comm line, then that could well be the problem -- of course, if one has a good solid network, there should never be any broken pipe errors, which is possibly why we cannot see the problem. Having said this, I cannot rule out a problem in OpenSSL at this point.
>
> Best regards,
> Kern

zlib is the compression library, right? I haven't told Bacula to use compression; is it on as a requirement of TLS? Since the transfer is all on a LAN, I'm not hurting for compression. Is it possible to turn it off?

Thanks,
Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
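For context on the question above: OpenSSL 0.9.8 would negotiate zlib compression at the TLS layer whenever both ends supported it, independently of Bacula's own software compression, which is why deflate() shows up in the backtraces. The thread doesn't show a Bacula directive for turning this off, but as an illustration of how an application can disable TLS-level compression at the library level, here is a minimal Python sketch (the `ssl` module exposes the same option bit a C program would set with `SSL_CTX_set_options`):

```python
import ssl

# Build a TLS context and clear compression support before any handshake.
# OP_NO_COMPRESSION is the same option bit an OpenSSL C application sets.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.options |= ssl.OP_NO_COMPRESSION

# Any socket wrapped with this context will refuse to negotiate zlib
# compression, keeping deflate() out of the TLS write path entirely.
print(bool(ctx.options & ssl.OP_NO_COMPRESSION))  # → True
```

On a LAN, where bandwidth is cheap and CPU on the SD is the bottleneck, skipping TLS compression is usually the right trade anyway.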
Re: [Bacula-users] [Bacula-devel] Storage Daemon crash backtrace
On Fri, Jul 2, 2010 at 2:59 AM, Kern Sibbald <k...@sibbald.com> wrote:

>> The question that I have is: am I missing some debug symbols in other packages, like openssl, that would help? I'm not a programmer, so backtraces are pretty much a wall of text to me. I want to give helpful info so that others may not run into the same problem in the future. If this is not helpful, I'm not sure what else to do, so I'll give up and just create a cron job that will restart bacula-sd if it crashes, or modify btraceback to restart bacula-sd.
>
> The dump does not clearly show what is going on. I suspect this is because you are not following the advice in the manual (e.g. you should not use "set loggin...") as it seems to only partially show what is going on. However, if I am interpreting what you show above and what is in the log file as being all the same output, it looks like the problems are coming because a cancel command has been sent to the SD, either by the operator or by a directive.
>
> In Bacula 5.0.2, cancelling jobs is known to occasionally crash the Director and the SD. Perhaps it happens more frequently when TLS is running. My best guess is that the libz routines have a signal bug, or perhaps there is a problem in the Bacula code -- I am not sure. I do know that we have a number of fixes for the cancel command in Bacula 5.0.3, which will probably be released near the end of the month. Most if not all of the fixes are in the SourceForge Bacula repo under Branch-5.0. In the meantime, you should try to find out why Bacula is attempting to cancel the job and make sure that does not happen. Perhaps it is a max runtime or something that is set too short, or a rogue operator :-)
>
> I believe that your bug is a duplicate of bug #1568, which is a bug in zlib that causes it to crash when a signal is received. You will notice that the tracebacks look very similar to yours. You might want to talk to Frank Sweetser about how he is resolving the problem. He is also at a university ...

I think this is helpful for me. Debian does run Bacula under bacula.tape; I'll change it to run under root.root and see if that helps with the automated backtrace.

I do think there is some sort of error in the SSL, and the problem may be compounded by the cancel bug. Here's why: I was able to test this on a machine that was not able to get a good backup. When running a TLS job, the connection is established and the FD starts transferring data to the SD. I watch as the spool size increments, and when it stops, the Send-Q in netstat on the client for the connection to the SD starts incrementing. Thirty minutes later I get "Connection timed out", and then the job is canceled (not put in an error state). (Disabling TLS allowed the client to complete the backup on the first try.) When I get a "Broken pipe", Bacula puts the job in an error state, but "connection timed out" always results in a cancel. I think this may be triggering the crash. I'll pull head and see if it runs into the same problem.

I'm afraid that you might be right about the SSL bug, and it is definitely out of your hands. I'll see what I can do about submitting a bug to OpenSSL.

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
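For reference, the keepalive mentioned in this thread is the Heartbeat Interval directive, which the Bacula manual allows in the FD, SD, and Director resources. A hedged sketch with illustrative resource names and a guessed 60-second value:

```conf
# bacula-fd.conf -- keep the connections alive during long idle
# stretches (e.g. while the SD despools), so stateful firewalls and
# NAT devices do not drop the TCP session mid-job.
FileDaemon {
  Name = client-fd
  Heartbeat Interval = 60
}

# bacula-sd.conf -- same idea on the storage side.
Storage {
  Name = example-sd
  Heartbeat Interval = 60
}
```

As noted above, this reduces but does not eliminate the timeouts; it is a mitigation, not a fix for the underlying TLS stall.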
Re: [Bacula-users] [Bacula-devel] Storage Daemon crash backtrace
On Fri, Jul 2, 2010 at 9:44 AM, Frank Sweetser <f...@wpi.edu> wrote:

> On 07/02/2010 11:17 AM, Kern Sibbald wrote:
>> Having said this, I cannot rule out a problem in openssl at this point.
>
> I forgot to mention one other *very* important data point. After the ticket I opened came back as a problem outside of Bacula, I did some more testing on the system in question. I found that I was able to reproduce similar problems using scp to do encrypted transfers of multi-gig files. I didn't get segfaults, but I did get socket errors. This pretty strongly supports the conclusion that the root cause of the problem isn't in Bacula itself.

Did you happen to open a bug against OpenSSL for this? I would like to track it if you did. I wonder if this is a problem that I'm seeing in Apache too.

Thanks,
Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
Re: [Bacula-users] [Bacula-devel] Storage Daemon crash backtrace
On Wed, Jun 30, 2010 at 8:35 AM, Robert LeBlanc <rob...@leblancnet.us> wrote:

> On Wed, Jun 30, 2010 at 1:06 AM, Kern Sibbald <k...@sibbald.com> wrote:
>> This seems to be a support issue. The dump that you posted shows no indication of a crash, which means that your understanding of a crash and mine are different. This is possibly a deadlock, but I won't spend any more time on it until the problem is a bit clearer.
>>
>> Best regards,
>> Kern
>>
>> By the way, if this is a production system, you should be running on Lenny, which is known to be stable, and we support it.
>
> I'm not really sure what you need for a good backtrace, since I'm not a programmer. I always thought that a segfault led to a program crashing. I just don't know enough about gdb to know when there is enough information. All I know is that when it crashes while running as a daemon, I get a traceback that is useless in my e-mail (it says "no ptrace"). When I run it under gdb and get the segfault, typing 'cont' says that bacula-sd has exited, and when I run it again, it doesn't complain that a process is already running. In both cases, there is no process called bacula-sd running on the system.
>
> I updated/upgraded about 10 clients yesterday to using TLS, and I did not get a crash from the SD. I will keep running it under the debugger in case it crashes again, although I'm not sure how useful that will be if I cannot operate gdb correctly to get you anything helpful. I have a feeling it's some perfect storm of configuration that may be causing the issue. I've been running Bacula for 6 years and have never had a problem like this. I'm just trying to help the project be as robust as possible, because we like it and it has treated us well in the past.
>
> As a side note, I get a lot more connection timeouts and broken pipes when using TLS; adding a heartbeat interval helps, but it is not a silver bullet. Most of the backups are succeeding, with only a few here and there having problems. Not using TLS and no heartbeat interval, the backups always succeed. I'll keep working through things and see if I can come up with anything. Thank you for the time and the great project.
>
> Robert LeBlanc
> Life Sciences Undergraduate Education Computer Support
> Brigham Young University
>
> P.S. We are working on a support contract and will be talking with you in about 24 hours with many others from our group who are also interested in using Bacula.

I know you are probably getting tired of hearing from me, but I had another crash today. I'm attaching the backtrace that I got this time. I typed 'cont' after the backtrace, and all it said was that all the threads exited (this is in the log this time). Here is what was before the backtrace:

[Thread 0x7fffebfff710 (LWP 25670) exited]
[New Thread 0x7fffebfff710 (LWP 25671)]
[Thread 0x7fffebfff710 (LWP 25671) exited]
[Thread 0x70e88710 (LWP 24428) exited]
[Thread 0x71e8a710 (LWP 25530) exited]
[Thread 0x72e8c710 (LWP 25663) exited]
[New Thread 0x72e8c710 (LWP 25785)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x72e8c710 (LWP 25785)]
0x777c5b1c in ?? () from /usr/lib/libz.so.1
(gdb) set loggin file /home/rleblanc/bacula-sd-seg.log
(gdb) set logging on
Copying output to /home/rleblanc/bacula-sd-seg.log.
(gdb) thread apply all bt

Thread 219 (Thread 0x72e8c710 (LWP 25785)):
#0  0x777c5b1c in ?? () from /usr/lib/libz.so.1
#1  0x777c6ef7 in ?? () from /usr/lib/libz.so.1
#2  0x777c40eb in ?? () from /usr/lib/libz.so.1
#3  0x777c2251 in deflate () from /usr/lib/libz.so.1
#4  0x75eea6f2 in ?? () from /usr/lib/libcrypto.so.0.9.8

The question that I have is: am I missing some debug symbols in other packages, like openssl, that would help? I'm not a programmer, so backtraces are pretty much a wall of text to me. I want to give helpful info so that others may not run into the same problem in the future. If this is not helpful, I'm not sure what else to do, so I'll give up and just create a cron job that will restart bacula-sd if it crashes, or modify btraceback to restart bacula-sd.

Thanks,
Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University

Thread 219 (Thread 0x72e8c710 (LWP 25785)):
#0  0x777c5b1c in ?? () from /usr/lib/libz.so.1
#1  0x777c6ef7 in ?? () from /usr/lib/libz.so.1
#2  0x777c40eb in ?? () from /usr/lib/libz.so.1
#3  0x777c2251 in deflate () from /usr/lib/libz.so.1
#4  0x75eea6f2 in ?? () from /usr/lib/libcrypto.so.0.9.8
#5  0x75ee9ab0 in COMP_compress_block () from /usr/lib/libcrypto.so.0.9.8
#6  0x761897be in ssl3_do_compress () from /usr/lib/libssl.so.0.9.8
#7  0x761898fe in ?? () from /usr/lib/libssl.so.0.9.8
#8  0x76189e16 in ssl3_write_bytes () from /usr/lib/libssl.so.0.9.8
#9  0x7719308e in openssl_bsock_readwrite (bsock=0xf1b7a8, ptr=0x7dab3c , nbytes
Re: [Bacula-users] [Bacula-devel] Storage Daemon crash backtrace
On Wed, Jun 30, 2010 at 1:06 AM, Kern Sibbald <k...@sibbald.com> wrote:

> This seems to be a support issue. The dump that you posted shows no indication of a crash, which means that your understanding of a crash and mine are different. This is possibly a deadlock, but I won't spend any more time on it until the problem is a bit clearer.
>
> Best regards,
> Kern
>
> By the way, if this is a production system, you should be running on Lenny, which is known to be stable, and we support it.

I'm not really sure what you need for a good backtrace, since I'm not a programmer. I always thought that a segfault led to a program crashing. I just don't know enough about gdb to know when there is enough information. All I know is that when it crashes while running as a daemon, I get a traceback that is useless in my e-mail (it says "no ptrace"). When I run it under gdb and get the segfault, typing 'cont' says that bacula-sd has exited, and when I run it again, it doesn't complain that a process is already running. In both cases, there is no process called bacula-sd running on the system.

I updated/upgraded about 10 clients yesterday to using TLS, and I did not get a crash from the SD. I will keep running it under the debugger in case it crashes again, although I'm not sure how useful that will be if I cannot operate gdb correctly to get you anything helpful. I have a feeling it's some perfect storm of configuration that may be causing the issue. I've been running Bacula for 6 years and have never had a problem like this. I'm just trying to help the project be as robust as possible, because we like it and it has treated us well in the past.

As a side note, I get a lot more connection timeouts and broken pipes when using TLS; adding a heartbeat interval helps, but it is not a silver bullet. Most of the backups are succeeding, with only a few here and there having problems. Not using TLS and no heartbeat interval, the backups always succeed. I'll keep working through things and see if I can come up with anything. Thank you for the time and the great project.

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University

P.S. We are working on a support contract and will be talking with you in about 24 hours with many others from our group who are also interested in using Bacula.
Re: [Bacula-users] [Bacula-devel] Storage Daemon crash backtrace
On Thu, Jun 24, 2010 at 2:23 AM, Kern Sibbald <k...@sibbald.com> wrote:

> Hello,
>
> Either the "handle SIGUSR2 nostop noprint pass" confused gdb (note it is not necessary when you use -s), or you have something broken on your build/OS/machine. gdb with -s should ignore SIGPIPE, and Bacula always ignores SIGPIPE, so the backtrace below is useless and doesn't correspond to any real problem.
>
> When you get a valid dump, please open a bug report; in the meantime, use the bacula-users list and the manual to help you get a valid dump.
>
> Best regards,
> Kern

Ok, I finally got a segfault and a backtrace, and I've opened bug #1599. Please let me know if there is more information you need. I'd really like to get a resolution to this.

Thanks,
Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
[Bacula-users] TLS and broken pipes?!?
I'm moving some servers across an untrusted network, and am trying to enable TLS and also Accurate backups so I don't have to do Fulls over the network. There is a specific client that I have not been able to back up (over a week now, at least 20 attempts): it always contacts the SD, transfers between 0 and 600 MB, and then I get a broken pipe error. I disabled TLS and a backup worked perfectly on the first try. I re-enabled TLS and it's a no-go.

The weird thing is that on one try I set TLS off on the client and TLS required on the SD, yet it still transferred 330 MB before stopping. According to the manual, this should not happen at all: TLS off should prevent any TLS activity, and since the SD requires it, I should have gotten an error about TLS not being available.

Can anyone offer some suggestions on what I should try? This is pretty frustrating.

Thanks,
Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University

--
ThinkGeek and WIRED's GeekDad team up for the Ultimate GeekDad Father's Day Giveaway.
ONE MASSIVE PRIZE to the lucky parental unit. See the prize list and enter to win:
http://p.sf.net/sfu/thinkgeek-promo
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
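For anyone comparing configs, the directive pair being tested above looks roughly like this (directive names per the Bacula TLS documentation; resource names and certificate paths are illustrative). With TLS Enable = no on the client and TLS Require = yes on the SD, the manual's description suggests the connection should be refused outright rather than transfer data:

```conf
# Client side (bacula-fd.conf) -- TLS switched off for the test:
FileDaemon {
  Name = client-fd
  TLS Enable = no
}

# Storage side (bacula-sd.conf) -- TLS demanded:
Storage {
  Name = example-sd
  TLS Enable = yes
  TLS Require = yes
  TLS Certificate = /etc/bacula/certs/sd.pem
  TLS Key = /etc/bacula/certs/sd.key
  TLS CA Certificate File = /etc/bacula/certs/ca.pem
}
```

That 330 MB moved under this mismatched pair is what makes the observed behavior surprising relative to the documented semantics.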
[Bacula-users] Storage Daemon crash backtrace
My SD has been crashing every night, and I finally got a backtrace. I don't know what all of this means, but I could sure use some help figuring out why it keeps crashing and how to stop it so that I can get some backups done.

Thanks,
Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University

Starting program: /usr/sbin/bacula-sd -s -f -c /etc/bacula/bacula-sd.conf
[Thread debugging using libthread_db enabled]
[New Thread 0x7509c710 (LWP 27308)]
[New Thread 0x7489b710 (LWP 27309)]
[Thread 0x7509c710 (LWP 27308) exited]
[New Thread 0x7509c710 (LWP 27412)]
[New Thread 0x7409a710 (LWP 27413)]
[New Thread 0x7368d710 (LWP 27414)]
[New Thread 0x72e8c710 (LWP 27415)]
[Thread 0x72e8c710 (LWP 27415) exited]
[Thread 0x7368d710 (LWP 27414) exited]
[New Thread 0x7368d710 (LWP 27416)]
[New Thread 0x72e8c710 (LWP 27417)]
[New Thread 0x7268b710 (LWP 27418)]
[New Thread 0x71e8a710 (LWP 27423)]
[New Thread 0x71689710 (LWP 27424)]
[New Thread 0x70e88710 (LWP 27425)]
[New Thread 0x7fffebfff710 (LWP 27426)]
[New Thread 0x7fffeb7fe710 (LWP 27427)]
[New Thread 0x7fffeaffd710 (LWP 27428)]
[New Thread 0x7fffea7fc710 (LWP 27429)]
[New Thread 0x7fffe9ffb710 (LWP 27430)]
[New Thread 0x7fffe97fa710 (LWP 27431)]
[Thread 0x71689710 (LWP 27424) exited]
[Thread 0x70e88710 (LWP 27425) exited]
[Thread 0x7fffeaffd710 (LWP 27428) exited]
[Thread 0x7409a710 (LWP 27413) exited]
[Thread 0x7fffea7fc710 (LWP 27429) exited]
[Thread 0x7fffe97fa710 (LWP 27431) exited]
[Thread 0x7fffe9ffb710 (LWP 27430) exited]
[New Thread 0x7fffe97fa710 (LWP 27432)]
[New Thread 0x7fffe9ffb710 (LWP 27433)]
[Thread 0x7fffe9ffb710 (LWP 27433) exited]
[New Thread 0x7fffe9ffb710 (LWP 27434)]
[New Thread 0x7fffea7fc710 (LWP 27435)]
[Thread 0x7fffeb7fe710 (LWP 27427) exited]
[New Thread 0x7fffeb7fe710 (LWP 27436)]
[New Thread 0x7409a710 (LWP 27437)]
[Thread 0x7fffea7fc710 (LWP 27435) exited]
[Thread 0x7fffe9ffb710 (LWP 27434) exited]
[Thread 0x7409a710 (LWP 27437) exited]
[New Thread 0x7409a710 (LWP 27438)]
[New Thread 0x7fffe9ffb710 (LWP 27439)]
[Thread 0x7fffeb7fe710 (LWP 27436) exited]
[Thread 0x7fffe9ffb710 (LWP 27439) exited]
[New Thread 0x7fffe9ffb710 (LWP 27440)]
[New Thread 0x7fffeb7fe710 (LWP 27441)]
[Thread 0x7fffeb7fe710 (LWP 27441) exited]
[Thread 0x7fffe97fa710 (LWP 27432) exited]
[New Thread 0x7fffe97fa710 (LWP 27442)]
[New Thread 0x7fffeb7fe710 (LWP 27443)]
[Thread 0x7409a710 (LWP 27438) exited]
[Thread 0x7fffeb7fe710 (LWP 27443) exited]
[Thread 0x7fffe9ffb710 (LWP 27440) exited]
[New Thread 0x7fffe9ffb710 (LWP 27444)]
[New Thread 0x7fffeb7fe710 (LWP 27445)]
[Thread 0x7fffeb7fe710 (LWP 27445) exited]
[New Thread 0x7fffeb7fe710 (LWP 27446)]
[New Thread 0x7409a710 (LWP 27447)]
[Thread 0x7fffe97fa710 (LWP 27442) exited]
[Thread 0x7409a710 (LWP 27447) exited]
[New Thread 0x7409a710 (LWP 27448)]
[Thread 0x7409a710 (LWP 27448) exited]
[New Thread 0x7409a710 (LWP 27449)]
[New Thread 0x7fffe97fa710 (LWP 27450)]
[Thread 0x7fffe97fa710 (LWP 27450) exited]

Program received signal SIGUSR2, User defined signal 2.
[Switching to Thread 0x72e8c710 (LWP 27417)]
0x767d30bd in read () from /lib/libpthread.so.0
(gdb) thread apply all bt

Thread 37 (Thread 0x7409a710 (LWP 27449)):
#0  0x767d30bd in read () from /lib/libpthread.so.0
#1  0x77172d56 in read_nbytes (bsock=0xe24ae8, ptr=0x740999fc , nbytes=4) at bnet.c:80
#2  0x77175bd7 in BSOCK::recv (this=0xe24ae8) at bsock.c:451
#3  0x00424120 in do_fd_commands (jcr=0x8d0ac8) at fd_cmds.c:149
#4  0x00424b7a in run_job (jcr=0x8d0ac8) at fd_cmds.c:124
#5  0x0042541b in run_cmd (jcr=0x8d0ac8) at job.c:225
#6  0x0042169f in handle_connection_request (arg=<value optimized out>) at dircmd.c:233
#7  0x7719a619 in workq_server (arg=<value optimized out>) at workq.c:346
#8  0x767cb8ba in start_thread () from /lib/libpthread.so.0
#9  0x7538401d in clone () from /lib/libc.so.6
#10 0x in ?? ()

Thread 34 (Thread 0x7fffeb7fe710 (LWP 27446)):
#0  0x767d30bd in read () from /lib/libpthread.so.0
#1  0x77172d56 in read_nbytes (bsock=0xbe3788, ptr=0x7fffeb7fd9fc , nbytes=4) at bnet.c:80
#2  0x77175bd7 in BSOCK::recv (this=0xbe3788) at bsock.c:451
#3  0x00424120 in do_fd_commands (jcr=0x6a7b78) at fd_cmds.c:149
#4  0x00424b7a in run_job (jcr=0x6a7b78) at fd_cmds.c:124
#5  0x0042541b in run_cmd (jcr=0x6a7b78) at job.c:225
#6  0x0042169f in handle_connection_request (arg=<value optimized out>) at dircmd.c:233
#7  0x7719a619 in workq_server (arg=<value optimized out>) at workq.c:346
#8  0x767cb8ba in start_thread () from /lib/libpthread.so.0
#9  0x7538401d in clone () from /lib/libc.so.6
#10 0x in ?? ()

Thread 32 (Thread 0x7fffe9ffb710 (LWP 27444)):
#0  0x767d30bd in read () from /lib/libpthread.so
Re: [Bacula-users] [Bacula-devel] Storage Daemon crash backtrace
On Wed, Jun 23, 2010 at 1:17 PM, Kern Sibbald <k...@sibbald.com> wrote:

> Yes, as Martin says, SIGUSR2 is something that should be ignored. We use it internally to signal between threads, and when you are running the debugger on Bacula, you need to tell the debugger to ignore it -- as Martin indicates -- or, most often when I am manually debugging, I start it with "run -s -f". For more information, see the Kaboom chapter of the Problems manual.
>
> Of course, if Bacula sent you the traceback (or put it in your working directory), you should open a bug report and post it there, and we will look at it.

I had to run gdb manually (the e-mail report kept coming back empty) and followed the notes in the manual. I did 'run -s -f ...' as the manual said. I'll ignore SIGUSR2 and get it to crash again.

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
Re: [Bacula-users] [Bacula-devel] Storage Daemon crash backtrace
On Wed, Jun 23, 2010 at 1:59 PM, Kern Sibbald <k...@sibbald.com> wrote:

> On Wednesday 23 June 2010 21:24:20 Robert LeBlanc wrote:
>> On Wed, Jun 23, 2010 at 1:17 PM, Kern Sibbald <k...@sibbald.com> wrote:
>>> Yes, as Martin says, SIGUSR2 is something that should be ignored. We use it internally to signal between threads, and when you are running the debugger on Bacula, you need to tell the debugger to ignore it -- as Martin indicates -- or, most often when I am manually debugging, I start it with "run -s -f". For more information, see the Kaboom chapter of the Problems manual.
>>>
>>> Of course, if Bacula sent you the traceback (or put it in your working directory), you should open a bug report and post it there, and we will look at it.
>>
>> I had to run gdb manually (the e-mail report kept coming back empty) and followed the notes in the manual. I did 'run -s -f ...' as the manual said. I'll ignore SIGUSR2 and get it to crash again.
>
> Well, the -s should cause gdb to ignore the signal and just pass it to Bacula, which in turn ignores it. If you are running Bacula 5.0.2, in 99% of the cases you will find the traceback and the bactrace files in your working directory when Bacula is not run under the debugger and it crashes. If you are running a 3.0.x or older version, you will need a support contract if you want us to look at the problem ...
>
> Kern

Ok, ignoring the SIGUSR2, I got a SIGPIPE; here is the backtrace when gdb paused. We are running 5.0.2, and I've looked in the working directory for traceback files. There are some, however they only contain:

ptrace: No such process.
/var/lib/bacula/26433: No such file or directory.

I thought recompiling the Debian package with debug symbols would resolve this, but the above is from the last traceback file written. I hope this is helpful.

r...@lsbacsd0:/usr/sbin# gdb ./bacula-sd
GNU gdb (GDB) 7.0.1-debian
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/sbin/bacula-sd...done.
(gdb) handle SIGUSR2 nostop noprint pass
Signal        Stop      Print   Pass to program  Description
SIGUSR2       No        No      Yes              User defined signal 2
(gdb) run -s -f -c /etc/bacula/bacula-sd.conf
Starting program: /usr/sbin/bacula-sd -s -f -c /etc/bacula/bacula-sd.conf
[Thread debugging using libthread_db enabled]
[New Thread 0x7509c710 (LWP 28087)]
[New Thread 0x7489b710 (LWP 28088)]
[New Thread 0x7409a710 (LWP 28098)]
[New Thread 0x73899710 (LWP 28099)]
[Thread 0x7509c710 (LWP 28087) exited]
[New Thread 0x7509c710 (LWP 28187)]
[New Thread 0x72e8c710 (LWP 28188)]
[New Thread 0x7268b710 (LWP 28189)]
[New Thread 0x71e8a710 (LWP 28190)]
[Thread 0x73899710 (LWP 28099) exited]
[Thread 0x7409a710 (LWP 28098) exited]
[New Thread 0x7409a710 (LWP 28191)]
[New Thread 0x73899710 (LWP 28192)]
[New Thread 0x71689710 (LWP 28193)]
[New Thread 0x70e88710 (LWP 28194)]
[New Thread 0x7fffebfff710 (LWP 28195)]
[Thread 0x71689710 (LWP 28193) exited]
[Thread 0x7409a710 (LWP 28191) exited]
[Thread 0x73899710 (LWP 28192) exited]
[Thread 0x7fffebfff710 (LWP 28195) exited]
[New Thread 0x7fffebfff710 (LWP 28196)]
[New Thread 0x73899710 (LWP 28197)]
[New Thread 0x7409a710 (LWP 28198)]
[New Thread 0x71689710 (LWP 28199)]
[New Thread 0x7fffeb7fe710 (LWP 28200)]
[New Thread 0x7fffeaffd710 (LWP 28201)]
[Thread 0x7fffeb7fe710 (LWP 28200) exited]
[Thread 0x71689710 (LWP 28199) exited]
[Thread 0x7fffeaffd710 (LWP 28201) exited]
[New Thread 0x7fffeaffd710 (LWP 28203)]
[New Thread 0x71689710 (LWP 28204)]
[Thread 0x7268b710 (LWP 28189) exited]
[New Thread 0x7268b710 (LWP 28205)]
[Thread 0x71689710 (LWP 28204) exited]
[Thread 0x72e8c710 (LWP 28188) exited]
[New Thread 0x72e8c710 (LWP 28206)]
[Thread 0x72e8c710 (LWP 28206) exited]
[New Thread 0x72e8c710 (LWP 28208)]
[New Thread 0x71689710 (LWP 28209)]
[Thread 0x71689710 (LWP 28209) exited]
[Thread 0x7fffeaffd710 (LWP 28203) exited]
[New Thread 0x7fffeaffd710 (LWP 28210)]
[Thread 0x7fffeaffd710 (LWP 28210) exited]
[New Thread 0x7fffeaffd710 (LWP 28211)]
[Thread 0x70e88710 (LWP 28194) exited]
[New Thread 0x70e88710 (LWP 28212)]
[Thread 0x70e88710 (LWP 28212) exited]
[New Thread 0x70e88710 (LWP 28213)]
[New Thread 0x71689710 (LWP 28214)]
[Thread 0x71689710 (LWP 28214) exited]
[New Thread 0x71689710 (LWP 28217)]
[Thread 0x7fffebfff710 (LWP 28196) exited]
[New Thread 0x7fffebfff710 (LWP 28218)]
[New Thread 0x7fffeb7fe710 (LWP 28219)]
[Thread 0x7268b710 (LWP 28205) exited]
[Thread 0x7fffeb7fe710
Re: [Bacula-users] Bacula interrupted by signal 11: Segmentation violation
On Fri, Jun 18, 2010 at 12:15 PM, Martin Simmons mar...@lispworks.com wrote: On Fri, 18 Jun 2010 08:25:04 -0600, Robert LeBlanc said: Our Bacula 5.0.2 SD is crashing, but all I'm seeing is "Bacula interrupted by signal 11: Segmentation violation". What can I do to get more information about this? We are moving our clients to TLS, but it doesn't seem to correlate to the TLS clients. See What To Do When Bacula Crashes (Kaboom): http://www.bacula.org/5.0.x-manuals/en/problems/problems/What_Do_When_Bacula.html __Martin I've had two crashes over the weekend, and all I got both times was this (the PID was different): ptrace: No such process. /var/lib/bacula/4153: No such file or directory. I guess I'll have to run Bacula through gdb to find out what is going on. Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University -- ThinkGeek and WIRED's GeekDad team up for the Ultimate GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the lucky parental unit. See the prize list and enter to win: http://p.sf.net/sfu/thinkgeek-promo ___ Bacula-users mailing list Bacula-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bacula-users
Re: [Bacula-users] Bacula interrupted by signal 11: Segmentation violation
On Mon, Jun 21, 2010 at 8:48 AM, Martin Simmons mar...@lispworks.com wrote: I've had two crashes over the weekend, and all I got both times was (PID was different): ptrace: No such process. /var/lib/bacula/4153: No such file or directory. The second message is normal (it is a misfeature of gdb), but the "ptrace: No such process" is surprising. I guess I'll have to run Bacula through gdb to find out what is going on. Yes, that's probably the best approach. It allows gdb to take control as soon as the segmentation violation occurs.

r...@lsbacsd0:/usr/sbin# gdb ./bacula-sd
GNU gdb (GDB) 7.0.1-debian
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu". For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /usr/sbin/bacula-sd...(no debugging symbols found)...done.

Looks like the Debian maintainers strip out the debugging symbols and do not provide a debug package. Looks like I'll have to re-build the package to see what the error is. This is probably why I didn't get anything e-mailed to me. Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
[Bacula-users] Bacula interrupted by signal 11: Segmentation violation
Our Bacula 5.0.2 SD is crashing, but all I'm seeing is "Bacula interrupted by signal 11: Segmentation violation". What can I do to get more information about this? We are moving our clients to TLS, but it doesn't seem to correlate to the TLS clients. Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
[Bacula-users] Next Pool migration vs. VirtualFull
So, I'm getting around to getting my VirtualFull backups running. I'm already doing D2D2T, so the Next Pool directives for my three pools (Daily, Weekly, Monthly) each point to their respective tape pools. I've run a VirtualFull and it took the first pool it found (my disk Daily pool), and now it's writing my full to the Daily tape. How would I go about defining a different Next Pool for my VirtualFulls? Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
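For reference, the setup described above looks roughly like this in bacula-dir.conf (a hedged sketch; all resource names are hypothetical, not from a real config). In 5.0.x a pool carries a single Next Pool directive, and a VirtualFull consolidates into the read pool's Next Pool, which is why the job landed on the Daily tape pool:

```conf
# Sketch only -- names are hypothetical.
Pool {
  Name = Daily                # disk pool written by daily jobs
  Pool Type = Backup
  Storage = File
  Next Pool = DailyTape       # D2D2T migration target; in 5.0.x a
                              # VirtualFull of Daily also writes here
}
Pool {
  Name = DailyTape            # tape pool receiving migrated jobs
  Pool Type = Backup
  Storage = Tape
}
```

Since both migration and VirtualFull consult the same Next Pool, there is no obvious way in this sketch to point VirtualFulls elsewhere, which matches the question being asked.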
[Bacula-users] Windows Installer not creating service
I'm seeing this on a regular basis: the Windows installer will not create the bacula-fd service, and I have to create it manually. I've seen this on upgraded machines (I ran the uninstaller, which did not remove the service; I removed it by hand before installing the new version) and on new installs. Bacula is 5.0.2 and Windows is 2003 and 2003 R2. Has anyone seen this problem and found a solution? Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
Re: [Bacula-users] New FileStorage files in a different directory
On Thu, Jun 10, 2010 at 1:30 PM, John Drescher dresche...@gmail.com wrote: On Thu, Jun 10, 2010 at 2:48 PM, Martin Simmons mar...@lispworks.com wrote: On Thu, 10 Jun 2010 17:11:26 +0200, Bernd Schoeller said: Is there an easy way to add more volumes to the FileStorage in a different directory? Not within Bacula itself. It might work with symbolic links (create a volume on /store, use mv to move it to /store2, use ln -s to link from /store to /store2). A better way might be to convert /store and /store2 into an LVM partition so it can use both disks for one directory. Be careful with the LVM: if one drive dies (unless both are RAIDs), you lose all of your backups. I would instead use the Bacula vchanger. http://sourceforge.net/projects/vchanger/ With the vchanger I would make each storage a virtual magazine. However, the conversion would probably require migration, and I believe you may run into problems if the storages are different sizes. The documentation also says not to use vchanger for anything other than testing. Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
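Martin's mv-plus-symlink workaround can be sketched as follows. The paths are stand-ins (temporary directories here); in a real setup they would be the SD's configured Archive Device directory and the new disk:

```shell
# Sketch of the mv + ln -s workaround described above; /store and
# /store2 are simulated with temporary directories.
set -e
store=$(mktemp -d)      # stands in for /store (the configured Archive Device)
store2=$(mktemp -d)     # stands in for /store2 (the new disk)

echo "volume data" > "$store/Vol-0001"       # an existing file volume (stub)

mv "$store/Vol-0001" "$store2/Vol-0001"      # move the volume to the new disk
ln -s "$store2/Vol-0001" "$store/Vol-0001"   # leave a symlink behind

# The SD still opens the volume under its old path:
cat "$store/Vol-0001"
```

The storage daemon keeps addressing the volume by its original path, while the data physically lives on the second disk.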
[Bacula-users] Question on recycling migrated volumes
I've been looking through the manual and I can't find a clear answer. We are doing D2D2T, and we would like to recycle volumes that have had all of their jobs migrated first, even if that means the retention period of the volume is not up. Right now, I think the retention period has to expire before the volume is considered for recycling. The problem is that we may have a large influx of data at times and very little for a while. We have set our migration policy to three weeks and our retention to four weeks, but sometimes we have a lot of volumes that have already been migrated and are just sitting there taking up disk space that could be used for other backups (we have to keep a lot of disk space free to compensate for this flux), and we wind up with a lot of volumes of which only about half actually hold data within our three-week migration window. I'm looking at Action On Purge to free the disk space of unused volumes, but I don't think a volume is purged when all of its jobs are migrated. If someone has clarification or a good idea of how to accomplish our goals, it would be helpful. Thank you, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
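For reference, the Action On Purge mechanism mentioned above looks roughly like this in 5.0.x (a sketch; the pool name is hypothetical). Note that it only takes effect once a volume is actually purged, which, as noted, migration alone does not trigger:

```conf
# Sketch only -- pool name is hypothetical.
Pool {
  Name = DiskDaily
  Pool Type = Backup
  Volume Retention = 4 weeks
  Action On Purge = Truncate   # permit truncating the volume file on purge
}
```

The disk space is then reclaimed from bconsole with something like `purge volume action=truncate storage=File pool=DiskDaily` (command shape from memory of 5.0.x; check the console help before relying on it).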
Re: [Bacula-users] Bacula tape format vs. rsync on deduplicated file systems
On Fri, May 28, 2010 at 12:32 AM, Eric Bollengier eric.bolleng...@baculasystems.com wrote: Hello Robert, What would be the result if you did an Incremental backup instead of a Full backup? Imagine that you have 1% changes per day; it will give something like total_size = 30GB + 30GB*0.01 * nb_days (instead of 30GB * nb_days). I'm quite sure it will give a compression like 19:1 for 20 backups... This kind of comparison is the big argument of dedup companies: do 20 full backups and you will have a 20:1 dedup ratio, but do 19 incrementals + 1 full and this ratio will fall down to 1:1... (It's not exactly true either, because you can save space with multiple systems having the same data.) The idea was to in some ways simulate a few things all at once. This kind of test could show how multiple similar OSes could dedupe (20 Windows OSes for example: you only have to store those bits once for any number of Windows machines); using Bacula's incrementals, you have to store the bits once per machine and then again when you do your next full each week or month. It also was to show how much you could save when doing your fulls each week or month; a similar effect would happen for the differentials too. It wasn't meant to be all inclusive, but just to show some trends that I was interested in. In our environment, since everything is virtual, we don't save the OS data, and only try to save the minimum that we need; that doesn't work for everyone though. [image: backup.png] This chart shows that using the rsync method, the data's compression grew in an almost linear fashion, while the Bacula data stayed close to 1x compression. My suspicion is that since the Bacula tape format inserts job information regularly into the stream file and lessfs uses a fixed block size, lessfs is not able to find much duplicate data in the Bacula stream. You are right, we have a current project to add a new device format that will be compatible with a dedup layer.
I don't know yet how it will work, because I can imagine that each dedup system works differently, and finding a common denominator won't be easy. A first proof of concept will certainly use LessFS (it is already on my radar). But as you said, depending on block size, alignment, etc., it's not so easy. I think in some ways each dedupe file system can work very well with each file on its own instead of being in a stream. That way the start of the file is always on a boundary that the deduplication file system uses. I think you might be able to use sparse files for a stream and always pad out to the block alignment; that would make the stream file look really large compared to what it actually uses on a non-deduped file system. I still think that if Bacula laid the data down in the same file structure as on the client, organized by job ID, with some small Bacula files to hold permissions, etc., it would be the most flexible for all dedupe file systems, because it would be individual files like they are expecting. Although Data Domain's variable block size feature allows it much better compression of Bacula data, rsync still achieved almost 2x greater compression than Bacula. The compression on disk is better; on the network layer and the remote disk IO system, this is another story. BackupPC is smarter on this part (but has problems with big sets of files). I'm not sure I understand exactly what you mean. I understand that BackupPC can cause a file system to not mount because it exhausts the number of hard links the fs can support. Luckily, with a deduplication file system, you don't have this problem, because you just copy the bits and the fs does the work of finding the duplicates. A dedupe fs can even store only a small part of a file (if most of the file is duplicate and only a small part is unique) where BackupPC would have to write that whole file. I don't want Bacula to adopt what BackupPC is doing; I think it's a step backwards.
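The sparse-padding idea above can be demonstrated with a plain sparse file: the apparent size is large, but the holes consume essentially no disk blocks. A toy sketch (assumes GNU coreutils `truncate` and `stat`; exact allocation can vary by filesystem):

```shell
# Toy demonstration of the sparse-file behavior described above.
set -e
f=$(mktemp)
truncate -s 1M "$f"             # 1 MiB apparent size, entirely a hole

apparent=$(stat -c %s "$f")     # logical (apparent) size in bytes
blocks=$(stat -c %b "$f")       # 512-byte blocks actually allocated
echo "apparent=$apparent allocated_blocks=$blocks"
rm -f "$f"
```

A stream padded this way would look huge (the apparent size), while the on-disk usage stays proportional to the real data, which is the trade-off being proposed.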
In conclusion, lessfs is a great file system and could benefit from variable block sizes, if support can be added, for both regular data and Bacula data. Bacula could also benefit greatly from providing a format similar to a native file system layout; that would help a lot on lessfs and even give a good benefit on Data Domain. Yes, variable block size and dynamic alignment seem to be the edge of the technology, but they are also heavily covered by patents (and those companies are not very friendly). And I can imagine that it's easy to ask for them, and a little more complex to implement :-) That's one of the reasons I said "if it could be implemented". If there is anything I know about OSS, it is that there are some amazing people with an ability to think so far outside the box that these things have not been able to stop the progress of OSS. Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
Re: [Bacula-users] Bacula tape format vs. rsync on deduplicated file systems
On Fri, May 28, 2010 at 10:48 AM, Eric Bollengier eric.bolleng...@baculasystems.com wrote: First, thank you for the kind replies; this is helping me to ensure I see the big picture. On Friday 28 May 2010 16:42:01, Robert LeBlanc wrote: On Fri, May 28, 2010 at 12:32 AM, Eric Bollengier eric.bolleng...@baculasystems.com wrote: Hello Robert, What would be the result if you did an Incremental backup instead of a Full backup? Imagine that you have 1% changes per day; it will give something like total_size = 30GB + 30GB*0.01 * nb_days (instead of 30GB * nb_days). I'm quite sure it will give a compression like 19:1 for 20 backups... This kind of comparison is the big argument of dedup companies: do 20 full backups and you will have a 20:1 dedup ratio, but do 19 incrementals + 1 full and this ratio will fall down to 1:1... (It's not exactly true either, because you can save space with multiple systems having the same data.) The idea was to in some ways simulate a few things all at once. This kind of test could show how multiple similar OSes could dedupe (20 Windows OSes for example: you only have to store those bits once for any number of Windows machines); using Bacula's incrementals, you have to store the bits once per machine In this particular case, you can use the BaseJob file-level deduplication that allows you to store only one version of each OS. (But I admit that if the system can do it automatically, it's better.) I agree. I haven't looked into BaseJobs yet because they are not the easiest thing to understand. Since I'm very pressed for time, I don't have a lot of time to commit to reading. I plan on understanding it, but when a system can do it automatically and transparently, I like that a lot. and then again when you do your next full each week or month. Why do you want to schedule a Full backup every week?
With the Accurate option, you can adopt the "Incremental forever" approach (a Differential can limit the number of incrementals needed for a restore). If it's to have multiple copies of a particular file (what I like to advise when using tapes), since the deduplication will turn multiple copies into a single instance, I think that it's very similar. We are using Accurate jobs on a few machines; however, I have not scheduled the roll-ups yet, as I haven't had time to read the manual enough. I need to do it soon, as I have months of incrementals without any fulls in between. I do like having multiple copies of my files on tape; on disk, not so much. The reason is I've had tapes go bad; with disk, I have a lot of redundancy built in. It also was to show how much you could save when doing your fulls each week or month; a similar effect would happen for the differentials too. It wasn't meant to be all inclusive, but just to show some trends that I was interested in. Yes, but comparing 20 full backups with 20 full copies with deduplication is like comparing apples and oranges... At least it should appear somewhere that you chose the worst case for Bacula and the best case for deduplication :-) Please remember that the Bacula tape files were on a lessfs file system, so the same amount of data was written using rsync and Bacula, just in different formats on lessfs. So the best-case scenario is that they should have had the same dedupe rate. The idea was to see how both formats fared on lessfs. In our environment, since everything is virtual, we don't save the OS data, and only try to save the minimum that we need; that doesn't work for everyone though. Yes, this is another very common way to do it, and I agree that sometimes you can't do that. It's also very practical to just rsync the whole disk and let LessFS do its job. If you want to browse the backup, it's just a directory. With Bacula, as incremental/full/differential are presented in a virtual tree, it's not needed.
Understandable. In a disaster recovery situation with Bacula, if the on-disk format were a tree, you could browse to the latest backup of your catalog, import it, and off you go. Right now, I have no clue which of the 100 tapes I have holds the latest catalog backup; I would have to scan them all, and if the backup spans tapes, I have to figure out what order to scan the tapes in to recover the backup. That could take forever. Now that I've thought about it, I think it's time for a new pool for catalog backups, sigh. I think in some ways each dedupe file system can work very well with each file on its own instead of being in a stream. That way the start of the file is always on a boundary that the deduplication file system uses. I think you might be able to use sparse files for a stream and always pad out to the block alignment. I'm not very familiar with sparse files, but I'm pretty sure that the sparse unit is a block. So, if a block is empty, OK, but if you have some bytes used inside this block, it will take 4KB. I'm not an expert with sparse
Re: [Bacula-users] Bacula tape format vs. rsync on deduplicated file systems
On Fri, May 28, 2010 at 11:49 AM, Phil Stracchino ala...@metrocast.net wrote: On 05/28/10 13:24, Robert LeBlanc wrote: I agree. I haven't looked into BaseJobs yet because they are not the easiest thing to understand. Since I'm very pressed for time, I don't have a lot of time to commit to reading. I plan on understanding it, but when a system can do it automatically and transparently, I like that a lot. The basic concept behind base jobs is that you define one machine (or the base OS install image on that machine) as a reference install for a class of similar machines and do a full backup of *it*, but then for the other machines in the class, you back up only user data plus any base system files that differ from those on the reference machine. Once I have all of my Windows boxes on the same version of Windows again (right now, half are XP Pro and half are 2K Pro), I'm planning to set up a base job for them. That will be nearly impossible for my Linux servers; they all seem to be at different patch levels. I guess it would do OK for my Windows machines, but if I need bare metal, there is a reason: the OS was configured differently from the standard. I think this is where dedup could really provide a benefit. When you patch your servers, do you have to redo your base at the same time to keep it synced? Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
[Bacula-users] Does Bacula issue the Prevent Media Removal command?
I'm trying to track down the cause of tapes being stuck in our library at random times. When a tape is stuck, the library reports that a soft removal error occurred, and all I can think of is that Bacula is sending the Prevent Media Removal command to keep tapes from being changed out without its knowledge, but sometimes it does not issue the Allow Media Removal command correctly when it needs to change the tape. This causes jobs to back up waiting for a tape. We are running Debian Squeeze with 5.0.1. Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
Re: [Bacula-users] Does Bacula issue the Prevent Media Removal command?
On Mon, May 24, 2010 at 9:00 AM, John Drescher dresche...@gmail.com wrote: I'm trying to track down the cause of tapes being stuck in our library at random times. When a tape is stuck, the library reports that a soft removal error occurred, and all I can think of is that Bacula is sending the Prevent Media Removal command to keep tapes from being changed out without its knowledge, but sometimes it does not issue the Allow Media Removal command correctly when it needs to change the tape. This causes jobs to back up waiting for a tape. We are running Debian Squeeze with 5.0.1. Thanks, I have the same thing on Gentoo from time to time with my Exabyte Magnum 224 changer. John Someone mentioned that changing Always Open to no could help, but that kind of scares me. I'd like the protection of preventing removal, as long as Bacula can re-enable removal when it needs to change tapes. I've had multiple drives changed out and have updated the firmware on the drives and library a number of times in the last couple of years to try to fix this problem, but none of that fixed it, so I think it's something either Bacula or the kernel/mtx is doing. Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
Re: [Bacula-users] Does Bacula issue the Prevent Media Removal command?
On Mon, May 24, 2010 at 9:49 AM, John Drescher dresche...@gmail.com wrote: Someone mentioned that changing Always Open to no could help, but that kind of scares me. I'd like the protection of preventing removal, as long as Bacula can re-enable removal when it needs to change tapes. I've had multiple drives changed out and have updated the firmware on the drives and library a number of times in the last couple of years to try to fix this problem, but none of that fixed it, so I think it's something either Bacula or the kernel/mtx is doing. What happens to me is that it is possible to get into a state where the robot will not remove tapes from one or both drives. I can eject the tape from the drive, but then the mtx commands and the move commands on the changer itself both fail. I have been able to reset this situation by stopping bacula-sd, unloading the st and SCSI card modules, then re-enabling both and restarting bacula-sd. John I can't get the drive to eject at all; I can get the drive to seek and rewind, but not eject, either through mt, mtx, or the library controls. I usually have to reboot the library once or twice and sometimes restart bacula-sd. It's pretty frustrating; I'd like to get to the bottom of it. Since I only roll to tape once a week, it usually takes a couple of weeks for the problem to show up. Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
Re: [Bacula-users] Does Bacula issue the Prevent Media Removal command?
On Mon, May 24, 2010 at 10:50 AM, Alan Brown a...@mssl.ucl.ac.uk wrote: On Mon, 24 May 2010, Robert LeBlanc wrote: I'm trying to track down the cause of tapes being stuck in our library at random times. What is the make and model of your library, _and_ your tape drives? I get this semi-regularly here with HP LTO2 drives and an MSL6000 changer (Overland Neo 4000). Most of the time, simply issuing a mount command inside bconsole causes the drive to be rescanned and unlocked; then the tape will eject and unload correctly. (Mount, device, slot 0) Last night we had SCSI I/O errors which persisted because the tape had been ejected yet hadn't been unloaded. In that case it was a matter of telling mtx to unload/reload the tape drive before Bacula regained control of the situation. I believe this is a robot or FC/SCSI bridge glitch, but HP have consistently dodged all questions on the issue and close trouble tickets as fast as they're opened... Ideally Bacula would attempt this stuff automatically instead of needing human intervention. I've tried to file a bug report on the problem, but Kern keeps closing it. AB We have a Neo 8000 (firmware 5.16) with two HP LTO-3 drives (firmware 011.930) and one HP LTO-4 drive (firmware 011.135). Sometimes Bacula will recover if I let it sit and it automatically retries, but this is rare. I have never had a case where the tape was ejected and all that was needed was for me to move the tape; in every case the tape will not eject. Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
Re: [Bacula-users] [Bacula-devel] Idea/suggestion for dedicated disk-based sd
On Thu, Apr 8, 2010 at 12:39 AM, Kern Sibbald k...@sibbald.com wrote: Hello, I haven't seen the original messages, so I am not sure I understand the full concept here, and my remarks may not be pertinent. However, from what I see, this is basically similar to what BackupPC does. The big problem I have with it is that it does not scale well to thousands of machines. If I were thinking about changing the disk Volume format, I would start by looking at how git handles storing objects, and whether git can scale to handle a machine with 40 million file entries. One thing is sure: unless some new way of implementing hardlinks is found, you will never see Bacula using hard links in the volumes. That is a sure way to make your machine unbootable if you scale large enough. Just back up enough clients with BackupPC and one day you will find that fsck no longer works. I suspect that it will require only a couple hundred million hardlinks before a Linux machine will no longer boot. It wasn't my intention that Bacula try to create hard links like BackupPC; I figured that if someone wanted to do that, they could run a script outside of Bacula. I'm thinking of the ability to offload the data compression to the file system in general, or alternatively have Bacula compress it. The reason is that with Bacula's current tape format, dedup technologies cannot dedup it very well. From what I can tell of the tape format, every 64K of duplicate data has a unique header, rendering it unique and therefore not a candidate for dedup. I had two ideas for trying to overcome this problem: one was to have a slightly modified Bacula tape format for disks that would move the unique header information to the front or the back of the job stream, and the format would create a sparse file with job files starting at a user-defined block size. I then thought about storing tier 3 data on the same dedup device or file system, and that if done a certain way we could get 'free' backups.
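The effect described above, where a small unique header interleaved with otherwise identical data defeats fixed-block dedup, can be illustrated with a toy experiment: count unique 64 KiB chunks with and without per-block headers. This is a sketch of the general principle only, not of Bacula's actual record layout:

```shell
# Toy illustration: fixed-block dedup vs. interleaved per-block headers.
set -e
work=$(mktemp -d); cd "$work"

head -c 65536 /dev/urandom > block          # one 64 KiB block of "file data"
cat block block block block > plain         # four identical copies, back to back
# Prepend a unique 8-byte header to each copy, shifting all later alignment:
{ for i in 1 2 3 4; do printf 'HDR%05d' "$i"; cat block; done; } > headered

# Count unique 64 KiB chunks, mimicking a fixed-block-size dedup store:
dedup() { split -b 65536 "$1" c_; md5sum c_* | awk '{print $1}' | sort -u | wc -l; rm c_*; }
plain_unique=$(dedup plain)          # the four chunks are identical
headered_unique=$(dedup headered)    # every chunk now differs
echo "plain=$plain_unique headered=$headered_unique"
```

The plain stream stores one unique block; the headered stream shares nothing between chunks, because the headers both differ per block and shift every subsequent byte off the chunk boundary.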
If Bacula backed up to the same device with a hierarchical file system approach, then the original files and the files that Bacula backed up would look the same. Plus, it would be easy to recover in the case of total failures of bacula-dir and bacula-sd (I'm thinking disaster recovery). I've been running Bacula backups on a dedup box for almost a year and can't get better than 4x, when I believe the data that we have should yield about 10x. With dedup becoming more popular, I'm just trying to make Bacula even more appealing for those who want to dedup. If people are using straight disk, then compression could be enabled by Bacula and the format might be a little different (like tar bz2 archives), but most newer file systems are starting to support on-the-fly compression, so I don't know how critical it is. These are all ideas to get some discussion about what, if a file-aware SD is implemented, may be good to offer for maximum flexibility and to leverage features being implemented in current and future file systems. Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University -- Download Intel® Parallel Studio Eval Try the new software tools for yourself. Speed compiling, find bugs proactively, and fine-tune applications for parallel performance. See why Intel Parallel Studio got high marks during beta. http://p.sf.net/sfu/intel-sw-dev ___ Bacula-users mailing list Bacula-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bacula-users
Re: [Bacula-users] Need help debugging SD crash
On Thu, Apr 8, 2010 at 9:02 AM, Matija Nalis mnalis+bac...@carnet.hr wrote: On Tue, Apr 06, 2010 at 08:40:20AM -0600, Robert LeBlanc wrote: I've tried in the past to do exactly this. Bacula will usually spit out an error that the tape could not be moved, or in rarer situations say the drive is not there. I then shut down bacula-sd and try to run the mt eject command. I usually get back about ten lines that describe the error, but it doesn't really make sense. Sometimes the drive doesn't appear as a device on the system any more. As far as the tape library, the Overland Neo 8000 most of [...] the time says "soft removal error" on the screen and will keep saying that if I try to have the library remove it. There is no easy way to get to the hardware eject button, as the library is fully enclosed. It looks like the drive gets confused if it gets commands too fast (and/or while it is still processing previous commands)... Anyway, it looks like the problem is outside Bacula (probably either the kernel or the drive firmware, or both, are at fault). drives and our LTO-4 drive. The only thing that I can think of is that Bacula is trying to take some shortcuts (issuing a command to move the tape and expecting the tape library to correctly rewind the tape, eject, and then move it; maybe Bacula is not quite letting go of the drive fast enough, and there is a deadlock between the drive controlled by Bacula and the library trying to control it), or there is a kernel/driver problem. The only thing Bacula does is execute the mtx-changer script; it is the script's responsibility to do everything needed for your drive/changer combination. The default script is usually good, but you may need to tailor it for your needs (if it needs a manual rewind before offline, or things like that).
I've set offline=1 in mtx-changer.conf and that seems to help a little. I've still encountered some drive unmounting issues, but nothing that bacula hasn't been able to recover from on its own or with very little manual intervention.

I run mine (IBM 3584) with: offline=1 offline_sleep=2 load_sleep=20. I do recall having sporadic issues with a load_sleep of just 2-3 seconds, so I've put it to 20 to allow the drive to settle fully before issuing a bunch of mt status calls to it in wait_for_drive().

I was pretty sure the messages were informational; I'm glad that someone can confirm that. I'll keep working on the problem to see what I can come up with. If there is a better way to tell Bacula to be stupid slow with unmount and mount requests, that may help me find where in the process things are getting hung up.

Well, you can put (in 5.0.1 at least) offline_sleep and load_sleep to 30 seconds or more, for example; that might help if the drive is getting confused while receiving commands too fast. On older versions (3.0.x or 2.4?) you can edit the mtx-changer shell script itself; IIRC it had commented-out sleep statements at the right places already...

Thanks, this is helpful, I'll give these a try. Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
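For reference, the settings discussed above live in mtx-changer.conf, which is sourced by the mtx-changer shell script. A sketch with the values reported in this thread (your drive may need different sleep values):

```shell
# mtx-changer.conf -- sourced by the mtx-changer script
offline=1          # issue an offline (eject) before moving the tape
offline_sleep=2    # seconds to wait after the offline command
load_sleep=20      # seconds to let the drive settle after a load,
                   # before wait_for_drive() starts polling mt status
```

Raising offline_sleep and load_sleep to 30 or more, as suggested above, is just a matter of editing these two lines.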
Re: [Bacula-users] Idea/suggestion for dedicated disk-based sd
On Tue, Apr 6, 2010 at 5:19 PM, Robert LeBlanc rob...@leblancnet.us wrote: On Tue, Apr 6, 2010 at 12:37 AM, Craig Ringer cr...@postnewspapers.com.au wrote: [snip] Is this insane? Or a viable approach to tackling some of the complexities of faking tape backup on disk as Bacula currently tries to do?

I love Bacula and have been working hard to promote it to people I know. The biggest problem with bacula is its disk management. We have a DataDomain box that is getting a horrible dedup rate, and after looking at the Bacula tape stream format, I can understand why. There is so much extra data inserted into the stream that is very helpful for tape drives that it makes deduping the data nearly impossible. I would love to see the stream simplified for disk-based storage. Another thing I'd like the option for is to be able to specify a block size and start each file on a block boundary; sparse files could be used to skip the space without taking it up. This would allow dedup algorithms to really be able to compress Bacula data much better. It would be awesome if the file stored in the Bacula stream looked exactly like it does on the file system, so that if you do any tier 3 storage with dedup and run your Bacula backups to the same storage, you get free backups. Dedup is gaining a lot of traction; name your favorite vendor, or as I'm doing, look at lessfs. All of these would benefit hugely from a smart SD that knows how to handle disk storage better and make Bacula much more attractive. With the types of backups we are doing, we should be getting 10x easy on our DataDomain, but we are lucky to get 4x, and I think that mostly comes from compression. Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University

So, still thinking about this, is there any reason not to have a hierarchical file structure for disk-based backup rather than a serialized stream? Here are my thoughts; any comments are welcome to have a good discussion about this. 
SD_Base_Dir
+- PoolA
+- PoolB
   +- JobID1
   +- JobID2
      +- Clientinfo.bacula (Bacula serial file that holds information similar to the block header)
      +- Original File Structure (file structure from the client is maintained and repeated here, allows for browsing of files outside of bacula)
         +- ClientFileA
         +- ClientFileA.bacula (Bacula serial file that holds information similar to the Unix file attribute package)
         +- ClientFileB
         +- ClientFileB.bacula
         +- ClientDirA
         +- ClientDirA.bacula

Although it's great to reuse code, I think something like this would be very beneficial to disk-based backups. This would help increase dedup rates, and some file systems like btrfs and ZFS may be able to take advantage of linked files (there has been some discussion on the btrfs list about things like this). This would also allow it to reside on any file system, as all the ACLs and information are serialized in separate files, which keeps unique data out of the blocks of possibly duplicated data. I think we could even reuse a lot of the serialization code, so it would just differ in how it writes the stream of data. Please excuse me if I'm way off here, just trying to think outside of the box a little. Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
Re: [Bacula-users] Idea/suggestion for dedicated disk-based sd
On Wed, Apr 7, 2010 at 2:15 PM, Phil Stracchino ala...@metrocast.net wrote: After having thought about this a bit, I believe the idea has significant merit. Tape and disk differ significantly enough that there is no conceptual reason not to have separate tape-specific and disk-specific SDs. So long as the storage logically looks the same from the point of view of other daemons, the other daemons don't need to know that the underlying storage architecture is different. Creating a hierarchical disk SD in this fashion that appears to the rest of Bacula exactly the same as the existing SD does, and yet takes advantage of the features offered by such an implementation, will not necessarily be a trivial problem. It's a pretty major project and, if approved, wouldn't happen right away. The major problem I see at the moment, architecturally speaking, is that at the present time, this would break both migration and copy jobs between volumes on the new disk-only SD and volumes of any kind on the traditional SD, because Bacula does not yet support copy or migration between different SDs. At this time, both source and destination devices are required to be on the same SD.

I didn't think about the copy/migration jobs (I'm using them), and that would be a problem. It seems that for this to take off, copy/migration between SDs will have to be implemented. We would have to look at the stream as a copy/migration is happening; I believe that the record blocks are being rewritten with a new jobid and time, so it seems that the data is already being reconstructed and rewritten. The disk SD would have to provide the same stream from the hierarchical file system and be able to take the reconstructed stream and build a hierarchical file system from it. The question I have is: what is the barrier to implementing inter-SD copy/migration jobs? 
Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
Re: [Bacula-users] Need help debugging SD crash
On Tue, Apr 6, 2010 at 3:23 AM, Matija Nalis mnalis+bac...@carnet.hr wrote: On Sun, Apr 04, 2010 at 01:20:49PM -0600, Robert LeBlanc wrote: I'm having problems with our SD and tapes being locked in the drive occasionally.

How does it manifest exactly? Does the bconsole umount command return an error, or remain in some state (check with status storage)? Which state and/or error? Have you tried shutting down bacula-sd and ejecting the tape with mt eject and/or mt offline? Do they succeed (and the drive ejects), or do they return an error (and which one)? Double-check that bacula-sd is down before you try those (they won't work if bacula-sd still has the drive open). And if mt(1) also fails, can you eject the tape manually by using the tape library's eject function and/or pressing the hardware eject button on the drive itself (depending on the library type...)?

I've tried in the past to do exactly this. Bacula will usually spit out an error that the tape could not be moved, or in rarer situations say the drive is not there. I then shut down bacula-sd and try to run the mt eject command. I usually get back about ten lines that describe the error, but it doesn't really make sense. Sometimes the drive doesn't appear as a device on the system any more. As far as the tape library, the Overland Neo 8000 most of the time says soft removal error on the screen and will keep saying that if I try to have the library remove it. There is no easy way to get to the hardware eject button as the library is fully enclosed. We have had the LTO-3 drives replaced on multiple occasions for this very reason; that is why I don't think it is the drive. Each drive has a fibre connection to the Fibre switch, and it has happened on both our LTO-3 drives and our LTO-4 drive. 
The only thing that I can think of is that bacula is trying to take some shortcuts (issuing a command to move the tape and expecting the tape library to correctly rewind the tape, eject it, and then move it; maybe bacula is not quite letting go of the drive fast enough, and there is a deadlock between the drive controlled by Bacula and the library trying to control it), or there is a kernel/driver problem. I've set offline=1 in mtx-changer.conf and that seems to help a little. I've still encountered some drive unmounting issues, but nothing that bacula hasn't been able to recover from on its own or with very little manual intervention.

If mt works but bacula-sd doesn't, then you can rule out hardware and kernel -- it is a bacula problem (and usually status storage will show it -- it can happen sometimes if you have more than one drive that it deadlocks by waiting for a tape that is in the other drive).

At first I thought this might be a problem with our tape library.

That still looks like the most probable cause to me -- like a drive in the library is having problems. We've had a similar issue with one of several LTO2 drives in our library; it would (sometimes) take the tape and refuse to give it back (on mt eject and even a physical button press). It needed power cycling and a long (half a minute?) button press to make it give the tape back. After it happened a third time (always the same drive) we kicked it out of the library. The other drives worked OK all the time. If the hardware button always works but software commands don't, it could be the fiber cables and/or GBIC/SFP (which we refused to believe at one time because the drives were always detected OK and worked, albeit sometimes much slower than normal, without any errors in the kernel logs, and would also lock up). You can try a cleaning tape also.

Then I saw these errors in the syslog. I switched out the Qlogic FC adapter thinking that maybe it was just losing all the paths to the drive. 
AFAIR you would get different errors if it loses the path completely (but it is possible for the drive to behave erratically even if it doesn't lose the path).

I have seen times where there are path errors, and that is when the drive seems to disappear from the system completely, but this is not the usual case nor the one that causes the most problems. I'm still getting the errors, so I'm not sure where the hangup is. I can't tell if it's a bug in the kernel module, mt, or bacula. Can someone give me some pointers to narrowing this down? This has been happening for over a year and through several kernel and bacula versions. This is Debian Squeeze: Linux lsddomainsd 2.6.32-trunk-686 #1 SMP Sun Jan 10 06:32:16 UTC 2010 i686 GNU/Linux

The INFO: messages themselves are just a normal feature of newer 2.6.x kernels; they are informational messages only (hence the INFO: prefix) telling you that some system call (like open(2) or write(2) or read(2)) is taking longer than 120 seconds to complete. They didn't exist in older kernels. It is there to catch problems with I/O schedulers and problematic hardware -- but sometimes it needs to be increased for tape drives (it is quite possible
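The 120-second threshold mentioned above is the kernel's hung-task watchdog, and it is tunable. A sketch, assuming a kernel built with the hung-task detector (the default on Debian kernels of this era):

```
# /etc/sysctl.conf fragment -- raise the hung-task warning threshold
# from the default 120 s so that long-running tape ioctls stop
# triggering the INFO: messages; setting it to 0 disables the warning.
kernel.hung_task_timeout_secs = 600
```

Apply it with `sysctl -p` (as root); this only silences the informational warning, it does not change how long the ioctl actually takes.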
Re: [Bacula-users] Multiple drives in changer
On Tue, Apr 6, 2010 at 6:13 AM, Matija Nalis mnalis+bac...@carnet.hr wrote: On Fri, Apr 02, 2010 at 10:36:59AM -0600, Robert LeBlanc wrote: On Fri, Apr 2, 2010 at 2:44 AM, Matija Nalis mnalis+bac...@carnet.hr wrote: I think you need to set Prefer Mounted Volumes = no

I guess this is where we need clarification about what an available drive is. I took this to mean that a drive that has no tape is more available, and then a drive that does already have a tape mounted would be next in availability.

Hm, it looks to me that any drive which is not doing an R/W operation (no matter if there is a tape in the drive or not) is counted as available. I could be wrong on that, though. Anyway, the safest way to know is to test it and let the others know how it goes :)

From my observations of a few tests, this indeed seems to be the case. If the drive is not being read from or written to, it is considered available.

It seems that as long as no job is writing to that tape, then the drive is available. I do want this setting to yes and not no; however, I would like to minimize tape changes but take advantage of the multiple drives.

From what I see in practice, Prefer Mounted Volumes = yes makes sure there is only one drive in each pool that does the writing. For example, I have a pool of 4 drives and I start 10 jobs at the same time, all using the same pool. I have a concurrency of 10 and spooling enabled, so all the jobs run at once and start spooling to disk -- but when they need to despool, one drive will grab a free tape from Scratch, and all the jobs will wait for their turn to write to one tape in one drive, leaving 3 drives idle all the time. Only when that tape is full is another one loaded, and the process repeats. I think the same happens when I disable spooling, but then the jobs all interleave writes -- but still all of them will write to one tape in one drive only. 
If you set Prefer Mounted Volumes = no, then all 4 drives get loaded with 4 fresh tapes (or just use them if the right tapes are already in the right drives -- I guess; I have an autochanger) and each tape gets written to at the same time, maximizing drive (and thus tape) usage. But the no setting can (or at least could in the past) lead to deadlocks sometimes (if you have an autochanger), when no new jobs get serviced because drive A will wait for tape 2 that is currently in drive B, and at the same time drive B will wait for tape 1 which is currently in drive A. Then manual intervention (umount/mount) is needed (which is a big problem for us as we have lots of jobs/tapes).

The (recommended) alternative is to go the semi-manual way -- dedicate a special pool to each drive, and go with Prefer Mounted Volumes = yes. Then one can (and indeed, must) specify manually which jobs go in which pools (and hence, to which drives) and can optimize for maximum parallelism without deadlocks -- but it requires more planning, is problematic if your backups are more dynamic and hard to predict, requires a redesign when you add/upgrade/remove drives, and your pools might become somewhat harder to manage.

This is exactly my experience, and my goal is not to use multiple drives in the same pool at the same time; it's to use drives for different pools at the same time, one drive per pool. We are looking to bring up a lot more storage in the future and will probably adopt the mentality of multiple daily, weekly, and monthly pools and split them up based on the number of drives we want to run concurrently. I think that is the best way to go with Bacula for what we want to do. Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
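The pool-per-drive layout described above can be sketched in bacula-dir.conf. Resource names and the address are hypothetical, and the Device name must match a per-drive Device resource in bacula-sd.conf; the Pool-level Storage directive is what pins a pool to one drive:

```
Storage {
  Name = Lib-Drive0
  Address = bacsd0.example.com      # hypothetical SD address
  Password = "secret"
  Device = Drive-0                  # matches a Device in bacula-sd.conf
  Media Type = LTO-3
  Autochanger = yes
}
Pool {
  Name = Daily
  Pool Type = Backup
  Storage = Lib-Drive0              # pin this pool to one drive
}
# Repeat with Lib-Drive1 / Weekly, and so on, one pool per drive,
# leaving Prefer Mounted Volumes at its default of yes.
```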
[Bacula-users] Fwd: Idea/suggestion for dedicated disk-based sd
On Tue, Apr 6, 2010 at 12:37 AM, Craig Ringer cr...@postnewspapers.com.au wrote: [snip] Is this insane? Or a viable approach to tackling some of the complexities of faking tape backup on disk as Bacula currently tries to do?

I love Bacula and have been working hard to promote it to people I know. The biggest problem with bacula is its disk management. We have a DataDomain box that is getting a horrible dedup rate, and after looking at the Bacula tape stream format, I can understand why. There is so much extra data inserted into the stream that is very helpful for tape drives that it makes deduping the data nearly impossible. I would love to see the stream simplified for disk-based storage. Another thing I'd like the option for is to be able to specify a block size and start each file on a block boundary; sparse files could be used to skip the space without taking it up. This would allow dedup algorithms to really be able to compress Bacula data much better. It would be awesome if the file stored in the Bacula stream looked exactly like it does on the file system, so that if you do any tier 3 storage with dedup and run your Bacula backups to the same storage, you get free backups. Dedup is gaining a lot of traction; name your favorite vendor, or as I'm doing, look at lessfs. All of these would benefit hugely from a smart SD that knows how to handle disk storage better and make Bacula much more attractive. With the types of backups we are doing, we should be getting 10x easy on our DataDomain, but we are lucky to get 4x, and I think that mostly comes from compression. Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
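The block-boundary idea above can be illustrated outside Bacula. A hypothetical sketch using dd's seek to start each record on a 4 KiB boundary, which leaves the gap between records as a hole on file systems that support sparse files:

```shell
# Write two payloads into one volume file, each starting on a 4 KiB
# boundary; dd's seek skips the intervening space without writing it.
vol=/tmp/aligned-vol
rm -f "$vol"
printf 'first-record'  | dd of="$vol" bs=4096 seek=0 conv=notrunc 2>/dev/null
printf 'second-record' | dd of="$vol" bs=4096 seek=1 conv=notrunc 2>/dev/null
# The second record now sits at byte offset 4096, so a block-based
# dedup engine sees it aligned regardless of the first record's length.
stat -c '%s' "$vol"    # apparent size: 4096 + len('second-record')
```

Because each record's offset depends only on the block grid and not on neighboring record sizes, identical file contents produce identical aligned blocks, which is exactly what fixed-block dedup engines need.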
[Bacula-users] Need help debugging SD crash
] ? scsi_dispatch_cmd+0x185/0x1e5 [scsi_mod]
Apr 4 07:10:23 lsddomainsd kernel: [137761.030449] [f7e37382] ? scsi_request_fn+0x3c1/0x47a [scsi_mod]
Apr 4 07:10:23 lsddomainsd kernel: [137761.030454] [c1259e52] ? wait_for_common+0xa4/0x100
Apr 4 07:10:23 lsddomainsd kernel: [137761.030460] [c102da50] ? default_wake_function+0x0/0x8
Apr 4 07:10:23 lsddomainsd kernel: [137761.030466] [f92fe756] ? st_scsi_execute_end+0x0/0x45 [st]
Apr 4 07:10:23 lsddomainsd kernel: [137761.030470] [f92fee28] ? st_do_scsi+0x28d/0x2b5 [st]
Apr 4 07:10:23 lsddomainsd kernel: [137761.030474] [f92ffb81] ? st_int_ioctl+0x624/0xa68 [st]
Apr 4 07:10:23 lsddomainsd kernel: [137761.030480] [c11be12a] ? release_sock+0xf/0x7f
Apr 4 07:10:23 lsddomainsd kernel: [137761.030486] [c11f0c2a] ? tcp_sendmsg+0x69d/0x77a
Apr 4 07:10:23 lsddomainsd kernel: [137761.030490] [f92ff92e] ? st_int_ioctl+0x3d1/0xa68 [st]
Apr 4 07:10:23 lsddomainsd kernel: [137761.030496] [c11bbb11] ? __sock_sendmsg+0x43/0x4a
Apr 4 07:10:23 lsddomainsd kernel: [137761.030501] [f930198a] ? st_ioctl+0xb1b/0xe62 [st]
Apr 4 07:10:23 lsddomainsd kernel: [137761.030504] [c1259e5d] ? wait_for_common+0xaf/0x100
Apr 4 07:10:23 lsddomainsd kernel: [137761.030511] [c10b1aa2] ? do_sync_write+0xc0/0x107
Apr 4 07:10:23 lsddomainsd kernel: [137761.030515] [f9300e6f] ? st_ioctl+0x0/0xe62 [st]
Apr 4 07:10:23 lsddomainsd kernel: [137761.030521] [c10bc220] ? vfs_ioctl+0x1c/0x5f
Apr 4 07:10:23 lsddomainsd kernel: [137761.030525] [c10bc7b4] ? do_vfs_ioctl+0x4aa/0x4e5
Apr 4 07:10:23 lsddomainsd kernel: [137761.030529] [c10b17ee] ? fsnotify_modify+0x5a/0x61
Apr 4 07:10:23 lsddomainsd kernel: [137761.030533] [c10b23ee] ? vfs_write+0x9e/0xd6
Apr 4 07:10:23 lsddomainsd kernel: [137761.030537] [c10bc830] ? sys_ioctl+0x41/0x58
Apr 4 07:10:23 lsddomainsd kernel: [137761.030543] [c10030fb] ? sysenter_do_call+0x12/0x28

Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
Re: [Bacula-users] Multiple drives in changer
On Fri, Apr 2, 2010 at 2:44 AM, Matija Nalis mnalis+bac...@carnet.hr wrote: On Thu, Apr 01, 2010 at 08:49:43AM -0600, Robert LeBlanc wrote: I have two LTO-3 drives in a changer and three LTO-3 pools. Earlier versions of Bacula would try to use an empty drive before unloading a drive when a tape from a different pool was requested. I used to also be able to run parallel migration jobs from two different pools at the same time. Since moving to 5.0.1, my second drive goes unused. Is there some change in the code that prevents this behavior?

I think you need to set Prefer Mounted Volumes = no to get the old behaviour (maximum parallelism). Note however that it might lead to deadlocks (or at least, it could in the past... I don't know if that was fixed).

Prefer Mounted Volumes = yes|no -- If the Prefer Mounted Volumes directive is set to yes (default yes), the Storage daemon is requested to select either an Autochanger or a drive with a valid Volume already mounted in preference to a drive that is not ready. This means that all jobs will attempt to append to the same Volume (providing the Volume is appropriate -- right Pool, etc. for that job). If no drive with a suitable Volume is available, it will select the first available drive. Note, any Volume that has been requested to be mounted will be considered valid as a mounted volume by another job. Thus if multiple jobs start at the same time and they all prefer mounted volumes, the first job will request the mount, and the other jobs will use the same volume. If the directive is set to no, the Storage daemon will prefer finding an unused drive; otherwise, each job started will append to the same Volume (assuming the Pool is the same for all jobs). Setting Prefer Mounted Volumes to no can be useful for those sites with multiple-drive autochangers that prefer to maximize backup throughput at the expense of using additional drives and Volumes. 
This means that the job will prefer to use an unused drive rather than a drive that is already in use.

I guess this is where we need clarification about what an available drive is. I took this to mean that a drive that has no tape is more available, and then a drive that does already have a tape mounted would be next in availability. It seems that as long as no job is writing to that tape, then the drive is available. I do want this setting to yes and not no; however, I would like to minimize tape changes but take advantage of the multiple drives. Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
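Per the directive text quoted above, the old maximize-parallelism behaviour is restored per job. A minimal fragment (the job name is hypothetical, and the remaining required Job directives are elided):

```
Job {
  Name = ParallelMigrate          # hypothetical name
  # Pick an unused drive instead of appending to an already-mounted
  # volume; may deadlock with autochangers, as noted in this thread.
  Prefer Mounted Volumes = no
  # (remaining Job directives unchanged)
}
```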
Re: [Bacula-users] Multiple drives in changer
On Thu, Apr 1, 2010 at 8:49 AM, Robert LeBlanc rob...@leblancnet.us wrote: I have two LTO-3 drives in a changer and three LTO-3 pools. Earlier versions of Bacula would try to use an empty drive before unloading a drive when a tape from a different pool was requested. I used to also be able to run parallel migration jobs from two different pools at the same time. Since moving to 5.0.1, my second drive goes unused. Is there some change in the code that prevents this behavior?

Well, I'm not sure where the inconsistencies are, but I started migrating our Daily pool from disk to tape and it unloaded the Weekly pool tape that was already loaded, then it loaded a Daily tape and ran a few migration jobs. I then ran the Weekly migration job, and when the Daily migration job that was running finished, it unloaded the Daily tape from Drive 1 and loaded it into Drive 2, then loaded a Weekly tape into Drive 1 and went on its merry way running both jobs at once. I'm not sure why there is all this changing going on. Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
[Bacula-users] Multiple drives in changer
I have two LTO-3 drives in a changer and three LTO-3 pools. Earlier versions of Bacula would try to use an empty drive before unloading a drive when a tape from a different pool was requested. I used to also be able to run parallel migration jobs from two different pools at the same time. Since moving to 5.0.1, my second drive goes unused. Is there some change in the code that prevents this behavior? Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
Re: [Bacula-users] HELP - Bacula 5.0.0 migration jobs not honoring retention period
On Thu, Mar 4, 2010 at 8:40 AM, Robert LeBlanc rob...@leblancnet.us wrote: I've set up some migration jobs from tapes that had errors and were not using full capacity to disk, so that I could migrate them back. I started these jobs using bacula 3.x, then upgraded in the middle. I've set the disk pool to have a retention period of 1 month 7 days, and there were two volumes from when 3.x created them. Overnight last night, these two volumes were purged and recycled 5 times instead of having new volumes created. There is plenty of space left on the disk, and there is no volume number restriction. All my tapes are showing that all the jobs are migrated, so it won't remigrate the jobs. The data on the tapes is still there. Please advise on why the retention period was not honored and how to remigrate the jobs.

Storage {
  Name = DD-tmp
  Address = bacsd0.byu.edu
  Password = secretpassword
  Media Type = DD-tmp
  Device = DD-tmp
  Maximum Concurrent Jobs = 8
}
Pool {
  Name = DD-tmp
  Pool Type = Backup
  LabelFormat = tmp-
  Recycle = yes
  AutoPrune = yes
  Storage = DD-tmp
  Volume Retention = 37 days
# Use Volume Once = yes
  Maximum Volume Bytes = 100G
}
Job {
  Name = Redo_454FLX
  Type = Migrate
  Level = Full
  Client = 454datarig-fd
  FileSet = FULL Windows
  Messages = Standard
  Pool = 454FLX
  Maximum Concurrent Jobs = 4
  Selection Type = Volume
  Selection Pattern = .*[2,4]L4
}

Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University

My backups are honoring the retention time just fine and are creating new volumes as needed, but I still need help with these failed migrations. Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
[Bacula-users] HELP - Bacula 5.0.0 migration jobs not honoring retention period
I've set up some migration jobs from tapes that had errors and were not using full capacity to disk, so that I could migrate them back. I started these jobs using bacula 3.x, then upgraded in the middle. I've set the disk pool to have a retention period of 1 month 7 days, and there were two volumes from when 3.x created them. Overnight last night, these two volumes were purged and recycled 5 times instead of having new volumes created. There is plenty of space left on the disk, and there is no volume number restriction. All my tapes are showing that all the jobs are migrated, so it won't remigrate the jobs. The data on the tapes is still there. Please advise on why the retention period was not honored and how to remigrate the jobs.

Storage {
  Name = DD-tmp
  Address = bacsd0.byu.edu
  Password = secretpassword
  Media Type = DD-tmp
  Device = DD-tmp
  Maximum Concurrent Jobs = 8
}
Pool {
  Name = DD-tmp
  Pool Type = Backup
  LabelFormat = tmp-
  Recycle = yes
  AutoPrune = yes
  Storage = DD-tmp
  Volume Retention = 37 days
# Use Volume Once = yes
  Maximum Volume Bytes = 100G
}
Job {
  Name = Redo_454FLX
  Type = Migrate
  Level = Full
  Client = 454datarig-fd
  FileSet = FULL Windows
  Messages = Standard
  Pool = 454FLX
  Maximum Concurrent Jobs = 4
  Selection Type = Volume
  Selection Pattern = .*[2,4]L4
}

Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
[Bacula-users] Change media type of existing media
Hello all, I've got some file storage that I use for backups, and I've created a few storage declarations for a few pools which are just different directories on the same file system. I specified Media Type = File for all the declarations, and when I went to do a restore that spanned multiple of these storage declarations, I got an error similar to "Please mount volume X in storage A", but that volume is in storage B. I was able to just move the file to the storage A directory and it was happy, then I moved it back. I found that what I should have done when I set everything up was to give each storage a different Media Type (like File-A, File-B, File-C, etc.). How can I change the media type for existing volumes? In Bat there doesn't seem to be a way to change it on a per-volume basis. Will 'update', then 'volume parameters', then 'All Volumes from Pool' do the trick if I make the appropriate changes in bacula-dir.conf and bacula-sd.conf first? Or will this require some direct SQL manipulation? Thanks, Robert LeBlanc Life Sciences Undergraduate Education Computer Support Brigham Young University
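If it does come down to direct SQL manipulation, the media type is a single column in the catalog's Media table. A hypothetical sketch (pool names assumed; stop the director and back up the catalog first, and make the matching change in bacula-dir.conf and bacula-sd.conf):

```sql
-- Give the volumes of each directory-backed pool their own media type.
UPDATE Media
   SET MediaType = 'File-A'
 WHERE PoolId = (SELECT PoolId FROM Pool WHERE Name = 'PoolA');
```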
Re: [Bacula-users] Log partition weirdness
On Wed, Dec 23, 2009 at 2:21 PM, Martin Simmons mar...@lispworks.com wrote:

Possibly you have sparse files in /var/log? Bacula can handle those better if you use the sparse=yes option in the FileSet (see the docs for more).

I tried sparse=yes, but that didn't help. I worked through a file list and found the culprit to be lastlog. Although it shows only 27 entries (using the lastlog program), it is 216GB. It is supposed to be a sparse file, but Bacula is still estimating it at 216GB. Since I can't apply the sparse option to just one file, I may simply exclude it, as it's not critical for restore. Thanks for the help.

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
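For the record, excluding just lastlog can be done with a wildfile entry in an exclude Options block. A minimal sketch based on the LinuxServer FileSet quoted in this thread — the lastlog wildfile line is the only addition:

```
FileSet {
  Name = LinuxServer
  Include {
    Options {
      signature = MD5
      sparse = yes
      exclude = yes
      wildfile = "/var/log/lastlog"   # skip the huge sparse lastlog
    }
    File = /
    File = /var/log
  }
}
```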
[Bacula-users] Log partition weirdness
So, I've run into an interesting problem with 3.x. We usually configure our servers with separate root, home and log partitions using LVM. I've seen this in the past, but now I'm looking for an answer. We are running 3.0.2 on Debian Squeeze. When we include the /var/log/ partition, either by using onefs=no or by using onefs=yes and specifying it explicitly, it explodes the size of the backup. For instance, here is the layout of the partitions for one machine:

Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/lsgw-root  2.3G  842M  1.3G  39% /
tmpfs                  502M     0  502M   0% /lib/init/rw
udev                    10M  176K  9.9M   2% /dev
tmpfs                  502M     0  502M   0% /dev/shm
/dev/sda1              228M   25M  192M  12% /boot
/dev/mapper/lsgw-home  938M   18M  920M   2% /home
/dev/mapper/lsgw-logs  3.6G   77M  3.6G   3% /var/log

If we exclude the /var/log/ partition, estimate gives us:
2000 OK estimate files=40539 bytes=711,837,359

When we include the /var/log/ partition using onefs=no, estimate gives us:
2000 OK estimate files=66741 bytes=233,544,784,662

When we include the /var/log/ partition using onefs=yes, estimate gives us:
2000 OK estimate files=40620 bytes=232,347,795,189

du on the machine gives us:
du -sh /
du: cannot access `/proc/4002/task/4002/fd/4': No such file or directory
du: cannot access `/proc/4002/task/4002/fdinfo/4': No such file or directory
du: cannot access `/proc/4002/fd/4': No such file or directory
du: cannot access `/proc/4002/fdinfo/4': No such file or directory
800M    /

As you can see, a far cry from 232 GB. I've tried accurate backups without any change, and from experience the estimate matches what is actually backed up, so it's not just a bad figure from the estimate command. Here is my FileSet definition:

FileSet {
  Name = LinuxServer
  Include {
    Options {
      signature = MD5
      onefs = yes
      exclude = yes
      wildfile = .reiserfs*
      wildfile = .journal
      wildfile = .autofsck
      wildfile = *~
    }
    File = /
    File = /home
    File = /var/log
  }
}

For some reason /home is not affected by this, only /var/log.
What can I do to track down what is going on? For now I'll have to exclude /var/log, but I'd like to get to the bottom of this.

Thanks,

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
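A quick way to hunt for this kind of offender is to compare each file's apparent size with the space it actually allocates; sparse files like lastlog show a huge gap between the two. A sketch using GNU find (the path and thresholds are just examples):

```shell
# List files under /var/log whose apparent size is more than twice the
# space they actually occupy on disk -- a strong hint they are sparse.
# %s = apparent size in bytes, %b = allocated 512-byte blocks (GNU find).
find /var/log -type f -printf '%s %b %p\n' \
  | awk '$1 > 2 * $2 * 512 && $1 > 1048576 {print $3, $1, $2*512}'
```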
Re: [Bacula-users] copy for offsite storage and then migrate to tape
On Fri, Dec 4, 2009 at 1:40 AM, Arno Lehmann a...@its-lehmann.de wrote:

Hello, 03.12.2009 20:03, Alessandro Bono wrote: Hi all, I'm trying to implement a backup strategy where every week a full backup is done to file storage; after that, a copy is made to tape for offsite storage (I'll use the mailslot of an HP StorageWorks 1/8), and finally the backup is migrated to tape. To summarize, I have 3 pools: diskpool, tapepool, offsitepool. I want to do a full backup to diskpool, copy the full backup from diskpool to offsitepool, and migrate the full backup from diskpool to tapepool. I tried, but I don't know how to specify NextPool correctly for the two operations, copy and migration. Is it possible to do this type of thing? Is there a better way to accomplish the same result?

As there is only one NextPool setting available, you'll have to work around that, but it is possible AFAIK. (By the way - on bacula-devel, there's some discussion regarding the introduction of overrides for the NextPool setting. You might be interested in joining that.) You can set up a fake pool with the second NextPool target, and then use SQLQuery as the job selection scheme. The NextPool is taken from the pool setting, and the actual jobs to move are selected by the SQL, choosing volumes from a different pool. Should work, though I haven't tested it yet.

We will want to do a very similar thing as well. It would be nice to be able to override NextPool in the Schedule section instead of having to go through that messy workaround.

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
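As I understand Arno's workaround, it would look something like the sketch below: the fake pool exists only to carry the second NextPool, and the SQLQuery selects jobs that actually live in the real disk pool. This is an incomplete, untested illustration — all names are invented, the query is only indicative, and a real Copy Job resource needs its other required directives as well:

```
Pool {
  Name = diskpool-to-offsite    # fake pool, never written to
  Pool Type = Backup
  Storage = disk-sd
  Next Pool = offsitepool       # carries the second NextPool target
}

Job {
  Name = Copy_To_Offsite
  Type = Copy
  Pool = diskpool-to-offsite    # NextPool is taken from this pool
  Selection Type = SQLQuery
  # Illustrative only: pick completed backup jobs out of the real pool.
  Selection Pattern = "SELECT DISTINCT Job.JobId FROM Job,Pool WHERE Pool.Name='diskpool' AND Pool.PoolId=Job.PoolId AND Job.Type='B' AND Job.JobStatus='T'"
}
```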
Re: [Bacula-users] [Bacula-devel] RFC: backing up hundreds of TB
purchase. That may give you an idea that you may not have had previously.

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
[Bacula-users] Changed behavior in 3.0.x?
I've upgraded from 2.4.x to 3.0.2, and now it seems that when jobs are spooling and the SD is despooling, no other jobs will spool until it has finished despooling. I had a very large job running and it started despooling. I was adding a new client with some client-run-before scripts, so I was testing them. The job ran the script, which executed, and then it sat there and waited. I thought that was weird, so I ran another small job without a script, and it also just sat there waiting. Both were listed as running, but the FDs were not transferring to the SD. As soon as the large job finished despooling, both jobs started transferring to the SD. The smaller jobs finished before the second job finished spooling, so I wasn't able to see whether both jobs spooled simultaneously or took turns. Before the upgrade I constantly had 4 jobs spooling, and I changed nothing to do with concurrency. Is this a bug?

Thanks,

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
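For anyone reviewing their own setup while chasing this: concurrent spooling requires Maximum Concurrent Jobs to be raised in several resources at once, and if any one of them is left at its default of 1, jobs serialize. A sketch of the relevant directives (values and paths are examples, and the "..." stands for the rest of each resource):

```
# bacula-dir.conf: both the Director and its Storage resource must
# allow more than one job, or everything serializes regardless of the SD.
Director {
  ...
  Maximum Concurrent Jobs = 20
}
Storage {
  ...
  Maximum Concurrent Jobs = 4
}

# bacula-sd.conf: the SD's own Storage resource has a separate limit,
# and the Device needs a spool directory/size for spooling at all.
Storage {
  ...
  Maximum Concurrent Jobs = 20
}
Device {
  ...
  Spool Directory = /var/spool/bacula   # example path
  Maximum Spool Size = 200G             # example size
}
```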
[Bacula-users] Volume retention with Migration
I've read through the docs and can't find a definitive answer to this. We back up to a Data Domain box, then migrate the jobs off to tape for archive after some period of time. It seems that if all the jobs are migrated off a volume, but the volume is not past its retention period, the volume is not recycled. What I want to do is keep the backup on the Data Domain box for 30 days and then migrate it off to tape. I've set the volume retention to 45 days, as our migration jobs have been taking a long time since the whole volume is read even for a few KB of data. I don't want the volume to be recycled before all the jobs are migrated, but I do want it recycled before the retention period expires once all the jobs have been migrated. Any ideas will be appreciated.

Thanks,

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
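One manual escape hatch, if the directives can't express "recycle as soon as empty": once you've confirmed that every job on a volume has been migrated, the volume can be pruned or purged by hand from bconsole, which makes it recyclable ahead of its retention period. A hypothetical session (the volume name is invented; note that purge skips all retention checks, so be certain nothing un-migrated remains first):

```
* prune volume=DD-0001    # applies the normal retention rules
* purge volume=DD-0001    # ignores retention entirely -- double-check first
```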
Re: [Bacula-users] Fwd: Migration job marking destination volume used!
On Mon, Aug 10, 2009 at 4:44 AM, Martin Simmons mar...@lispworks.com wrote:

On Fri, 7 Aug 2009 13:53:01 -0600, Robert LeBlanc said: It is really weird. I marked it as append, and then, when it filled up the second tape it was using with multiple migration jobs, it tried to use the first tape again since it was in the Append state. It gave the message: 07-Aug 09:31 babacula-dir JobId 40116: Max configured use duration exceeded. Marking Volume 43L3 as Used. The very odd thing is that it recycled this tape before the first migration job. There is a use duration of 20 days for tapes in this pool; I don't believe that 1.5 hours exceeds that. Again, the second tape that was used did not have this problem, and it was recycled as well. I've marked the tape Append again to see if it happens when the third tape is full. This is very confusing.

It does sound confusing. You could check that the tape itself has the correct use duration (the pool's use duration is only used to set the value for the tape when it is added to the catalog). The llist media pool= command will show this. __Martin

I did check the tape's retention and even reset it; still no luck. The migration jobs used three other tapes without a problem — just this one tape would not fill up. I guess I'll just let it go and hope that on the next recycle it will behave itself.

Thanks,

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
[Bacula-users] Fwd: Migration job marking destination volume used!
It is really weird. I marked it as append, and then, when it filled up the second tape it was using with multiple migration jobs, it tried to use the first tape again since it was in the Append state. It gave the message: 07-Aug 09:31 babacula-dir JobId 40116: Max configured use duration exceeded. Marking Volume 43L3 as Used. The very odd thing is that it recycled this tape before the first migration job. There is a use duration of 20 days for tapes in this pool; I don't believe that 1.5 hours exceeds that. Again, the second tape that was used did not have this problem, and it was recycled as well. I've marked the tape Append again to see if it happens when the third tape is full. This is very confusing.

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University

On Fri, Aug 7, 2009 at 1:28 PM, Martin Simmons mar...@lispworks.com wrote:

On Thu, 6 Aug 2009 14:17:00 -0600, Robert LeBlanc said: I'm attempting to migrate some backups from our Data Domain box to tape and am migrating our first set of jobs. Once it finished migrating the first job, it marked the LTO3 tape as Used with only 50GB written. I thought migration jobs would leave the volume in Append until it was full or other criteria (volume use duration, etc.) were satisfied. We are using 2.4.4. Any help on fixing this for the rest of the jobs?

Look in the Bacula log to see if it says *why* it was marked as used at that point. Also check the syslog for messages about the tape drive device. __Martin
[Bacula-users] Migration job marking destination volume used!
I'm attempting to migrate some backups from our Data Domain box to tape and am migrating our first set of jobs. Once it finished migrating the first job, it marked the LTO3 tape as Used with only 50GB written. I thought migration jobs would leave the volume in Append until it was full or other criteria (volume use duration, etc.) were satisfied. We are using 2.4.4. Any help on fixing this for the rest of the jobs?

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
[Bacula-users] Data Interleaving Question
We are running backups to a Data Domain box and we are not getting the dedup rate that we thought we should. I'm thinking that concurrent jobs may have something to do with it. I've read that with tape devices you can specify Minimum Block Size, but not for random-access devices. Is there a minimum chunk size for disk-based volumes? I tried setting the Use Volume Once option, but that effectively made concurrent jobs = 1. I'd hate to spool to the Data Domain box and then transfer the data back to the Data Domain device in a volume. We are running 2.4.4.

Thanks,

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
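One approach commonly suggested for keeping concurrent jobs from interleaving inside a single volume (interleaving mixes byte streams and hurts dedup) is to define several file Devices in the SD so that each concurrent job writes to its own device and therefore its own volume. This is an untested sketch — the names and the mount path are invented, and whether 2.4.4 juggles several file devices in one storage gracefully is something to verify:

```
# bacula-sd.conf sketch: two single-stream file devices pointing at the
# same directory; concurrent jobs each take their own device/volume, so
# streams are not interleaved within one volume.
Device {
  Name = DD-File-1
  Media Type = DDFile
  Archive Device = /backup/dd     # hypothetical mount of the DD box
  Random Access = yes
  Automatic Mount = yes
  Label Media = yes
}
Device {
  Name = DD-File-2
  Media Type = DDFile
  Archive Device = /backup/dd
  Random Access = yes
  Automatic Mount = yes
  Label Media = yes
}
```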
Re: [Bacula-users] Migration Job purging destination volumes!!
On Thu, Jun 25, 2009 at 5:53 PM, Robert LeBlanc rob...@leblancnet.us wrote:

I've set up a migration job to migrate jobs from one set of tape volumes to disk volumes. I've configured the destination pool to use each volume once and to have a retention period of 2 months. For some reason, when the migration job completes and gets to the next queued migration job, it marks the destination volume as purged and then overwrites its contents with the next job. The jobs from the source pool are being marked as purged, so the data is going straight into the bit bucket! I'm using version 2.4.4 from Debian, and here are the pool and job portions of my conf files.

Pool {
  Name = 454FLX
  Pool Type = Backup
  AutoPrune = yes
  Storage = Neo8000-LTO4
  VolumeRetention = 3 years
  Recycle = yes
  Next Pool = DD-454FLX
}

Pool {
  Name = DD-454FLX
  Pool Type = Backup
  LabelFormat = 454FLX-
  Recycle = yes
  AutoPrune = yes
  Storage = DD-454FLX
  Volume Retention = 2 months
  Use Volume Once = yes
}

Job {
  Name = Migrate_454FLX
  Type = Migrate
  Level = Full
  Client = 454datarig-fd
  FileSet = FULL Windows
  Messages = Standard
  Pool = 454FLX
  Maximum Concurrent Jobs = 4
  Selection Type = Volume
  Selection Pattern = .*L4
}

Here is a piece of the output that confirms my bit-bucket suspicion:

25-Jun 17:50 babacula-dir JobId 37438: Start Migration JobId 37438, Job=Migrate_454FLX.2009-06-25_15.15.54.27
25-Jun 17:50 babacula-dir JobId 37438: There are no more Jobs associated with Volume 454FLX-0169. Marking it purged.
25-Jun 17:50 babacula-dir JobId 37438: All records pruned from Volume 454FLX-0169; marking it Purged
25-Jun 17:50 babacula-dir JobId 37438: Recycled volume 454FLX-0169
25-Jun 17:50 babacula-dir JobId 37438: Using Device DD-454FLX
25-Jun 17:50 lsbacsd0-sd JobId 37438: Ready to read from volume 02L4 on device Drive-2 (/dev/tape/drive2).
25-Jun 17:50 lsbacsd0-sd JobId 37438: Recycled volume 454FLX-0169 on device DD-454FLX (/backup/pools/454FLX), all previous data lost.
25-Jun 17:51 babacula-dir JobId 37438: Volume used once. Marking Volume 454FLX-0169 as Used.
25-Jun 17:50 lsbacsd0-sd JobId 37438: Forward spacing Volume 02L4 to file:block 418:0.

So, two questions: 1. What am I doing wrong? 2. Is there an easy way to unpurge the jobs on the tapes, since they have not been recycled, or do I have to run bscan on them?

Thanks,

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University

Since no one offered any alternative suggestions, I've run bscan on all the tapes, but all the jobs are still showing as purged in the database. How do I get the jobs to be unpurged, short of manually changing the database?

Thanks,

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
[Bacula-users] Migration Job purging destination volumes!!
I've set up a migration job to migrate jobs from one set of tape volumes to disk volumes. I've configured the destination pool to use each volume once and to have a retention period of 2 months. For some reason, when the migration job completes and gets to the next queued migration job, it marks the destination volume as purged and then overwrites its contents with the next job. The jobs from the source pool are being marked as purged, so the data is going straight into the bit bucket! I'm using version 2.4.4 from Debian, and here are the pool and job portions of my conf files.

Pool {
  Name = 454FLX
  Pool Type = Backup
  AutoPrune = yes
  Storage = Neo8000-LTO4
  VolumeRetention = 3 years
  Recycle = yes
  Next Pool = DD-454FLX
}

Pool {
  Name = DD-454FLX
  Pool Type = Backup
  LabelFormat = 454FLX-
  Recycle = yes
  AutoPrune = yes
  Storage = DD-454FLX
  Volume Retention = 2 months
  Use Volume Once = yes
}

Job {
  Name = Migrate_454FLX
  Type = Migrate
  Level = Full
  Client = 454datarig-fd
  FileSet = FULL Windows
  Messages = Standard
  Pool = 454FLX
  Maximum Concurrent Jobs = 4
  Selection Type = Volume
  Selection Pattern = .*L4
}

Here is a piece of the output that confirms my bit-bucket suspicion:

25-Jun 17:50 babacula-dir JobId 37438: Start Migration JobId 37438, Job=Migrate_454FLX.2009-06-25_15.15.54.27
25-Jun 17:50 babacula-dir JobId 37438: There are no more Jobs associated with Volume 454FLX-0169. Marking it purged.
25-Jun 17:50 babacula-dir JobId 37438: All records pruned from Volume 454FLX-0169; marking it Purged
25-Jun 17:50 babacula-dir JobId 37438: Recycled volume 454FLX-0169
25-Jun 17:50 babacula-dir JobId 37438: Using Device DD-454FLX
25-Jun 17:50 lsbacsd0-sd JobId 37438: Ready to read from volume 02L4 on device Drive-2 (/dev/tape/drive2).
25-Jun 17:50 lsbacsd0-sd JobId 37438: Recycled volume 454FLX-0169 on device DD-454FLX (/backup/pools/454FLX), all previous data lost.
25-Jun 17:51 babacula-dir JobId 37438: Volume used once. Marking Volume 454FLX-0169 as Used.
25-Jun 17:50 lsbacsd0-sd JobId 37438: Forward spacing Volume 02L4 to file:block 418:0.

So, two questions: 1. What am I doing wrong? 2. Is there an easy way to unpurge the jobs on the tapes, since they have not been recycled, or do I have to run bscan on them?

Thanks,

Robert LeBlanc
Life Sciences Undergraduate Education Computer Support
Brigham Young University
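For what it's worth, the Bacula manual deprecates Use Volume Once in favor of Maximum Volume Jobs = 1. Whether that changes the purge behaviour seen in this thread is untested, but a destination pool using the non-deprecated form would look like this sketch (everything else as in the original):

```
Pool {
  Name = DD-454FLX
  Pool Type = Backup
  LabelFormat = 454FLX-
  Recycle = yes
  AutoPrune = yes
  Storage = DD-454FLX
  Volume Retention = 2 months
  Maximum Volume Jobs = 1   # replaces the deprecated Use Volume Once
}
```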
Re: [Bacula-users] BAT for windows ?
I believe bat has too big a dependency on Qt, and Qt is not easily ported to Windows yet. You may still have to wait a while for a Windows port. Robert

On Fri, Jun 5, 2009 at 7:39 AM, Olivier Delestre olivier.deles...@univ-rouen.fr wrote: Hi, does someone know where I can find bat for Windows? A binary, a how-to, ... Thanks, Olivier
Re: [Bacula-users] Rif: Bacula-FD for VMware ESXi - Server
I don't believe that this will work for ESXi, only ESX. I think ESXi is too stripped down and is missing some components needed to run bacula-fd. I have an ESXi machine, but I only do file-level backups inside the VMs at home. At work we have ESX, and I do disk-level backups with Bacula as well as file-level. Robert

From: Ferdinando Pasqualetti [mailto:fpasq...@ccci.it]
Sent: Tuesday, March 31, 2009 7:21 AM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Rif: Bacula-FD for VMware ESXi - Server

I have used an RPM for RHEL3: bacula-client-2.4.4-1.el3.i386.rpm. You need to modify or disable the VMware firewall. -- Ferdinando Pasqualetti, G.T.Dati srl, Tel. 0557310862 - 3356172731 - Fax 055720143

Masopust, Christian christian.masop...@siemens.com wrote on 31/03/2009 10.09.33: Hi there, has anybody already tried to run/build a bacula-fd for VMware ESXi - Server? What options are needed for configure? Thanks a lot, Christian

Christian Masopust, SIEMENS AG SIS PSE TMF, Tel: +43 (0) 5 1707 26866, E-mail: christian.masop...@siemens.com, Addr: Austria, 1210 Vienna, Siemensstraße 90-92, B. 33, Rm. 243, Leader of the RUGA. Firma: Siemens Aktiengesellschaft Österreich, Rechtsform: Aktiengesellschaft, Sitz: Wien, Firmenbuchnummer: FN 60562 m, Firmenbuchgericht: Handelsgericht Wien, DVR 0001708
Re: [Bacula-users] how restore a file with accent ?
On 3/26/09 10:39 AM, Graham Keeling gra...@equiinet.com wrote:

On Thu, Mar 26, 2009 at 05:19:19PM +0100, François Mehault wrote: Hi all, I would like to know how I could restore a file which has an accent in its name. I want to restore with bconsole. In bconsole:

* restore [...]
$ ls
h??h?? lol.txt
$ mark h
And I can't mark my file :s, and I can't rename the file (it is the file of my customer). Regards, François

Try this, or something similar: mark h* lol.txt

As an aside: the mark quoting in bconsole is quite whacky. I found that you need three backslashes to quote a backslash, two to quote each of *?[, and one to quote a double-quote!

I use and this seems to work for things like spaces. It sure would be nice to have tab completion like bash. In fact, I find myself hitting tab, just to have to backspace.

-- Robert LeBlanc
Life Sciences Computer Support
Brigham Young University
lebl...@byu.edu (801)422-1882
Re: [Bacula-users] Bacula with an empty tape drive (was: mtx-changerloaded issues)
Changed the subject to better reflect the current issue. Attempting to run bacula-sd or btape when my drive is empty results in the application hanging. I believe this is related to section 37.1.1 in the manual. Can anyone confirm that Bacula simply does not operate properly when configured to use a tape drive that does not have a tape in it? Are there any workarounds for this? -HKS

Relevant bacula-sd.conf:

Autochanger {
  Name = 124T-Autochanger
  Device = 124T-Drive
  Changer Command = /usr/local/libexec/bacula/mtx-changer %c %o %s %a %d
  Changer Device = /dev/ch0
}

Device {
  Name = 124T-Drive
  Drive Index = 0
  Media Type = 124T
  Archive Device = /dev/nrst0
  AutomaticMount = yes;      # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Hardware End of Medium = No
  Fast Forward Space File = No
  BSF at EOM = yes
}

Not sure why you are having such issues. I've used a Dell PV124T with only one tape drive and had no problems when the drive was empty. I am using btape right now and it doesn't like it if there is no tape. I haven't used the Dell library in over a year now. I did not have to modify any scripts to get it to work.

Robert LeBlanc
Life Sciences Computer Support
Brigham Young University
lebl...@byu.edu (801)422-1882
Re: [Bacula-users] Bacula with an empty tape drive (was:mtx-changerloaded issues)
Thanks for the responses. The trouble listed in my other thread is mostly due to OpenBSD's chio utility not keeping track of the slot from which the currently loaded tape was pulled. I'm unclear on your response, though. Are you saying that btape gives you problems if you don't have a tape loaded? If it's working, would you mind including your storage configs?

btape bombs out if there is not a tape loaded. I have to manually load a tape; also, if bacula-sd is running, I have to first release the drive, then manually load the tape, and then run btape. I thought Bacula kept track of where the tapes were loaded from. The reason I suspect this is that before I had udev map the drives based on WWID, tapes would get swapped when the drives did not come up in the same order. The mtx command showed the correct slot loaded in the drive, but Bacula would switch them.

Robert LeBlanc
Life Sciences Computer Support
Brigham Young University
lebl...@byu.edu (801)422-1882
Re: [Bacula-users] Bacula loses tape label?
So it seems this is related to the library losing power: during start-up it rewinds the tape, and Bacula then happily writes to the tape without knowing that it is not at the end of the data. Having Bacula always check would be a pain, because the best I can come up with is to rewind the tape and then forward-space to the end of the data, and having that happen on every job would be a nightmare. Unfortunately, I couldn't find a way for the LTO drive to report which file it was positioned at; that sure would be helpful in this case. It also seems that LTO keeps the last file record on the chip in the cartridge, so even if the bits are intact on the tape, the drive refuses to read them. Also, when bcopy is run, it can't find the volume label, so it dies without trying to read anything, although with btape's scanblocks command I could read all the Bacula blocks on the tape. This makes me very confused about what exactly Bacula can and can't read off the tape. If anyone has any other ideas I can try, I'd like to hear them. Otherwise, I guess I'll have to just recycle the volumes and hope that the data on those tapes isn't needed.

Robert LeBlanc
Life Sciences Computer Support
Brigham Young University
lebl...@byu.edu (801)422-1882

From: Robert LeBlanc
Sent: Saturday, March 07, 2009 10:25 PM
To: Bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Bacula loses tape label?

I've had my third instance now where Bacula has lost the tape label. This seems to happen after a power outage (our tape library goes out, but the server is on a UPS). What is really interesting is that when the power goes out, nothing is writing to the tape, but the tape is usually in the drive at the end of the data. When Bacula goes to write to the tape again, it tells me to insert the volume that is in the drive, or to label a new tape. I would just write a new label to it, but my understanding is that doing so also writes an EOF, basically blanking the tape and losing all the data on it.
Can someone give me an idea of how to get the data off or relabel the tape so that none of the data is lost? These are LTO4 tapes only using 100 GB or so; we are going to run through the tapes too fast if I just have to mark them as used. I'm also worried that I won't be able to restore any data off them, since the label cannot be read. Here are the btape commands to read the labels from two of the tapes; I can't remember the third tape that showed the problem.

Thanks,
Robert

btape: butil.c:285 Using device: /dev/tape/drive2 for writing.
07-Mar 22:12 btape JobId 0: 3301 Issuing autochanger loaded? drive 2 command.
07-Mar 22:12 btape JobId 0: 3302 Autochanger loaded? drive 2, result is Slot 101.
btape: btape.c:372 open device Drive-2 (/dev/tape/drive2): OK
*readlabel
btape: btape.c:422 Volume has no label.

Volume Label:
Id: **error**
VerNo: 0
VolName:
PrevVolName:
VolFile: 0
LabelType: Unknown 0
LabelSize: 0
PoolName:
MediaType:
PoolType:
HostName:
Date label written: -4712-01-01 at 00:00
*q

lsgw0:/home/leblanc# btape /dev/tape/drive2
Tape block granularity is 1024 bytes.
btape: butil.c:285 Using device: /dev/tape/drive2 for writing.
07-Mar 22:22 btape JobId 0: 3301 Issuing autochanger loaded? drive 2 command.
07-Mar 22:22 btape JobId 0: 3302 Autochanger loaded? drive 2, result is Slot 105.
btape: btape.c:372 open device Drive-2 (/dev/tape/drive2): OK
*readlabel
btape: btape.c:422 Volume has no label.
Volume Label:
Id: **error**
VerNo: 0
VolName:
PrevVolName:
VolFile: 0
LabelType: Unknown 0
LabelSize: 0
PoolName:
MediaType:
PoolType:
HostName:
Date label written: -4712-01-01 at 00:00
*
[Bacula-users] Bacula loses tape label?
I've had my third instance now where Bacula has lost the tape label. This seems to happen after a power outage (our tape library goes out, but the server is on UPS). What is really interesting is that when the power goes out, nothing is writing to the tape, but the tape is usually in the drive, positioned at the end of the data. When Bacula goes to write to the tape again, it tells me to insert the volume that is in the drive or label a new tape. I would just write a new label to it, but my understanding is that labeling also writes an EOF, basically blanking the tape and losing all the data on it. Can someone give me an idea of how to get the data off or relabel the tape so that none of the data is lost? These are LTO4 tapes with only 100 GB or so used; we are going to run through the tapes too fast if I just have to mark them as used. I'm also worried that I won't be able to restore any data off them since the label cannot be read. Here is the btape output reading the labels from two of the tapes; I can't remember the third tape that showed the problem. Thanks, Robert

btape: butil.c:285 Using device: /dev/tape/drive2 for writing.
07-Mar 22:12 btape JobId 0: 3301 Issuing autochanger "loaded? drive 2" command.
07-Mar 22:12 btape JobId 0: 3302 Autochanger "loaded? drive 2", result is Slot 101.
btape: btape.c:372 open device "Drive-2" (/dev/tape/drive2): OK
*readlabel
btape: btape.c:422 Volume has no label.
Volume Label:
Id                : **error**
VerNo             : 0
VolName           :
PrevVolName       :
VolFile           : 0
LabelType         : Unknown 0
LabelSize         : 0
PoolName          :
MediaType         :
PoolType          :
HostName          :
Date label written: -4712-01-01 at 00:00
*q

lsgw0:/home/leblanc# btape /dev/tape/drive2
Tape block granularity is 1024 bytes.
btape: butil.c:285 Using device: /dev/tape/drive2 for writing.
07-Mar 22:22 btape JobId 0: 3301 Issuing autochanger "loaded? drive 2" command.
07-Mar 22:22 btape JobId 0: 3302 Autochanger "loaded? drive 2", result is Slot 105.
btape: btape.c:372 open device "Drive-2" (/dev/tape/drive2): OK
*readlabel
btape: btape.c:422 Volume has no label.
Volume Label:
Id                : **error**
VerNo             : 0
VolName           :
PrevVolName       :
VolFile           : 0
LabelType         : Unknown 0
LabelSize         : 0
PoolName          :
MediaType         :
PoolType          :
HostName          :
Date label written: -4712-01-01 at 00:00
*
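For what it's worth, one possible recovery path is to check whether the data past the missing label is still readable with Bacula's bls tool, and if so copy it to a freshly labeled volume with bcopy rather than relabeling in place. This is only a sketch, not a tested procedure; the device paths are taken from the output above and the config path is an example:

```
# Sketch only -- verify against your setup before running.
mt -f /dev/tape/drive2 rewind
# List job records on the tape; if bls finds data, it is still readable
# even though readlabel fails.
bls -j -c /etc/bacula/bacula-sd.conf /dev/tape/drive2
# Copy the records to a second, freshly labeled volume in another drive.
bcopy -c /etc/bacula/bacula-sd.conf /dev/tape/drive2 /dev/tape/drive1
```

Note that bcopy only copies the data; the catalog would still need to be updated (e.g. via bscan) before restores reference the new volume.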
Re: [Bacula-users] Offsite backup solution
Hi Robert

If you are working on it, try to use the PDO extension. This could greatly broaden the user base that would be interested:

Use bacula with sqlite - config in sqlite via PDO
Use bacula with mysql - config in mysql via PDO
Use bacula with postgresql - config in postgresql via PDO
Use bacula with oracle - config in oracle via PDO
etc.

But with all the options present in Bacula, changing from version to version, and the possibility of running the Director at one version, a client at another, and the SD at a third (even if not recommended), I imagine that would give you too much work. vi, emacs [put the name of your favorite text editor] rocks. In the case of a Bacula GUI? There's gedit, kate, xterm+vi :-)

I'm using Symfony with Propel, but I'll look into PDO. The idea is that this will be very flexible, so that anyone could easily add new directives without touching the code. That way, new features in the future don't need to wait for the config tool. Robert
Re: [Bacula-users] Offsite backup solution
P.S. Hasn't anybody created a graphical configuration program for bacula yet? ^^

I'm working on one using PHP and MySQL. I'm hoping to be able to pull the configuration straight from MySQL for the Director and SD. The FD doesn't change much, so I was going to just spit out a file to put on the FD. -- Robert LeBlanc Life Sciences Computer Support Brigham Young University lebl...@byu.edu (801)422-1882
Re: [Bacula-users] Debian/Ubuntu and openssl
Historically, it has been my experience that the Debian package maintainer for Bacula has only packaged release code. Looking at http://packages.qa.debian.org/b/bacula.html shows that the unstable version (2.4.4-1) is the same as Lenny (testing), so there would be no way to find a newer version on an official Debian mirror. Someone may have debs that they built on their own mirror that you could add to your apt sources, but I don't know of any. If you can build your own debs, install them with dpkg -i as mentioned before. If you make your version number higher than what is in apt, it will stay installed until a higher version comes out (you could make it 2.9 so that when 3.0 comes out, the official Debian packages will install automatically). The other option is to give it a different package name (myBacula) and make it conflict with bacula; when the new version comes out, you will have to force-install the one from Debian. Robert

Thank you Kern for responding. I think you are saying that if I can find the right Debian test or development repository, I will have a working version of bacula. I will look around. I have been trying to build bacula with encryption support, which seems to work. I have so far failed at getting the Debian package management to install my packages instead of the main repositories'. When I understand that, I will know more about Debian repository management. Perhaps I will get their test repositories to work before my test repositories.
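One way to express the "keep my local build until I decide otherwise" idea above is an apt pin; this is a sketch, and the package glob and version string are examples rather than values from the thread:

```
# /etc/apt/preferences.d/bacula  (example values)
Package: bacula*
Pin: version 2.4.4-1local*
Pin-Priority: 1001
```

A pin priority above 1000 keeps the pinned version installed even when the archive offers a higher one, so the locally built packages survive routine upgrades until the pin is removed.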
Re: [Bacula-users] [Bacula-devel] Feature Request: Implement an 'Volume Append Duration' pool directive
I've run into a similar problem to what he seems to be describing. When I started using Bacula, I set Volume Use Duration to 24 hours for my monthly backups because I want to be able to take the tapes off site and not have to worry about mixing months. Well, our fulls took longer than 24 hours to complete, so we would have a half-empty tape marked as used because 24 hours had elapsed since it started writing. The backup would continue on the next tape, even though the remainder would have fit on the previous one. For us, upping the Volume Use Duration to 1 week solved the problem, but where time is a bit more critical I can see where he is coming from. Saying "use a tape as long as it was last written to within x hours" would help jobs that take a long time from start to finish (i.e. a lot of jobs). Robert LeBlanc College of Life Sciences Computer Support Brigham Young University (801) 422-1882 lebl...@byu.edu

-Original Message- From: Jean Gobin [mailto:jgo...@strozllc.com] Sent: Wednesday, February 11, 2009 2:30 PM To: Kern Sibbald; bacula-de...@lists.sourceforge.net Cc: bacula-users Subject: Re: [Bacula-users] [Bacula-devel] Feature Request: Implement an 'Volume Append Duration' pool directive

I think what he wants is a way to make sure a tape is closed to, say, start a week with a fresh tape. Pretty easy to do with different pools/schedules/jobs, actually. Jean F. Gobin Network Administrator Tel: 212.542.3175 Mobile: 917.213.2532 Fax: 212.981.6545 32 Avenue of the Americas, 4th Floor, New York, NY 10013 jgo...@strozllc.com www.strozllc.com STROZ FRIEDBERG
-Original Message- From: Kern Sibbald [mailto:k...@sibbald.com] Sent: Wednesday, February 11, 2009 4:20 PM To: bacula-de...@lists.sourceforge.net Cc: bacula-users Subject: Re: [Bacula-users] [Bacula-devel] Feature Request: Implement an 'Volume Append Duration' pool directive

Unfortunately, I don't understand the explanation below -- more precisely, I don't understand what it fixes. Can anyone explain the need for this feature request to me? I just cannot grasp what purpose it serves to mark a tape Used when it has been unused for a certain period. It also seems to me that ANDing together two different directives is a new concept, which could add complexity to the existing plethora of directives. Regards, Kern

On Wednesday 11 February 2009 18:38:15 Brian Debelius wrote: I rotate sets of tapes each day. I would like the last tape used in the backup to be marked used after a certain amount of time has elapsed. It matters not to me when the volume was first written to. For me, Volume Use Duration marks tapes used too early if there is a hiccup in the backup run (and then requires an extra tape when none is needed), or too late, and Bacula wants to use the last tape for the next backup run.

Kern Sibbald wrote: I don't see the need for this feature. With the current code, there is no harm if the tape is marked used while a job is running, so please explain why this feature is needed. Regards, Kern

Item 1: Implement a 'Volume Append Duration' pool directive Origin: Brian Debelius, bdebelius at intelesyscorp.com Date: 5 February 2009 What: A 'Volume Append Duration' pool directive.
This directive would set a window of time after the last write to a tape, after which the tape is marked used. This would be a complement to the Volume Use Duration directive. Why: Sometimes when a job pauses for whatever reason, the backup run is interrupted long enough for the Volume Use Duration to be exceeded, and the currently loaded tape is marked used before the entire backup run is complete. With Volume Append Duration, I can set the tape to be marked used after x hours of no use. This would give more flexibility in determining when a tape gets marked used. Notes: Either Volume Append Duration or Volume Use Duration may be used, or both may be used in a pool definition. If both are used, they are ANDed together, in that both must be true for the tape to be marked used. Best regards, Kern
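For reference, the proposal amounts to a Pool resource like the following sketch. Volume Use Duration is a real Bacula directive; Volume Append Duration is only the proposed one and does not exist, and the pool name is invented:

```
Pool {
  Name = DailyRotation
  Pool Type = Backup
  Volume Use Duration = 24 hours      # existing: window from the FIRST write
  # Proposed only -- NOT valid Bacula syntax:
  # Volume Append Duration = 4 hours  # window from the most recent write
  # If both were set, both would have to expire (ANDed) before the
  # volume is marked Used.
}
```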
Re: [Bacula-users] How can I prevent this?
-Original Message- From: Brian Debelius [mailto:bdebel...@intelesyscorp.com] Sent: Friday, February 06, 2009 7:22 AM To: Arno Lehmann Cc: bacula-users Subject: Re: [Bacula-users] How can I prevent this?

Arno Lehmann wrote: There are some ideas floating around in the list archives - the most interesting one, to me, is to use a run-before-job script that, using bconsole, checks which jobs are currently running and aborts with an error code if its own name is in the list. Then, you can wait for the next major version of Bacula, where there will be configuration directives to modify the scheduling and running behaviour in this case. Arno

Ok, I can wait. Are the new directives documented yet?

Yes, please see http://www.bacula.org/manuals/en/concepts/concepts/New_Features.html#SECTION0057
Re: [Bacula-users] mixed autochanger configuration LTO2 // LTO3
I will try to answer you on the other thread on how to fix this labeling. And please excuse me if I am not being descriptive enough; I am very sleepy (had to get up early this morning to shovel snow/ice so the SO could get her car out of the driveway without hitting my car...). John

No, thanks a lot! Hope you're well... In fact, a mixed configuration is a very bad idea, so I will remove the LTO2 drive; it will be better -- simpler is better.

On 21 Jan 2009 I wrote the following concerning adding an LTO4 drive to our autochanger with two LTO3 drives. So far it is working fine. We just added a new LTO4 drive to our Neo8000 with two existing LTO3 drives. I thought I would just have to add the drive into the device section of the SD with a new Media Label and things would be set to go. Well, the director wants to know what media type the SD is. I've created a second SD block in the dir pointing to the same SD with a different name and Media Type. Doing a stat storage on both results in the same info. Can someone please help me to make sure I'm not shooting myself in the foot. I was able to label a tape using the LTO3 SD directive with the LTO4 device, but it did show up as LTO3 media despite the device setting. Relabeling the tape with the second directive did label the tape as LTO4.
Bacula-sd.conf:

Autochanger {
  Name = Neo8000
  Device = Drive-0
  Device = Drive-1
  Device = Drive-2
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/tape/neo8000
}

Device {
  Name = Drive-0
  Drive Index = 0
  Media Type = LTO3
  Archive Device = /dev/tape/drive0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Spool Directory = /backup/spool
  Maximum Network Buffer Size = 65536
  # Enable the Alert command only if you have the mtx package loaded
  # Alert Command = "sh -c 'tapeinfo -f %c | grep TapeAlert | cat'"
  # If you have smartctl, enable this, it has more info than tapeinfo
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}

Device {
  Name = Drive-1
  Drive Index = 1
  Media Type = LTO3
  Archive Device = /dev/tape/drive1
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Spool Directory = /backup/spool
  Maximum Network Buffer Size = 65536
  # Enable the Alert command only if you have the mtx package loaded
  # Alert Command = "sh -c 'tapeinfo -f %c | grep TapeAlert | cat'"
  # If you have smartctl, enable this, it has more info than tapeinfo
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}

Device {
  Name = Drive-2
  Drive Index = 2
  Media Type = LTO4
  Archive Device = /dev/tape/drive2
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Spool Directory = /backup/spool
  Maximum Network Buffer Size = 65536
  # Enable the Alert command only if you have the mtx package loaded
  # Alert Command = "sh -c 'tapeinfo -f %c | grep TapeAlert | cat'"
  # If you have smartctl, enable this, it has more info than tapeinfo
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}

Bacula-dir.conf:

Storage {
  Name = Neo8000
  Address = 192.168.3.18   # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "mysecretpassword"
  Media Type = LTO3        # must be same as Media Type in Storage daemon
  Device = Neo8000
  Autochanger = yes        # enable for autochanger device
  Maximum Concurrent Jobs = 2
}

Storage {
  Name = Neo8000-LTO4
  Address = 192.168.3.18   # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "mysecretpassword"
  Media Type = LTO4        # must be same as Media Type in Storage daemon
  Device = Neo8000
  Autochanger = yes        # enable for autochanger device
  Maximum Concurrent Jobs = 2
}

Robert LeBlanc College of Life Sciences Computer Support Brigham Young University (801) 422-1882 lebl...@byu.edu
Re: [Bacula-users] Mixed Drives
This is something I raised at least 2-3 years ago. There's been no apparent interest in solving the issue, and likely won't be until one of the developers encounters the problem.

I'd like to see that supported, but it's not critical... and there are probably a huge number of vendor-specific corner cases.

What I like about what I specified in my earlier e-mail is that the user determines which compatibility they want. If they only want to read a previous-generation tape, even though the drive could write to it, they can specify that. That would prevent having to code some nasty stuff to cover all the corner cases. Robert LeBlanc College of Life Sciences Computer Support Brigham Young University (801) 422-1882 lebl...@byu.edu
Re: [Bacula-users] Mixed Drives
Anyway, personally, I'd like to see Alan's suggestions implemented, but neither I nor any of my customers has a setup where that would be needed... the original poster might think about how they could support development here :-)

Believe me, I'd love to support the development of Bacula, especially in this area. Unfortunately, we have a serious shortage of manpower in my area to dedicate to coding. I would be willing to help with testing up to a point: as we only have the one library, and it is used for production, it would have to be sandboxed to certain tapes for testing purposes. The overall idea I've had is that when a job runs, it finds the pool to put/get the data and knows the SD; the DIR queries the SD to find the supported media for its devices, then selects a drive that matches the media of the volume. This would remove the Media Type directive from the Storage resource in the Director config. To extend this to support backwards compatibility, the media directive could be split into media-write and media-read directives, each allowing either multiple directives or a delimited list of media the drive supports. This list would be fed back to the Director when it queries the SD for devices, so it can choose an appropriate device. When we have time and resources we will look into coding it, but don't hold your breath on that being anytime soon. Robert LeBlanc College of Life Sciences Computer Support Brigham Young University (801) 422-1882 lebl...@byu.edu
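As a concrete sketch of that proposal, an SD Device resource might advertise its capabilities as below. These directives are hypothetical -- they do not exist in Bacula -- and are shown only to make the media-read/media-write split tangible:

```
Device {
  Name = Drive-2
  Archive Device = /dev/tape/drive2
  # Hypothetical, NOT valid Bacula syntax:
  # Media Write = LTO4          # generations this drive can write
  # Media Read  = LTO3, LTO4    # generations this drive can read
}
```

The Director would then match a volume's media type against the read or write list when picking a drive, instead of relying on the single Media Type in its Storage resource.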
[Bacula-users] Mixed Drives
We just added a new LTO4 drive to our Neo8000 with two existing LTO3 drives. I thought I would just have to add the drive into the device section of the SD with a new Media Label and things would be set to go. Well, the director wants to know what media type the SD is. I've created a second SD block in the dir pointing to the same SD with a different name and Media Type. Doing a stat storage on both results in the same info. Can someone please help me to make sure I'm not shooting myself in the foot. I was able to label a tape using the LTO3 SD directive with the LTO4 device, but it did show up as LTO3 media despite the device setting. Relabeling the tape with the second directive did label the tape as LTO4.

Bacula-sd.conf:

Autochanger {
  Name = Neo8000
  Device = Drive-0
  Device = Drive-1
  Device = Drive-2
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/tape/neo8000
}

Device {
  Name = Drive-0
  Drive Index = 0
  Media Type = LTO3
  Archive Device = /dev/tape/drive0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Spool Directory = /backup/spool
  Maximum Network Buffer Size = 65536
  # Enable the Alert command only if you have the mtx package loaded
  # Alert Command = "sh -c 'tapeinfo -f %c | grep TapeAlert | cat'"
  # If you have smartctl, enable this, it has more info than tapeinfo
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}

Device {
  Name = Drive-1
  Drive Index = 1
  Media Type = LTO3
  Archive Device = /dev/tape/drive1
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Spool Directory = /backup/spool
  Maximum Network Buffer Size = 65536
  # Enable the Alert command only if you have the mtx package loaded
  # Alert Command = "sh -c 'tapeinfo -f %c | grep TapeAlert | cat'"
  # If you have smartctl, enable this, it has more info than tapeinfo
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}

Device {
  Name = Drive-2
  Drive Index = 2
  Media Type = LTO4
  Archive Device = /dev/tape/drive2
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Spool Directory = /backup/spool
  Maximum Network Buffer Size = 65536
  # Enable the Alert command only if you have the mtx package loaded
  # Alert Command = "sh -c 'tapeinfo -f %c | grep TapeAlert | cat'"
  # If you have smartctl, enable this, it has more info than tapeinfo
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}

Bacula-dir.conf:

Storage {
  Name = Neo8000
  Address = 192.168.3.18   # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "mysecretpassword"
  Media Type = LTO3        # must be same as Media Type in Storage daemon
  Device = Neo8000
  Autochanger = yes        # enable for autochanger device
  Maximum Concurrent Jobs = 2
}

Storage {
  Name = Neo8000-LTO4
  Address = 192.168.3.18   # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "mysecretpassword"
  Media Type = LTO4        # must be same as Media Type in Storage daemon
  Device = Neo8000
  Autochanger = yes        # enable for autochanger device
  Maximum Concurrent Jobs = 2
}

Robert LeBlanc College of Life Sciences Computer Support Brigham Young University (801) 422-1882 lebl...@byu.edu
Re: [Bacula-users] Rif: Re: LTO3 performance
Hi, I really appreciate the discussion about the advantages and disadvantages of compression, but please bear in mind that I am not using compression *in any way*. So could you please help me solve this problem? Below are my config files (a bit shortened, and changed in obvious parts like Password), the job report, and info from mt and tapeinfo. Some clarifications: DirectorandStorageServerThatHasDat72 is the Director and storage server that has the DAT 72 ;) Bulldog and Clark are Bacula clients. StorageServerThasHasLTO3 is the storage server that has the LTO3 ;)

I'm not sure if anyone has brought this up, but it looks like you are using spooling. I don't see any concurrent-job options, which would default to only running one job at a time. The speed of your connection from the FD to the SD, and the speed and size of the spool disks, also contribute to the overall rate. What I look for is despool time and speed; that gives me an indication of how fast my LTO-3 drives are during jobs. If your spool space is not large enough to hold the entire job, you may notice a very bad transfer rate, because I think Bacula starts the clock at the first despool and stops it at the end of the last despool. I'm not an expert, but just jumping into this thread, that is what I can see.
Re: [Bacula-users] Can bacula recognise newly added volumes while waiting for another volume?
-Original Message- From: Dan Langille [mailto:d...@langille.org] Sent: Friday, January 09, 2009 4:59 PM To: Nils Blanck-Wehde Cc: bacula-users@lists.sourceforge.net Subject: Re: [Bacula-users] Can bacula recognise newly added volumes while waiting for another volume?

Nils Blanck-Wehde wrote: Hello everyone! Please excuse the crappy topic, I couldn't think of a better one... Bacula just asked me to load a specific volume or label a new one. This is totally OK, as none of the available volumes has exceeded the retention period. In order to give Bacula an appendable volume, I manually purged one of the remaining volumes. As a result, there now is in fact an appendable volume from the right pool available. Is there any way to make Bacula recognise the newly purged volume and use it for the pending job?

Use the mount command. Mount that volume.

When I had this happen, I issued the mount command without specifying a slot or volume, and it found the newly purged volume automatically and used it. Robert
Re: [Bacula-users] Fibre Channel drives keep switching
-Original Message- From: Alan Brown [mailto:[EMAIL PROTECTED] Sent: Thursday, November 20, 2008 5:03 AM To: Cedric Devillers Cc: John Drescher; bacula-users@lists.sourceforge.net Subject: Re: [Bacula-users] Fibre Channel drives keep switching

On Thu, 20 Nov 2008, Cedric Devillers wrote: too) you can use /dev/tape/by-id/scsi-XX-nst; that should be fixed by using the specific id (instead of XX).

Looking at that directory, it's only been created for the first tape drive. The others have not been picked up. Debian Lenny only picked up the changer in /dev/tape/by-id, but /dev/tape/by-path had both tape drives; the names are horribly long: pci-:01:04.0-fc-0x500110a00058bd40:0x-nst-nst pci-:01:04.0-fc-0x500110a00058bd40:0x-st pci-:01:04.0-fc-0x500110a00058c2f0:0x-nst-nst pci-:01:04.0-fc-0x500110a00058c2f0:0x-st The udev rule for the friendly name seems worth the effort to me anyway. Robert

Robert LeBlanc College of Life Sciences Computer Support Brigham Young University (801) 422-1882 [EMAIL PROTECTED]
[Bacula-users] Fibre Channel drives keep switching
We have an Overland Storage Neo 8000 with two FC LTO3 drives, and almost every time the SD host is rebooted, the drives swap places. Drive0 will be /dev/st0 and Drive1 will be /dev/st1; then sometimes after a reboot Drive0 will be /dev/st1 and Drive1 will be /dev/st0. It doesn't happen all the time, and I thought udev would take care of keeping the drives the same across reboots, like it does for Ethernet cards. Does anyone have some wisdom they can impart on this problem? Relevant part of bacula-sd.conf:

Autochanger {
  Name = Neo8000
  Device = Drive-0
  Device = Drive-1
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/sg4
}

Device {
  Name = Drive-0
  Drive Index = 0
  Media Type = LTO3
  Archive Device = /dev/st1
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Spool Directory = /backup/spool
  Maximum Network Buffer Size = 65536
  # Enable the Alert command only if you have the mtx package loaded
  # Alert Command = "sh -c 'tapeinfo -f %c | grep TapeAlert | cat'"
  # If you have smartctl, enable this, it has more info than tapeinfo
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}

Device {
  Name = Drive-1
  Drive Index = 1
  Media Type = LTO3
  Archive Device = /dev/st0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Spool Directory = /backup/spool
  Maximum Network Buffer Size = 65536
  # Enable the Alert command only if you have the mtx package loaded
  # Alert Command = "sh -c 'tapeinfo -f %c | grep TapeAlert | cat'"
  # If you have smartctl, enable this, it has more info than tapeinfo
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}

Thanks, Robert

Robert LeBlanc College of Life Sciences Computer Support Brigham Young University (801) 422-1882 [EMAIL PROTECTED]
Re: [Bacula-users] Fibre Channel drives keep switching
You can write a udev rule to lock the drives down. I wrote one a while back to keep the changer device the same:

dev6 ~ # cat /etc/udev/rules.d/55-bacula.rules
SUBSYSTEM=="scsi", ATTRS{vendor}=="EXABYTE*", ATTRS{type}=="8", SYMLINK+="autochanger1 changer"

BTW, is this swapping of drives causing any real problems?

I'm not having a problem with the changer, only with the drives in the changer, as they are separate targets and not LUNs of the changer. This causes great problems for Bacula. What will happen is: Bacula needs a tape in Drive0, but unloads the tape in Drive1 into the slot for Drive0 and loads the correct tape into Drive1, then finds that the wrong tape is in Drive0 and waits for a mount of the correct tape. I have to stop the SD, swap the /dev/st[01] entries, restart the SD, and perform an 'update slots'; after a few jobs crash, they start working again. After the jobs run, I have to manually move tapes around to get them back into the correct slots and do another 'update slots'. This is because part of Bacula interacts with the tape drives as /dev/st* (tape changing, etc.) and part of Bacula (job scheduling, etc.) uses Drive*. Robert
Re: [Bacula-users] Fibre Channel drives keep switching
-----Original Message-----
From: John Drescher [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 19, 2008 9:35 AM
To: Robert LeBlanc
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Fibre Channel drives keep switching

> I see. I think the problem is mtx will load the tape into the first real
> drive in the changer, and with /dev/nst0 and /dev/nst1 switched, Bacula
> will look for the tape in the wrong drive when /dev/nst1 is the first
> drive in the changer.

That is the exact problem I am seeing. I couldn't put it into those words for some reason.

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
(801) 422-1882
[EMAIL PROTECTED]
Re: [Bacula-users] Fibre Channel drives keep switching
Thanks for all the help. I decided to use the WWID of the drives, as it was about the only clearly unique thing I could find. It is also robust to changes in the FC fabric. I've created /etc/udev/rules.d/local.rules with the following:

# tape changer
KERNEL=="sg*[0-9]", ENV{ID_SERIAL}=="200900d0715a2007a", SYMLINK+="tape/neo8000"
# first tape drive by WWID
KERNEL=="st*[0-9]", ENV{ID_PATH}=="*500110a00058bd40*", SYMLINK+="tape/drive0"
# second tape drive by WWID
KERNEL=="st*[0-9]", ENV{ID_PATH}=="*500110a00058c2f0*", SYMLINK+="tape/drive1"

Hope that can be of help to anyone else. I spent most of the day chasing this tail.

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
(801) 422-1882
[EMAIL PROTECTED]

-----Original Message-----
From: Alan Brown [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 19, 2008 10:37 AM
To: Robert LeBlanc
Cc: John Drescher; bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Fibre Channel drives keep switching

On Wed, 19 Nov 2008, Robert LeBlanc wrote:

> You can write a udev rule to lock the drives down. I wrote one a while
> back to keep the changer device the same.
>
> dev6 ~ # cat /etc/udev/rules.d/55-bacula.rules
> SUBSYSTEM=="scsi", ATTRS{vendor}=="EXABYTE*", ATTRS{type}=="8", SYMLINK+="autochanger1 changer"

For the drives, you're better off creating udev rules to create something like /dev/scsi/ntape/{drive-WWID} and /dev/scsi/generics/{WWID}. These won't change no matter where on the scsi/fabric the drive is. I've had a request ticket in with Redhat to implement this on RHEL4/5 for a long time, ditto with the /dev/sg/ devices for the same reason. (Not to mention a strong wish for multipath support for tapes and generics...)
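With stable symlinks like these in place, the SD's Device resources can point at the links rather than the raw st nodes, so drive swapping at boot no longer matters. A sketch of the changed directive (note the rules above match the rewinding st devices; Bacula normally wants the non-rewinding device, so a matching rule on KERNEL=="nst*[0-9]" may also be needed):

```
Device {
  Name = Drive-0
  Media Type = LTO3
  Archive Device = /dev/tape/drive0   # udev symlink instead of /dev/st*
  ...
}
```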
Re: [Bacula-users] BACULA/VMWARE crashes entire system [RESURRECTED]
I've been running Bacula with VMware Server 1.2 for a week now with no problems. I run 4 VMs on the same machine that does the back-up. No hiccups of any kind. Running Debian Lenny (2.6.23) on a Dell Optiplex GX270.

Robert

On 12/9/07 10:21 AM, Scott Ruckh [EMAIL PROTECTED] wrote:

> On 9/19/07 10:22 AM, Scott Ruckh [EMAIL PROTECTED] wrote:
>> I am running CentOS 4.5 x64 with self-compiled kernel 2.6.13.4. I have installed the latest Bacula 2.2.4 Director, Client, and Storage daemon on this server. Bacula runs flawlessly most of the time. Unfortunately, when I run VMware Workstation and have a virtual machine running, Bacula crashes the entire system. I am running the latest version of the 5.x series of VMware. I am also running fluxbox as my window manager, although I don't think that has anything to do with the problem. This problem has existed with all versions of Bacula starting with 1.3.8.11, which is the first version I installed.
>> My backups are to an external USB disk connected to the host OS. The virtual machine is not configured with a USB port, because at one time I thought there might be contention between the physical USB disk and the USB port configured in the virtual machine. All virtual machines and the host system work fine when Bacula backups are not running. I have tested this with all sorts of guest virtual machines and the results are the same; Bacula will crash the host machine (leaving it completely unusable). The only recovery method is to reboot the server.
>> Is anyone else successfully running a similar environment? As Bacula is the only program that appears to cause the problem, I am assuming it is a problem with Bacula. If anyone has this type of environment working successfully I would like to hear about it. Thanks.
>
> I am now running Bacula 2.2.6 built from source RPMs. Now I had a crash with no VMware running. I did not even have an X session running. This is two times in two weeks where the system crashes while Bacula is running. The crash completely shuts the machine off; it is not just in a hung state. There is really nothing in the system logs except that the database backup job that runs before the backup kicked off and ran successfully. After that, there is nothing in the logs. This is the first time this has happened when a VMware guest OS was not running. Prior to the past two weeks, backups had been running fine for about 29 days. The crash is random as far as when it occurs, but it is a constant problem. I don't believe I am running an abnormal server configuration, but this hard crash is very annoying. Anyone have any more hints or suggestions to try?

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
Re: [Bacula-users] All the problems with Bacula
On 11/21/07 7:11 AM, Foo Bar [EMAIL PROTECTED] wrote:

> all set up (I can't get BAT to compile for example, hopefully the Debian stable package will be updated soon).

Debian stable will never get the new version until Lenny is released as stable. I've used the testing and sid packages in Lenny without any problems. Beware that 2.2.5 is stuck in sid until the new version of qt4 fixes its dependencies.

Robert

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
Re: [Bacula-users] All the problems with Bacula
On 11/21/07 9:02 AM, Wes Hardaker [EMAIL PROTECTED] wrote:

> Final word: I love bacula and wouldn't switch away from it if I had a choice. But I'm still on the learning curve, and it isn't small. But very very worth it. Thanks again!

I've seen Bacula as an enterprise-level product. Everything I've done that is enterprise level has had a steep learning curve, and Bacula is no different. It is extremely flexible, and being able to separate the components (director, database and storage) is one of its strong points, I think. That also comes with the downside of a lot more configuration (you have to get the director, database and storage all talking, even if they are on the same machine).

Any time I start on some new enterprise thing, I try to understand the concepts as much as possible before starting on the configuration. Once I dedicated the time to Bacula, I had it backing up 20 machines (Linux, Windows and Mac) using an LTO2 tape library on virtual hardware in about 2 weeks. I attribute that to understanding the concepts before I started configuring. I had a student working with me and I had to teach him the concepts, so it helped me understand them better. I strongly suggest teaching someone you work with how Bacula works; you will understand it better even if they never use it.

I know Shon had it working before he moved it to production and then ran into problems. I had my fair share of problems when we went to production as well. Some things you just learn the hard way (like what happens when you reload the director config while a job is running, or making sure you have enough disk space for your SQL database). Sometimes it just requires classes from the School of Hard Knocks. That is where your experience can help others in the future, either through the list or your revisions to the manual.

Stick with Bacula and you will be happy with it. Ask questions. I've asked questions that no one has answered. Does that mean the product stinks? No, I've just done some things that no one else has encountered, and they didn't have an answer. Sometimes I've answered my own questions and posted back to the list with the solution so that there is a history of it. This group is very helpful if you ask.

Robert

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
[Bacula-users] Includes in conf file
It would be nice if the @filename directive for including other files in the director conf file could be expanded to directories. I've set up my Bacula directory to hold the client configurations in /etc/bacula/clients-available, and then in /etc/bacula/clients-enabled I symlink to clients-available. If I could include a directory, then I could easily remove a client (storage, fileset, job, etc.) without having to remove or comment out the config file: just remove the symlink. Debian does this with Apache, and I think it's a good approach for managing lots of sites, clients, etc.

Robert

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
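There is a partial workaround worth checking in your version's manual: the @ include can read from a piped command as well as a file. A sketch, assuming the clients-enabled symlink directory described above:

```
# In bacula-dir.conf: include the output of a command instead of a file.
# clients-enabled is this message's naming scheme, not a Bacula default.
@|"sh -c 'cat /etc/bacula/clients-enabled/*.conf'"
```

Removing a client then really is just removing its symlink, as with Apache's sites-enabled.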
Re: [Bacula-users] restore backups to another server
Don't reuse the client definition. Just set up a new client definition for the new server so that both are contactable by the console. Be sure that the passwords match between the fd config and the dir config. Once you can do an estimate on the new server, type the restore command. Select the old client; then, after you have selected the files and before you answer yes to run the job, use the modify option and change the client to the new server. Run the job and the files will be restored to the new machine. I do this all the time; in fact, I just recovered a machine by restoring the fd config to a different machine, then got Bacula up and running, and then restored all the files back to the machine.

Robert

On 11/17/07 1:23 PM, daniel [EMAIL PROTECTED] wrote:

> Hi! I made some backups from one win server. All done. I'm trying to restore some files from a backup on another win server. Is this possible? I installed bacula-win on the server where I want to restore all these files, I changed the IP of the client in bacula-dir.conf and I tried to restore, but it doesn't work. How can I do this? Thanks in advance for any advice.

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
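In bconsole the sequence above looks roughly like this (prompts abbreviated, client names made up):

```
* restore
  (select the old client, choose the backup, mark the files, then done)
OK to run? (yes/mod/no): mod
  (select the Client parameter and choose the new client, e.g. new-server-fd)
OK to run? (yes/mod/no): yes
```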
Re: [Bacula-users] restore backups to another server
I guess my post was a bit confusing. I only have one director; anything installed on the other server would be a client. Your list of instructions was exactly what I was trying to convey; I think you did a better job. Thanks for clearing up what I may have confused Daniel with.

Robert

On 11/17/07 3:26 PM, Michael Lewinger [EMAIL PROTECTED] wrote:

> Hello Robert,
> You do NOT want to restore with bacula-dir, you only need to install the client on the other windows server. The restore is performed on the director machine (the original bacula director daemon).
> 1) connect to director
> 2) install bacula-fd on the other server
> 3) open bacula-dir.conf on the main server
> 4) define the new client (the new server) as per the bacula-fd definitions in step 2 and save
> from the next step on, in bconsole:
> 5) type reload
> 6) type restore
> 7) choose the job/client you wish to restore
> 8) mark files
> 9) after all done, and before you proceed with restore, type mod
> 10) choose the NEWLY DEFINED client (as per step 4)
> 11) type yes and wait for the restore to proceed over the net to the new server.
> Good luck !
> Michael

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
[Bacula-users] Feature request for BAT
I've been using BAT for about a month now and love it. One request that would be very helpful: add an item to the context menu on the JobList page. I use that page to find which jobs have issues, and it would be very handy to right-click on a failed job and select "rerun job". BAT is a great product; thanks for all the hard work.

Robert

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
Re: [Bacula-users] Backing up MS SQL server with Bacula?
We put MS SQL in full recovery mode, then have a maintenance plan in MS SQL that does a full back-up. We exclude the actual data files and include the log files (full and transaction). We have the schedule run about 15 min before Bacula runs on the client. I think a Client Run Before Job directive would be a better option, though.

Robert

On 10/11/07 1:09 AM, Matthias Kellermann [EMAIL PROTECTED] wrote:

> Hi list,
> has anyone backed up a MS SQL server with Bacula? Is this somehow possible - are there any plugins for that, or do I have to use the built-in backup functionality of the MS SQL server and back up the database dumps / transaction logs later with Bacula?
> Matthias

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
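A sketch of the Job resource using the client-side run-before approach (the script path and names are hypothetical; the script would invoke the MS SQL backup so the dump is always fresh when Bacula picks it up):

```
Job {
  Name = "winsql-backup"
  Client = winsql-fd
  ...
  # Runs on the client immediately before the file backup starts.
  Client Run Before Job = "C:/bacula/scripts/dump_mssql.cmd"
}
```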
Re: [Bacula-users] LTO 3, Volume Bytes
Yes, that is probably right. I've gotten 1 TB on LTO-2 tapes a couple of times; it just means that you have highly compressible data. I've found that my incrementals and differentials get much higher compression than my full back-ups.

Robert

On 9/27/07 5:02 AM, hgrapt [EMAIL PROTECTED] wrote:

> I'm using a Quantum Autoloader with LTO 3 tapes (400/800 GB) with HW compression on. I'm just wondering if the output from bacula is correct?
> Volume Bytes: 1,470,728,448,000 (1.470 TB)
> It's still writing
> Thank you

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
Re: [Bacula-users] LTO 3, Volume Bytes
I've gotten almost 1 TB on an LTO2 tape. It was filled with daily incremental jobs, which were mostly highly compressible log files. From what I've gathered, the Volume Bytes are uncompressed bytes: since the compression is done in the hardware, Bacula doesn't know the 'true' bytes on tape. It would be nice if Bacula could report the true bytes; then we could get an accurate % filled statistic. As it is, I just have to keep wondering how much tape is left and how many more jobs will fit on it, especially when compression is high.

Robert

On 9/27/07 6:07 AM, John Drescher [EMAIL PROTECTED] wrote:

> On 9/27/07, hgrapt [EMAIL PROTECTED] wrote:
>> I'm using a Quantum Autoloader with LTO 3 tapes (400/800 GB) with HW compression on. I'm just wondering if the output from bacula is correct?
>> Volume Bytes: 1,470,728,448,000 (1.470 TB)
>
> I believe so. It said that Bacula wrote 1.47TB to your 400GB tape. You must have a lot of highly compressible data (text) and very few compressed files in your fileset. The 2:1 number is just a guess, and for me it is for the most part too high, as I mostly get 1.5:1.
> John

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
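Since the drive does the compressing, the ratio has to be inferred from what Bacula reports. A quick back-of-the-envelope check of the figure in this thread, assuming LTO-3's 400 GB native capacity:

```shell
# Implied hardware compression ratio from the Volume Bytes in this thread.
volume_bytes=1470728448000                   # bytes Bacula reports written
native_bytes=$((400 * 1000 * 1000 * 1000))   # LTO-3 native capacity
ratio_x10=$(( volume_bytes * 10 / native_bytes ))   # ratio scaled by 10
echo "approx $((ratio_x10 / 10)).$((ratio_x10 % 10)):1 compression"
# prints: approx 3.6:1 compression
```

That is well above the nominal 2:1, which fits John's point that the 2:1 figure is only a guess.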
Re: [Bacula-users] BACULA/VMWARE crashes entire system [ON-GOING PROBLEM] [UPDATE]
> I have skipped backing up the vmware disk files on the host system, and Bacula has not crashed the server in 3 days. I will have to do some more extensive testing, but it appears I will have to implement a better strategy for backing up the vmware disk files of systems that are on-line. The LVM snapshot idea sounds like a good one. I do not know why backing up those files causes a hard crash, but at least for the time being the environment appears to be more stable.

I will repeat my suggestion of using VMware's tools to create snapshots and then save the snapshot. I've done this on ESX server, all scripted, and it works great. It basically gives you a crashed-state restore; that may be improved in a soon-to-be-released version of ESX.

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
Re: [Bacula-users] Solved - How to mount tape in second drive
This is quite old, but I found the solution to my problem. I'm posting here for archival purposes in case someone else runs into the same problem. The solution was to put the autochanger name as the Device in the Storage section of the director's configuration. I had the actual drive name listed, and as such Bacula would only act on the first device, which was the first drive. This was documented in the manual; I found it while looking for something else.

Robert

On 7/31/07 3:29 PM, Robert LeBlanc [EMAIL PROTECTED] wrote:

> I upgraded to 2.1.28 to see if I could get this working. I've set up our autochanger with two drives, but I can't mount a tape in the second drive. When asked for the drive I type in '1', but it says that it is mounting it in drive '0'. Do I have the format wrong?
>
> *mount
> The defined Storage resources are:
>      1: PV132T
>      2: Neo8000
>      3: File
> Select Storage resource (1-3): 2
> Enter autochanger drive[0]: 1
> Enter autochanger slot: 12
> 3301 Issuing autochanger loaded? drive 0 command.
> 3302 Autochanger loaded? drive 0, result: nothing loaded.
> 3304 Issuing autochanger load slot 12, drive 0 command.
> 3305 Autochanger load slot 12, drive 0, status is OK.
> Device status:
> Autochanger Neo8000 with devices:
>    Drive-1 (/dev/nst0)
>    Drive-2 (/dev/nst1)
> Device Drive-1 (/dev/nst0) is mounted with:
>     Volume:     12L3
>     Pool:       *unknown*
>     Media type: LTO3
> Slot 12 is loaded in drive 0.
> Total Bytes Read=64,512 Blocks Read=1 Bytes/block=64,512
> Positioned at File=0 Block=0
> Device Drive-2 (/dev/nst1) is not open.
> Drive 1 status unknown.

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
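In config form, the fix is that the director's Storage resource must name the Autochanger resource from bacula-sd.conf rather than an individual drive. A sketch with placeholder address and password:

```
Storage {
  Name = Neo8000
  Address = storage.example.com   # placeholder
  SDPort = 9103
  Password = "storage-password"   # placeholder
  Device = Neo8000                # the Autochanger name, NOT Drive-1 or Drive-2
  Media Type = LTO3
  Autochanger = yes
}
```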
Re: [Bacula-users] BACULA/VMWARE crashes entire system [ON-GOING PROBLEM]
Try excluding the VM's vmdk file directory from Bacula. My guess is that it is trying to read an ever-changing file (the virtual hard disk). If you want to back up your VMs while they are running, look into automating snapshots, and be sure to exclude the portion of the snapshot that is holding the changes (usually name_of_VM-snap.vmdk or something like that).

Robert LeBlanc

On 9/19/07 10:22 AM, Scott Ruckh [EMAIL PROTECTED] wrote:

> I am running CentOS 4.5 x64 with self-compiled kernel 2.6.13.4. I have installed the latest Bacula 2.2.4 Director, Client, and Storage daemon on this server. Bacula runs flawlessly most of the time. Unfortunately, when I run VMware Workstation and have a virtual machine running, Bacula crashes the entire system. I am running the latest version of the 5.x series of VMWare. I am also running fluxbox as my window manager, although I don't think that has anything to do with the problem. This problem has existed with all versions of Bacula starting with 1.3.8.11, which is the first version I installed.
> My backups are to an external USB disk connected to the host OS. The virtual machine is not configured with a USB port, because at one time I thought there might be contention between the physical USB disk and the USB port configured in the virtual machine. All virtual machines and the host system work fine when Bacula backups are not running. I have tested this with all sorts of guest virtual machines and the results are the same; Bacula will crash the host machine (leaving it completely unusable). The only recovery method is to reboot the server.
> Is anyone else successfully running a similar environment? As Bacula is the only program that appears to cause the problem, I am assuming it is a problem with Bacula. If anyone has this type of environment working successfully I would like to hear about it. Thanks.

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
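A sketch of that exclusion in FileSet form (the path is hypothetical; point it at wherever the .vmdk files actually live):

```
FileSet {
  Name = "Host-no-VM-disks"
  Include {
    Options { signature = MD5 }
    File = /
  }
  Exclude {
    # Skip the ever-changing virtual disk files while the guests run.
    File = /var/lib/vmware
  }
}
```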
[Bacula-users] Bcopy
Can someone point me to a resource on how to use bcopy? I've looked at the manual, but it's pretty bare. I need to copy some LTO2 tapes to LTO3. I've mucked up my pools so badly that migration is not working at all. My idea is to bcopy my LTO2 tapes to disk (or straight to LTO3), bscan the LTO3 tape in, and then purge the LTO2 tape. Anyone see a problem with that?

For those wanting more detail about how I've mucked up my pools: when we got our new library, I renamed the old pools to include LTO2 in the name and then gave the new pools the same names as the old ones. When I try to migrate jobs, I get all kinds of cyclic errors; it keeps trying to write to itself, or into the "same" pool (which is really a different pool), and other weird things.

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
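For reference, a bcopy invocation would look roughly like this (device names and volume labels are made up; check the Volume Utility Tools chapter for your version's exact options):

```
bcopy -v -c /etc/bacula/bacula-sd.conf \
      -i Monthly-LTO2-0001 -o Monthly-0001 \
      /dev/nst2 /dev/nst0    # LTO2 source drive, LTO3 destination drive
```

Note that bcopy does not update the catalog, which is why the bscan step afterwards is needed.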
[Bacula-users] Errors migrating jobs
I've been trying to get jobs migrated off my old tape library onto my new tape library. I've followed the documentation online, but it seems like it is trying to go backwards (new library to old library). I've tried several different things, and I keep getting an error that says "This shouldn't happen". I'm using the Bacula 2.2.0 Debian packages from sid on Lenny, but I had the same problem with 2.1.28 that I built myself. A few things catch my attention:

06-Sep 15:24 babacula-dir: Job queued. JobId=6256
06-Sep 15:24 babacula-dir: Migration JobId 6256 started.
06-Sep 15:24 babacula-dir: The following 1 JobIds were chosen to be migrated: 3693
06-Sep 15:24 babacula-dir: Migration using JobId=3693 Job=nightwing.2007-08-01_08.49.46
06-Sep 15:24 babacula-dir: Bootstrap records written to /var/lib/bacula/babacula-dir.restore.54.bsr
06-Sep 15:24 babacula-dir: The job will require the following
   Volume(s)         Storage(s)        SD Device(s)
===========================================================
   61L3              Neo8000           Drive-1
06-Sep 15:24 babacula-dir: Start Migration JobId 6256, Job=Migrate_volume.2007-09-06_15.24.49
06-Sep 15:24 babacula-dir: Job queued. JobId=6258
06-Sep 15:24 babacula-dir: Migration JobId 6258 started.
06-Sep 15:24 babacula-dir: Using Device Drive-1
06-Sep 14:57 lsbacsd0-sd: acquire.c:115 Changing device. Want Media Type=LTO3 have=LTO2 device=IBM-1 (/dev/nst2)
06-Sep 14:57 lsbacsd0-sd: Migrate_volume.2007-09-06_15.24.49 Fatal error: askdir.c:332 NULL Volume name. This shouldn't happen!!!

First, it says that it will use 61L3, which is not in the source or destination pool; it is also in the new library. It then says that it wants media type LTO3 but has LTO2, which suggests it might be trying to use the right device. Then it has a NULL Volume name and says "This shouldn't happen".
The error output seems correct:

Build OS:               i486-pc-linux-gnu debian lenny/sid
Prev Backup JobId:      3664
New Backup JobId:       6257
Migration JobId:        6256
Migration Job:          Migrate_volume.2007-09-06_15.24.49
Backup Level:           Full
Client:                 lsbacsd0-fd
FileSet:                Windows 2007-03-22 16:44:22
Read Pool:              Monthly (From Job resource)
Read Storage:           PV132T (From Pool resource)
Write Pool:             Monthly-new (From Job Pool's NextPool resource)
Write Storage:          Neo8000 (From Storage from Pool's NextPool resource)
Start time:             06-Sep-2007 15:24:53
End time:               06-Sep-2007 15:24:54
Elapsed time:           1 sec
Priority:               10
SD Files Written:       0
SD Bytes Written:       0 (0 B)
Rate:                   0.0 KB/s
Volume name(s):
Volume Session Id:      209
Volume Session Time:    1189109201
Last Volume Bytes:      0 (0 B)
SD Errors:              0
SD termination status:  Error
Termination:            *** Migration Error ***

The read/write pools and storage are correct. My director conf is:

Pool {
  Name = Monthly-new
  Volume Use Duration = 20d
  Pool Type = Backup
  Storage = Neo8000
  AutoPrune = yes
  VolumeRetention = 2 years
  Recycle = yes
  Label Format = Monthly-
}

Pool {
  Name = Monthly
  Volume Use Duration = 20d
  Pool Type = Backup
  Storage = PV132T
  AutoPrune = yes
  VolumeRetention = 2 years
  Recycle = yes
  Label Format = Monthly-LTO2-
  Next Pool = Monthly-new
}

Job {
  Name = Migrate_volume
  Type = Migrate
  Level = Full
  Client = lsbacsd0-fd
  File Set = Windows
  Messages = Standard
  Pool = Monthly
  Maximum Concurrent Jobs = 4
  Selection Type = Volume
  Selection Pattern = 0*
}

When we got our new library, I renamed the Monthly pool to Monthly-LTO2 and created a new Monthly pool on the Neo8000 so that jobs would run there. I tried migrating jobs with that configuration and got an error that Monthly did not have a next pool. So I renamed Monthly to Monthly-new and Monthly-LTO2 back to Monthly, and that is the config and error above. I've also tried migrating to a new pool that did not exist in the old library, and I get the same errors as above.
The old library was attached to the director machine; I was having problems, so I attached it to the SD that has the Neo8000, meaning both libraries are now attached to the same computer. Any help or pointers would be appreciated.

Thanks,
Robert
Life Sciences Computer Support
Brigham Young University

- This SF.net email is sponsored by: Splunk Inc. Still grepping through log files to find problems? Stop. Now Search log events and configuration files using AJAX and a browser. Download your FREE copy of Splunk now
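Before launching the migration again, it can help to confirm from bconsole which volumes actually sit in the read and write pools, since the job picked a volume (61L3) that belongs to neither. A sketch, using the pool names from the configuration above:

```
*show pools
*list volumes pool=Monthly
*list volumes pool=Monthly-new
```

If 61L3 appears in neither list, one thing worth checking is whether the Selection Pattern of 0* is matching volumes outside the intended read pool.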
Re: [Bacula-users] Dealing with failed/missed jobs
On 8/7/07 5:32 PM, Charles Sprickman [EMAIL PROTECTED] wrote:

> Hi all,
>
> I'm having some trouble figuring out how to catch up when someone has forgotten to put a tape in, or if I manually schedule a job that requires a different pool than the tape that is loaded. I think a real-world example is in order. My fulls are on the first weekend of the month, diffs each subsequent weekend, then incrementals on weekdays. No one is in the office Sat/Sun to change tapes. This past weekend I mistakenly asked for a tape from the weekly pool to be inserted. Unfortunately, I had forgotten this was a new month. So on Sunday afternoon when bacula was going to do a run, it wanted to do a Full and it wanted a tape from the Monthly pool. No one was around, so the jobs did not start. Monday I asked for someone to put in the next Monthly tape, but then that night bacula wanted a Daily. This is where I get confused. If a job fails simply due to the wrong tape, how do I make bacula re-run the job and run it to the appropriate pool? If I let this slide, is bacula simply going to wait until the first weekend in September to do a full run? I'd really like to get one in ASAP.

I've just run the jobs manually and modified the job to be the right level and the right pool. Kind of a pain sometimes when a lot of jobs fail (we have almost 30 clients). I would be interested in a batch restart too.

> This sort of mishandling of tapes will likely not be a one-time occurrence, plus there are issues of people going on vacation and similar situations where there will be no operator on site to swap tapes. How do other people deal with this? What happens to these failed jobs in the catalog? Should they be deleted? Is there a way to reschedule them all? Another thing that I have not figured out is how to see what bacula thinks its next run will be (what hosts, what level, what pool).
> I'd like to know this for troubleshooting purposes as well as to try and script something to give people an advance warning about what tape should be in the drive each night.

You can do a status on the director and it will tell you all that info in the top portion of the screen except the pool (it does tell you the tape it thinks it will use, which can change if the tape fills up).

> And lastly, any plans to have the spool act like it does in Amanda? Meaning that if you have the space and you don't have the right tape in, bacula will spool all the jobs until the right tape ends up in the drive. Or perhaps it is possible in some way that I'm not seeing.

That is a cool feature that would be pretty nifty. We have a changer so it's not as big a deal, but it sounds like a great feature.

> Any help is appreciated; we're very happy so far with bacula but for this little issue of our sneakernet changer not being 100% reliable. :)
>
> Thanks,
> Charles

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
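On the batch-restart question, one lighter-weight option than editing each job interactively is to override the level and pool directly on the bconsole run command line (the job and pool names below are illustrative, not from Charles's config):

```
*run job=client1-backup level=Full pool=Monthly yes
```

The trailing `yes` skips the confirmation prompt, so a short script feeding lines like this into bconsole can resubmit many failed jobs in one pass.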
Re: [Bacula-users] space issues
Not without losing all your backups. With disk, it is best to set a volume size limit so that Bacula will create multiple backup files. These will look like tapes, and Bacula will be able to prune and recycle them, freeing up disk space the size of each backup file. You may be able to use bcopy to extract the backup into another set of files, but I'm not sure, and it would require more disk space.

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Megan Kispert
Sent: Wednesday, August 01, 2007 7:55 AM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] space issues

Morning, I'm running bacula-2.1.26 on a CentOS 4.5 server. I have my backups going to disk. One of my disks ran out of space due to a failure on my part to exclude a directory that shouldn't have been backed up. I have two volumes on this disk. I tried to delete jobs for this particular problem client, and I also used prune to try to clean up the volumes, files, and jobs, but I cannot get the actual used disk space to budge. Is there a way to delete files from the volume?

-megan
| Megan Kispert | Code: 423 | GSFC: 301-614-5410 | ADNET: 301.352.4632 | [EMAIL PROTECTED] |
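A sketch of the size-capped disk pool Robert describes (the names and values here are illustrative, not from Megan's configuration): with Maximum Volume Bytes set, Bacula starts a new file volume when the cap is reached, and pruning and recycling then free space one volume file at a time.

```
Pool {
  Name = DiskPool
  Pool Type = Backup
  Maximum Volume Bytes = 5G    # roll over to a new file volume at 5 GB
  Maximum Volumes = 100        # keep the disk from filling entirely
  AutoPrune = yes
  Volume Retention = 30 days
  Recycle = yes
  Label Format = Disk-
}
```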
Re: [Bacula-users] Moving to new library
More info on this. I put one of the old tapes in the new library, and the barcode reader reads it differently than the old one does: the new library appends "L2" to the volume name. I would guess that if this were not the case, Bacula would not have a problem. So before I go through the database and rename the volumes to see if that works, does anyone have a suggestion?

Catalog record for Volume 99L3 updated to reference slot 99.
Volume 13L2 not found in catalog. Slot=500 InChanger set to zero.
Catalog record for Volume 000100L3 updated to reference slot 100.

Thanks,
Robert

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Robert LeBlanc
Sent: Monday, July 30, 2007 3:59 PM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Moving to new library

We just got a shiny new tape library and I'd like to move our old LTO2 tapes into it from our old one. Anyone have experience moving tapes like this? I've set up a new server to act as just the SD and have the new library connected to it. It is labeling tapes right now. Our old library is connected to the SD/DIR. The thing that concerns me is that the barcodes overlap, but the old library is LTO2 and the new one is LTO3. It seems that the LTO format is part of the barcode of the new tapes, but the old ones don't record it.

Old tapes:
Storage Element 1:Full :VolumeTag=01
Storage Element 2:Full :VolumeTag=02
Storage Element 3:Full :VolumeTag=03

New tapes:
Storage Element 1:Full :VolumeTag=01L3
Storage Element 2:Full :VolumeTag=02L3
Storage Element 3:Full :VolumeTag=03L3

I think I should be able to just move the tapes to the new library, but I'm not sure if Bacula would get confused.

Thanks,
Robert
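If the rename route is attempted, the catalog change would presumably be a single UPDATE against the Media table. An untested sketch for a MySQL catalog (back up the catalog first, and tighten the WHERE clause so it touches only the affected volumes):

```
-- Hypothetical: append the "L2" suffix that the new library's
-- barcode reader reports onto the old volumes' catalog names.
UPDATE Media
   SET VolumeName = CONCAT(VolumeName, 'L2')
 WHERE MediaType = 'LTO2'
   AND VolumeName NOT LIKE '%L2';
```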
[Bacula-users] Recall: Moving to new library
Robert LeBlanc would like to recall the message, [Bacula-users] Moving to new library.
Re: [Bacula-users] Moving to new library
Another update: I changed the record in the database to reflect the way the barcode reader reports the volume, and it now seems to be recognized. However, it is still listed as media type LTO2, and as such the library cannot find a suitable device to read the tape, even though LTO3 drives can read LTO2 tapes. Has this been addressed in a newer release? I am running 2.0.3 from Debian packages.

Thanks,
Robert

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
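One possible (untested) workaround for the media-type mismatch is to retag the old volumes in the catalog so the director considers the LTO3 drives usable for them. This assumes the drives really can read the LTO2 media, and note that Bacula may then also try to write to those tapes as LTO3, so treat this as a sketch against a backed-up MySQL catalog, not a recommendation:

```
-- Hypothetical: mark the migrated LTO2 volumes as LTO3 so the
-- new library's drives are considered suitable for reading them.
UPDATE Media
   SET MediaType = 'LTO3'
 WHERE MediaType = 'LTO2';
```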
[Bacula-users] How to mount tape in second drive
I upgraded to 2.1.28 to see if I could get this working. I've set up our autochanger with two drives, but I can't mount a tape in the second drive. When asked for the drive I type in '1', but it says that it is mounting it in drive '0'. Do I have the format wrong?

*mount
The defined Storage resources are:
     1: PV132T
     2: Neo8000
     3: File
Select Storage resource (1-3): 2
Enter autochanger drive[0]: 1
Enter autochanger slot: 12
3301 Issuing autochanger loaded? drive 0 command.
3302 Autochanger loaded? drive 0, result: nothing loaded.
3304 Issuing autochanger load slot 12, drive 0 command.
3305 Autochanger load slot 12, drive 0, status is OK.
...
Device status:
Autochanger Neo8000 with devices:
   Drive-1 (/dev/nst0)
   Drive-2 (/dev/nst1)
Device Drive-1 (/dev/nst0) is mounted with:
    Volume:      12L3
    Pool:        *unknown*
    Media type:  LTO3
    Slot 12 is loaded in drive 0.
    Total Bytes Read=64,512 Blocks Read=1 Bytes/block=64,512
    Positioned at File=0 Block=0
Device Drive-2 (/dev/nst1) is not open.
    Drive 1 status unknown.

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
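As a point of comparison, the drive can also be named directly on the mount command line rather than at the interactive prompt, which makes it easier to see whether the drive number is actually being honoured. A sketch using the storage name from the session above (exact keyword support may vary by version):

```
*mount storage=Neo8000 drive=1 slot=12
```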
[Bacula-users] Moving to new library
We just got a shiny new tape library and I'd like to move our old LTO2 tapes into it from our old one. Anyone have experience moving tapes like this? I've set up a new server to act as just the SD and have the new library connected to it. It is labeling tapes right now. Our old library is connected to the SD/DIR. The thing that concerns me is that the barcodes overlap, but the old library is LTO2 and the new one is LTO3. It seems that the LTO format is part of the barcode of the new tapes, but the old ones don't record it.

Old tapes:
Storage Element 1:Full :VolumeTag=01
Storage Element 2:Full :VolumeTag=02
Storage Element 3:Full :VolumeTag=03

New tapes:
Storage Element 1:Full :VolumeTag=01L3
Storage Element 2:Full :VolumeTag=02L3
Storage Element 3:Full :VolumeTag=03L3

I think I should be able to just move the tapes to the new library, but I'm not sure if Bacula would get confused.

Thanks,
Robert

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882
Re: [Bacula-users] Setting up a Dell PV 132T
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access                     ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access                     ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 02 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access                     ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 03 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access                     ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 10 Lun: 00
  Vendor: DELL     Model: PV-132T-FC        Rev: 42d4
  Type:   RAID                              ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 11 Lun: 00
  Vendor: DELL     Model: PV-132T           Rev: 308D
  Type:   Medium Changer                    ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 12 Lun: 00
  Vendor: IBM      Model: ULTRIUM-TD2       Rev: 53Y3
  Type:   Sequential-Access                 ANSI SCSI revision: 03

My PV132T uses /dev/sg5, but we are fibre attached; it looks like yours is SCSI.

Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Mike Vasquez
Sent: Thursday, July 26, 2007 9:49 AM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Setting up a Dell PV 132T

I am trying to migrate my bacula system to a newer machine. I have installed bacula with mysql with no problems.
I installed a Dell PV 132T on this machine and ran cat /proc/scsi/scsi and got the following results:

Attached devices:
Host: scsi0 Channel: 00 Id: 06 Lun: 00
  Vendor: PE/PV    Model: 1x6 SCSI BP       Rev: 1.0
  Type:   Processor                         ANSI SCSI revision: 02
Host: scsi0 Channel: 02 Id: 00 Lun: 00
  Vendor: MegaRAID Model: LD 0 RAID1 279G   Rev: 521X
  Type:   Direct-Access                     ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 06 Lun: 00
  Vendor: IBM      Model: ULTRIUM-TD2       Rev: 333K
  Type:   Sequential-Access                 ANSI SCSI revision: 03

Then when I run the command mtx -f /dev/sg2 inquiry, I get the following results:

mtx: Request Sense: Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=0 (Unknown?!)
mtx: Request Sense: Sense Key=No Sense
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: Request Sense: Additional Sense Code = 00
mtx: Request Sense: Additional Sense Qualifier = 00
mtx: Request Sense: BPV=no
mtx: Request Sense: Error in CDB=no
mtx: Request Sense: SKSV=no
INQUIRY Command Failed

Would anyone know the cause of this error? I have the device set at the factory default settings, except I have turned off the scanner since I don't have any barcodes.

TIA
Mike

--
View this message in context: http://www.nabble.com/Setting-up-a-Dell-PV-132T-tf4152467.html#a11813266
Sent from the Bacula - Users mailing list archive at Nabble.com.
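When sorting out which /dev/sg node belongs to the changer rather than the tape drive, commands along these lines are often useful; sg_map comes from the sg3_utils package and may not be installed by default:

```
# List the kernel's SCSI devices, then map them to /dev/sg nodes.
# The "Medium Changer" entry is the device mtx should be pointed at.
cat /proc/scsi/scsi
sg_map -i
ls -l /dev/sg*
```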