Re: Monthly archival tapes
Hi, thanks again for the replies. I think Jon's suggestion of using labels such as daily-9b sounds like the best option in my case. And yes, reconnecting the SCSI cables cut amanda's dump time in half! Whoohoo!

If I did change to incremental backups in future, I assume this would then be correct:

dumpcycle 7 days (full backup once every 7 days, with incrementals in between)
runspercycle 5 days (amdump runs 5 times per dumpcycle - Monday to Friday)
tapecycle 21 tapes (4 weeks' worth of backups + 1 tape to prevent overwriting the last level 0)

I also have that problem where my first 5 tapes are requested out of sequence by amanda. So if I leave tapecycle at 21 tapes but actually use 26 tapes, would I also fix that issue by the time amanda has run through the entire set of tapes? Is that correct?

Regular expressions do my head in... but I've come up with this to let me label tapes daily-1 or daily-1b, etc.:

labelstr daily-[0-9][0-9]*[a-z]*

:)

Jon LaBadie wrote:
> On Tue, Aug 08, 2006 at 08:51:32AM -0700, Joe Donner (sent by Nabble.com) wrote:
>> The amanda configuration I'm planning to put live this week includes these entries in amanda.conf:
>> dumpcycle 0 (to force a full backup on every run, because all our data fit comfortably onto a single tape every night, and amdump only runs for 4.5 hours)
> Gee, I thought it took about 53 hours for your taping. Oh, you reconnected the cables, huh? ;))
>> runspercycle 5 days (to do an amdump each day, Monday to Friday)
> No, that is runs per dumpcycle. Your dumpcycle is basically 1 day; you are only doing one run per day.
>> tapecycle 21 tapes (to have 4 weeks' worth of historical backups + one extra tape for good measure)
>> How do I now handle taking out one tape a month for long-term archiving?
> You could use a lower tapecycle setting but really use 21 tapes. Then amanda wouldn't care if one went away. Otherwise, add a newly labeled tape.
>> If my tapes are labelled daily-1, daily-2, etc., then how do I take out one tape a month but make sure that this doesn't confuse amanda, and that I will be able to restore from that tape in the future?
> amadmin config noreuse tapelabel
>> Do I add a new tape each time in my numbering sequence? Can I reuse tape labels but somehow cause amanda to not overwrite the information needed to do restores from the archived tapes?
> Not with amanda's capabilities. How about: when you remove daily-9, you make a new tape as daily-9b or daily-9.1. Reset your label string appropriately.
> --
> Jon H. LaBadie                 [EMAIL PROTECTED]
> JG Computing
> 4455 Province Line Road        (609) 252-0159
> Princeton, NJ 08540-4322       (609) 683-7220 (fax)

--
View this message in context: http://www.nabble.com/Monthly-archival-tapes-tf2073538.html#a5740533
Sent from the Amanda - Users forum at Nabble.com.
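For anyone unsure about that labelstr pattern, it can be sanity-checked against candidate labels with any ordinary regex engine. A small illustration in Python (this assumes the pattern must cover the whole label, and is only a check of the expression itself, not an Amanda tool):

```python
import re

# The labelstr pattern from the message above.
pattern = re.compile(r"daily-[0-9][0-9]*[a-z]*")

# Labels the poster wants to allow, plus one that should be rejected.
labels = ["daily-1", "daily-9", "daily-9b", "daily-21", "monthly-1"]
accepted = [label for label in labels if pattern.fullmatch(label)]
print(accepted)  # all the daily-* labels match, monthly-1 does not
```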
Re: tapelist - how to edit
Thanks for your insight into this. As it stands, I'm only using 5 tapes and amanda hasn't gone into production yet, so it really is a small matter if I lose what has already been backed up to those tapes. In fact, if I amrmtape'd all of the existing tapes and started over again, it wouldn't really matter at this stage. However, I want to put amanda into production this week, so I would like to have everything looking nice and neat. This is purely a convenience to me - I just don't like the tape numbers being out of sequence...

Jon LaBadie wrote:
> On Mon, Aug 07, 2006 at 09:02:48AM -0400, Jon LaBadie wrote:
>> On Mon, Aug 07, 2006 at 02:19:12AM -0700, Joe Donner (sent by Nabble.com) wrote:
>>> Dear all, my tapelist has become a bit confused, due to my own fault, by me adding tapes in the middle of my numbered sequence. At the moment it looks like this:
>>>
>>> 20060803 daily-5 reuse
>>> 20060802 daily-1 reuse
>>> 20060801 daily-3 reuse
>>> 20060731 daily-4 reuse
>>> 20060731 daily-2 reuse
>>>
>>> daily-5 was used last night, and amanda now wants daily-2. What is the correct way of editing this so amanda will use daily-1 next, then daily-2, etc.? Would this be correct (I'm not sure of the implications of the dates in the tapelist, and whether I will need to adjust those dates as well)?
>>>
>>> 20060803 daily-5 reuse
>>> 20060731 daily-4 reuse
>>> 20060801 daily-3 reuse
>>> 20060731 daily-2 reuse
>>> 20060802 daily-1 reuse
>>>
>>> (I'm using full dumps at the moment for every amdump run, if that helps...) Your advice will be much appreciated, as always.
> I think tapelist sequence is the last thing considered when selecting the next tape to be used. Only those used earlier than or equal to tapecycle runs ago are considered. So yes, the date does matter. Be aware that if you overwrite a tape out of sequence, the dumps of each DLE that are higher in level and newer lose a lot of their backup value.
Meant to add: amcheck -s and amtape taper each show which tape will be selected for use next. Save a copy of your tapelist file, then edit and test.

--
View this message in context: http://www.nabble.com/tapelist---how-to-edit-tf2050982.html#a5702341
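Jon's point that the use date, not the position in the tapelist file, drives tape selection can be illustrated with a toy model. This is a sketch of the rule as he describes it, not Amanda's actual code; how ties between tapes with the same date are broken is Amanda's own business (in the thread it asked for daily-2):

```python
# Toy model of next-tape selection: among reusable tapes, the oldest
# use date wins.  Illustration only -- not Amanda's implementation.
tapelist = [
    ("20060803", "daily-5"),
    ("20060802", "daily-1"),
    ("20060801", "daily-3"),
    ("20060731", "daily-4"),
    ("20060731", "daily-2"),
]

# Oldest date first; daily-4 and daily-2 tie on 20060731, and Amanda
# applies its own tie-break among them.
oldest_first = sorted(tapelist, key=lambda entry: entry[0])
print(oldest_first[0][0])  # 20060731 -- a tape from the oldest date goes next
```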
Monthly archival tapes
The amanda configuration I'm planning to put live this week includes these entries in amanda.conf:

dumpcycle 0 (to force a full backup on every run, because all our data fit comfortably onto a single tape every night, and amdump only runs for 4.5 hours)
runspercycle 5 days (to do an amdump each day, Monday to Friday)
tapecycle 21 tapes (to have 4 weeks' worth of historical backups + one extra tape for good measure)

How do I now handle taking out one tape a month for long-term archiving? If my tapes are labelled daily-1, daily-2, etc., then how do I take out one tape a month while making sure that this doesn't confuse amanda, and that I will be able to restore from that tape in the future? Do I add a new tape each time in my numbering sequence? Can I reuse tape labels but somehow cause amanda not to overwrite the information needed to do restores from the archived tapes?

Your insights will be much appreciated.

--
View this message in context: http://www.nabble.com/Monthly-archival-tapes-tf2073538.html#a5709386
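The arithmetic behind that tapecycle figure, written out with just the numbers from the post:

```python
# 5 amdump runs a week (Mon-Fri), four weeks of history kept on tape,
# plus one spare tape "for good measure" -- hence tapecycle 21.
runs_per_week = 5
weeks_of_history = 4
spare_tapes = 1
tapecycle = runs_per_week * weeks_of_history + spare_tapes
print(tapecycle)  # 21
```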
Re: tapelist - how to edit
Does anyone know, please? I will appreciate your thoughts a lot. Thanks.

Joe Donner wrote:
> Dear all, my tapelist has become a bit confused, due to my own fault, by me adding tapes in the middle of my numbered sequence. At the moment it looks like this:
>
> 20060803 daily-5 reuse
> 20060802 daily-1 reuse
> 20060801 daily-3 reuse
> 20060731 daily-4 reuse
> 20060731 daily-2 reuse
>
> daily-5 was used last night, and amanda now wants daily-2. What is the correct way of editing this so amanda will use daily-1 next, then daily-2, etc.? Would this be correct (I'm not sure of the implications of the dates in the tapelist, and whether I will need to adjust those dates as well)?
>
> 20060803 daily-5 reuse
> 20060731 daily-4 reuse
> 20060801 daily-3 reuse
> 20060731 daily-2 reuse
> 20060802 daily-1 reuse
>
> (I'm using full dumps at the moment for every amdump run, if that helps...) Your advice will be much appreciated, as always.

--
View this message in context: http://www.nabble.com/tapelist---how-to-edit-tf2050982.html#a5681796
Re: Dramatic reduction in backup time
I'm running amanda on Red Hat Enterprise 3. I'm a bit new to Linux, but will try to investigate dmesg as you suggested. Otherwise, thanks very much for all your input and help. I'll see how it goes from here on. At least now I know what the probable cause is. Thanks.

Michael Loftis wrote:
> --On August 3, 2006 9:05:03 AM -0700 Joe Donner (sent by Nabble.com) [EMAIL PROTECTED] wrote:
>> Yes, it's the exact same tape drive I've been using extensively for testing, and all that time it had been sitting in the same position on the floor. I moved it on Monday, and then amanda took off like a lightning bolt. Wow, that's something I'll be classing as very weird, but very good. I was impressed by the 9 hours it took to do backups, but now I'm speechless... I've introduced a new tape for tonight's backup run. Will see what it tells me tomorrow. Thanks very much for your help!
> Yeah, it sounds VERY much like a marginal SCSI cable. What OS do you have? Sometimes termination issues will show up in dmesg; dmesg will also usually show the speed at which the device negotiates. It is shown elsewhere too, depending on your OS.

--
View this message in context: http://www.nabble.com/Dramatic-reduction-in-backup-time-tf2044425.html#a5646615
tapelist - how to edit
Dear all,

My tapelist has become a bit confused, due to my own fault, by me adding tapes in the middle of my numbered sequence. At the moment it looks like this:

20060803 daily-5 reuse
20060802 daily-1 reuse
20060801 daily-3 reuse
20060731 daily-4 reuse
20060731 daily-2 reuse

daily-5 was used last night, and amanda now wants daily-2. What is the correct way of editing this so amanda will use daily-1 next, then daily-2, etc.? Would this be correct (I'm not sure of the implications of the dates in the tapelist, and whether I will need to adjust those dates as well)?

20060803 daily-5 reuse
20060731 daily-4 reuse
20060801 daily-3 reuse
20060731 daily-2 reuse
20060802 daily-1 reuse

Your advice will be much appreciated, as always.

--
View this message in context: http://www.nabble.com/tapelist---how-to-edit-tf2050982.html#a5649657
Dramatic reduction in backup time
Dear all,

Something really strange has happened to my amanda setup. I'm inclined to feel impressed, but don't know whether this in fact points to something being wrong. If you compare run times, dump times, tape times, average dump rate and average tape write rate for the four different dates below, you'll see that amanda has suddenly shown a dramatic increase in performance (so to speak), even though the amount of data in question hasn't changed that much at all. I honestly haven't changed anything. The only thing that happened was that I had to run amflush on July 31, because for some reason the amanda-related services on the backup server seemed to have hung (but that's an entirely different issue).

Extracts from the amanda mail report after each amdump (I'm doing full backups with each run at the moment):

July 26
Estimate Time (hrs:min)     0:04
Run Time (hrs:min)          9:43
Dump Time (hrs:min)         6:21      6:21      0:00
Output Size (meg)        56974.2   56974.2       0.0
Original Size (meg)     136759.4  136759.4       0.0
Avg Compressed Size (%)     41.7      41.7        --
Filesystems Dumped           106       106         0
Avg Dump Rate (k/s)       2549.2    2549.2        --
Tape Time (hrs:min)         9:23      9:23      0:00
Tape Size (meg)          56974.3   56974.3       0.0
Tape Used (%)               36.5      36.5       0.0
Filesystems Taped            106       106         0
Avg Tp Write Rate (k/s)   1727.5    1727.5        --

July 27
Estimate Time (hrs:min)     0:04
Run Time (hrs:min)          9:43
Dump Time (hrs:min)         6:17      6:17      0:00
Output Size (meg)        57089.0   57089.0       0.0
Original Size (meg)     137013.6  137013.6       0.0
Avg Compressed Size (%)     41.7      41.7        --
Filesystems Dumped           106       106         0
Avg Dump Rate (k/s)       2587.5    2587.5        --
Tape Time (hrs:min)         9:23      9:23      0:00
Tape Size (meg)          57089.1   57089.1       0.0
Tape Used (%)               36.6      36.6       0.0
Filesystems Taped            106       106         0
Avg Tp Write Rate (k/s)   1730.2    1730.2        --

July 31
Estimate Time (hrs:min)     0:04
Run Time (hrs:min)          4:03
Dump Time (hrs:min)         3:12      3:12      0:00
Output Size (meg)        57265.8   57265.8       0.0
Original Size (meg)     137495.9  137495.9       0.0
Avg Compressed Size (%)     41.6      41.6        --
Filesystems Dumped           109       109         0
Avg Dump Rate (k/s)       5079.1    5079.1        --
Tape Time (hrs:min)         1:17      1:17      0:00
Tape Size (meg)          57265.9   57265.9       0.0
Tape Used (%)               36.7      36.7       0.0
Filesystems Taped            109       109         0
Avg Tp Write Rate (k/s) 12695.6   12695.6        --

August 1
Estimate Time (hrs:min)     0:04
Run Time (hrs:min)          4:27
Dump Time (hrs:min)         3:37      3:37      0:00
Output Size (meg)        57887.1   57887.1       0.0
Original Size (meg)     139210.7  139210.7       0.0
Avg Compressed Size (%)     41.6      41.6        --
Filesystems Dumped           106       106         0
Avg Dump Rate (k/s)       4562.9    4562.9        --
Tape Time (hrs:min)         1:16      1:16      0:00
Tape Size (meg)          57887.2   57887.2       0.0
Tape Used (%)               37.1      37.1       0.0
Filesystems Taped            106       106         0
Avg Tp Write Rate (k/s) 12951.4   12951.4        --

--
View this message in context: http://www.nabble.com/Dramatic-reduction-in-backup-time-tf2044425.html#a5628905
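The reported write rates follow directly from the tape sizes and tape times above; recomputing them shows the jump (rough arithmetic, taking 1 meg = 1024 k):

```python
# Average tape write rate in k/s, from the report's Tape Size (meg)
# and Tape Time (hrs:min) figures.
def write_rate_kps(size_meg, hours, minutes):
    return size_meg * 1024 / (hours * 3600 + minutes * 60)

slow = write_rate_kps(57089.1, 9, 23)  # July 27: roughly 1730 k/s
fast = write_rate_kps(57265.9, 1, 17)  # July 31: roughly 12700 k/s
print(round(slow), round(fast))
```

The fast runs are writing more than seven times as quickly as the slow ones, which matches the Avg Tp Write Rate lines in the reports.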
Re: Dramatic reduction in backup time
That was definitely not the case. The holding disk has been empty all this time, except on 31 July, and after I ran amflush it was empty again. What I don't understand is the change in dump time and tape time - they both more than doubled in speed! How is that possible?

Olivier Nicole wrote:
>> I honestly haven't changed anything. The only thing that happened was that I had to run amflush on July 31, because for some reason the amanda-related services on the backup server seemed to have hung (but that's an entirely different issue).
> That is just a wild guess, but how about: the amflush freed space on your holding disk that had been wasted for ages?
> Olivier

--
View this message in context: http://www.nabble.com/Dramatic-reduction-in-backup-time-tf2044425.html#a5629636
Re: Dramatic reduction in backup time
Well, I may have lied a bit when saying that nothing had changed... What did change was that I moved the server from temporarily sitting on the floor to where it will now stay full-time. So it was a shutdown, unplug the external tape drive and other devices, and then plug them all back in. Also, one of the little screws which connects the SCSI cable to the back of the server broke off long ago and went missing, so the cable is only attached by one screw.

Well, do you agree with me that it is still backing up the same amount of data and all the DLEs? It certainly looks that way from the mail reports. I'm a bit worried because it's looking too good to be true. I also don't see how it can dump 130-odd gigabytes of data in a bit more than 3 hours?? Thanks for your help on this.

Jon LaBadie wrote:
> On Thu, Aug 03, 2006 at 02:46:06AM -0700, Joe Donner (sent by Nabble.com) wrote:
>> That was definitely not the case. The holding disk has been empty all this time, except on 31 July. And after I ran amflush it was empty again. What I don't understand is the increase in dump time and tape time? They both more than doubled in speed! How is that possible?
>> Olivier Nicole wrote:
>>> That is just a wild guess, but how about the amflush freed space on your holding disk that had been wasted for ages?
> Holding disk issues were going to be my guess also. When the run time (wall clock) and the tape time (taper is active) match, as in the first two:
>
>     Run Time (hrs:min)  9:43    Dump Time (hrs:min)  6:21    Tape Time (hrs:min)  9:23
>     Run Time (hrs:min)  9:43    Dump Time (hrs:min)  6:17    Tape Time (hrs:min)  9:23
>
> it generally means your dumps are going straight to tape. Contrast that with the last two, where the time for taping the same amount of data is far lower than the total run time:
>
>     Run Time (hrs:min)  4:03    Dump Time (hrs:min)  3:12    Tape Time (hrs:min)  1:17
>     Run Time (hrs:min)  4:27    Dump Time (hrs:min)  3:37    Tape Time (hrs:min)  1:16
>
> However, if the dumps were going straight to tape, then I would expect the dump time (the cumulative time for dumping each DLE - thus it can exceed the run time if DLEs dump in parallel) to match the tape time. So it would appear the dumps were going to the holding disk but were severely constrained by slow taping. I wonder about the halving of dump time also. There may still have been a problem with the disk system. Did anyone change any SCSI cables or terminators? Or maybe they just jiggled things back there doing other work?

--
View this message in context: http://www.nabble.com/Dramatic-reduction-in-backup-time-tf2044425.html#a5634710
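Jon's diagnostic (tape time close to run time means the taper was busy for virtually the whole run) can be stated as a one-line check. This is only an illustration of the heuristic, not an Amanda utility, and the 0.9 threshold is an arbitrary choice:

```python
# If the taper was active for most of the wall-clock run, taping was
# the bottleneck; otherwise the tape drive kept up with the dumps.
def taping_was_bottleneck(run_minutes, tape_minutes, threshold=0.9):
    return tape_minutes / run_minutes >= threshold

print(taping_was_bottleneck(9 * 60 + 43, 9 * 60 + 23))  # the slow runs: True
print(taping_was_bottleneck(4 * 60 + 3, 1 * 60 + 17))   # the fast runs: False
```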
Re: Dramatic reduction in backup time
Yes, it's the exact same tape drive I've been using extensively for testing, and all that time it had been sitting in the same position on the floor. I moved it on Monday, and then amanda took off like a lightning bolt. Wow, that's something I'll be classing as very weird, but very good. I was impressed by the 9 hours it took to do backups, but now I'm speechless... I've introduced a new tape for tonight's backup run. Will see what it tells me tomorrow. Thanks very much for your help!

Jon LaBadie wrote:
> On Thu, Aug 03, 2006 at 08:25:43AM -0700, Joe Donner (sent by Nabble.com) wrote:
>> Well, I may have lied a bit when saying that nothing had changed... What did change was that I moved the server from temporarily sitting on the floor to where it will now stay full-time. So it was a shutdown, unplug the external tape drive and other devices, and then plug them all back in. Also, one of the little screws which connects the SCSI cable to the back of the server broke off long ago and went missing, so the cable is only attached by one screw.
>> Well, do you agree with me that it is still backing up the same amount of data and all the DLEs? It certainly looks that way from the mail reports. I'm a bit worried because it's looking too good to be true. I also don't see how it can dump 130-odd gigabytes of data in a bit more than 3 hours?? Thanks for your help on this.
> [earlier holding-disk discussion quoted in full in the previous messages]
> Are you using that SDLT drive for which you reported a tapetype back in early June? If so, you have had a problem since then. In June your tapetype showed 2200 k/s. Similarly, your recent slow dumps had a taping speed of about 2500 and 1800 k/s. Your more recent fast dumps were taped at about 13000 k/s. The SDLT is rated at 16000 k/s. I'd say you had tape system problems that were fixed by the physical movement or the reboot.

--
View this message in context: http://www.nabble.com/Dramatic-reduction-in-backup-time-tf2044425.html#a5635442
Correct use of amflush
Hi, I'd appreciate some advice on what is regarded as the correct usage of amflush and of the tapes to use with amflush. If the need to run amflush arises, amanda tells you which tape to use next, or that you can use a new tape. My question is simply this: is it acceptable to use the next (already labelled) tape for the amflush operation, or is there any reason why amflush should rather/only be done with a new tape?

E.g. I use daily-1 for a backup run; the backup fails for some reason and some data is left on the holding disk. Amanda tells me that she expects daily-2 or a new tape for the next run. I run amflush, and amanda wants a tape. Is it all right to use daily-2, or is it better to use a new tape? Thanks very much.

--
View this message in context: http://www.nabble.com/Correct-use-of-amflush-tf2026836.html#a5573711
Re: Correct use of amflush
Aha - thanks for that! That was kind of what I was wondering about - I didn't realise that amanda would add the new tape to the rotation, so I wondered what would happen to that tape afterwards... One more thing, though: how will amanda handle labelling the new tape? Will you be asked to label it first, manually, or does amanda label it automatically? Thanks!

Mitch Collinsworth wrote:
> On Mon, 31 Jul 2006, Joe Donner (sent by Nabble.com) wrote:
>> E.g. I use daily-1 for a backup run, and the backup fails for some reason and some data is on the holding disk. Amanda tells me that she expects daily-2 or a new tape for the next run. I run amflush, and amanda wants a tape. Is it alright to use daily-2, or is it better to use a new tape?
> Either is fine. If you're going to use an existing tape, daily-2 is what amanda will want. At any time it's always acceptable to use a new tape. If you use a new tape, amanda will add it to the normal rotation at this point. What you choose to do is up to local policy.
> -Mitch

--
View this message in context: http://www.nabble.com/Correct-use-of-amflush-tf2026836.html#a5574181
Re: Still struggling with L0
The little bit of experience I have of amanda has shown me that dumpcycle 0 in amanda.conf will force a level 0 (full) dump. I've been running amanda in a test lab with that setting before going into production with incremental backups. Probably not the correct way of doing it in your case, but maybe it works if all else fails?

Alan Pearson wrote:
> Guys, I'm having _real_ trouble trying to force Amanda to do a L0 backup of the system that was affected by Amanda wiping out the last L0. I've tried to force a L0 backup of it, like:
>
> amadmin DailySet1 force qtvpdc.lab:/Shares
> amadmin: qtvpdc.lab:/Shares is set to a forced level 0 at next run.
>
> But when amdump runs, it just schedules a L1 or L2 backup. As you can imagine, I'm not best pleased, as I _still_ haven't been able to get an L0 of this system on tape 5 days later (after severely reducing the filesystem size). Any ideas???
> Cheers
> -- AlanP

--
View this message in context: http://www.nabble.com/Still-struggling-with-L0-tf2027407.html#a5575507
Backup server disaster recovery
Dear all, say your amanda backup server itself dies, and you need to reinstall/recreate it from scratch. You want the new backup server to have available the information needed to find and restore data from tapes, i.e. the info you get when running:

amadmin config find server_name disk

or, as when you restore something from a specific date, when the server tells you which tapes you need to do the restore (while using amrecover). What do you need to back up on the current backup server to enable you to get the new server to this state? Will appreciate your insight. Joe

--
View this message in context: http://www.nabble.com/Backup-server-disaster-recovery-tf1980202.html#a5433481
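What needs saving is Amanda's own metadata: the configuration directory (amanda.conf, disklist, tapelist) and the curinfo and index databases that amadmin and amrecover consult. A sketch of bundling those up with Python's tarfile module - the directory names are placeholders, since real locations vary by build (commonly /etc/amanda and /var/lib/amanda), and the demonstration below runs against a stand-in directory rather than a live server:

```python
import os
import tarfile
import tempfile

def archive_amanda_state(dirs, out_path):
    """Bundle the given Amanda state directories into one gzipped tar."""
    with tarfile.open(out_path, "w:gz") as tar:
        for d in dirs:
            tar.add(d, arcname=os.path.basename(d))
    return out_path

# Demonstration with a stand-in config directory (real paths vary).
root = tempfile.mkdtemp()
config_dir = os.path.join(root, "amanda")
os.makedirs(config_dir)
with open(os.path.join(config_dir, "tapelist"), "w") as f:
    f.write("20060803 daily-5 reuse\n")

out = archive_amanda_state([config_dir], os.path.join(root, "state.tar.gz"))
print(tarfile.is_tarfile(out))
```

Restoring that archive onto the rebuilt server, into the same paths, is what lets "amadmin config find" answer again.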
Re: RE Unravel amstatus output
Well, for what it's worth: I ran a backup job with just the DLEs that had failed to back up/flush, and it all went well. I then ran the exact same job I did on Friday, and it succeeded with no errors this time. I'm beginning to think that the tape used on Friday may be damaged in some way. I'm now using 5 tapes for testing, and will run that job until all the tapes have been used, to see whether the job fails consistently on any particular tape. Thanks very much for your input. It is much appreciated! Regards, Joe

--
View this message in context: http://www.nabble.com/Unravel-amstatus-output-tf1953587.html#a5393206
Unravel amstatus output
I set up Amanda on Friday to do an almost-real backup job. I thought this would be the final test before putting it into operation. When I arrived at work this morning, I was somewhat surprised to see that the Amanda run doesn't seem to have finished. amstatus daily gives me some information, but I'm not sure how to interpret it. There are still 3 files on the holding disk, adding up to about 48GB. The tape drive doesn't seem to be doing anything - it is just sitting there quietly at the moment with no sign of activity. I won't include the entire output of amstatus daily, but here are extracts; please tell me if you see something wrong.

I have many entries like this one - there seems to be one for each DLE:

cerberus:/home                  0   1003801k finished (22:18:15)

Then these entries, which I think are the 2 that failed, as shown later in the summary:

cerberus:/.autofsck             0   planner: [disk /.autofsck offline on cerberus?]
cerberus:/.fonts.cache-1        0   planner: [disk /.fonts.cache-1 offline on cerberus?]

Then these 3, which are the ones still on the holding disk:

minerva:/home                   0   8774296k writing to tape (23:09:07)
minerva:/usr/local/clients      0  32253287k dump done (1:08:27), wait for writing to tape
minerva:/usr/local/development  0   9687648k dump done (23:48:17), wait for writing to tape

And then this summary, which I'm not sure how to interpret:

SUMMARY           part      real  estimated
                            size       size
partition       :  109
estimated       :  107             69631760k
flush           :    0        0k
failed          :    2                    0k            (  0.00%)
wait for dumping:    0                    0k            (  0.00%)
dumping to tape :    0                    0k            (  0.00%)
dumping         :    0        0k         0k (  0.00%) (  0.00%)
dumped          :  107 58148656k  69631760k ( 83.51%) ( 83.51%)
wait for writing:    2 41940935k  48107940k ( 87.18%) ( 60.23%)
wait to flush   :    0        0k         0k (100.00%) (  0.00%)
writing to tape :    1  8774296k  12515695k ( 70.11%) ( 12.60%)
failed to tape  :    0        0k         0k (  0.00%) (  0.00%)
taped           :  104  7433425k   9008125k ( 82.52%) ( 10.68%)
4 dumpers idle  : not-idle
taper writing, tapeq: 2
network free kps:      2000
holding space   : 50295358k ( 49.79%)
dumper0 busy    :  0:00:00  (  0.00%)   [listed below with the others]
 dumper0 busy   :  2:53:47  (  5.67%)
 dumper1 busy   :  0:13:48  (  0.45%)
 dumper2 busy   :  0:00:00  (  0.00%)
 taper busy     :  0:12:38  (  0.41%)
 0 dumpers busy : 2+0:07:56 ( 94.22%)         not-idle: 2+0:00:04 ( 99.73%)
                                            start-wait:   0:07:51 (  0.27%)
 1 dumper busy  :  2:46:29  (  5.43%)         not-idle:   1:20:10 ( 48.15%)
                                    client-constrained:   1:18:08 ( 46.93%)
                                          no-bandwidth:   0:04:16 (  2.57%)
                                            start-wait:   0:03:54 (  2.35%)
 2 dumpers busy :  0:10:34  (  0.35%)  client-constrained: 0:06:22 ( 60.27%)
                                            start-wait:   0:04:05 ( 38.76%)
                                          no-bandwidth:   0:00:06 (  0.96%)
 3 dumpers busy :  0:00:00  (  0.00%)

I would highly appreciate your insight into what is going on, especially for the 3 DLEs that are waiting to be written to tape.

--
View this message in context: http://www.nabble.com/Unravel-amstatus-output-tf1953587.html#a5357597
Re: RE Unravel amstatus output
Good point - and that is why I need help unravelling what it all means. My question now would be: 0.41% of what? What would 100% of that something represent? Constant streaming of data to tape from the holding disk? I've just left it alone to see whether I get different results when running amstatus again, but it seems stuck wherever it is at the moment. The tape drive itself is doing nothing... It really seems as if all went reasonably well and then froze up for some reason. Please help if at all possible.

Cyrille Bollu wrote:
> Looking with my newbie's eyes, it seems that Amanda is running well, just very slowly. And Amanda's log seems to indicate that the problem is on the tape drive side. The only strange thing I see is the following line, which says that your drive is busy only 0.41% of the time:
>
> taper busy : 0:12:38 ( 0.41%)
>
> What does it do the rest of the time???
>
> [EMAIL PROTECTED] wrote on 17/07/2006 10:54:55:
>> I set up Amanda on Friday to do an almost real backup job. I thought this would be the final test before putting it into operation. When I arrived at work this morning, I was somewhat surprised to see that the Amanda run doesn't seem to have finished. [full amstatus extracts quoted in the original message above]

--
View this message in context: http://www.nabble.com/Unravel-amstatus-output-tf1953587.html#a5358022
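As to "0.41% of what": the percentages in that part of the summary are fractions of the total wall-clock time the run has covered so far. Working backwards from the figures quoted above (plain arithmetic on the report, not anything amstatus computes for you):

```python
# "0 dumpers busy: 2+0:07:56 (94.22%)" -- i.e. 48:07:56 was 94.22% of
# the elapsed time, so the whole span is about 51 hours.  Against that,
# "taper busy: 0:12:38" is indeed about 0.41%.
idle_hours = 2 * 24 + 7 / 60 + 56 / 3600
total_hours = idle_hours / 0.9422
taper_busy_hours = 12 / 60 + 38 / 3600
print(round(total_hours, 1), round(100 * taper_busy_hours / total_hours, 2))
```

So 100% would indeed mean the taper streaming for the entire elapsed run; twelve minutes of taper activity in roughly two days is why the figure is so tiny.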
Re: RE Unravel amstatus output
Well, one thing I've noticed is that the DLEs in question are the ones with largest overall size: +/- 8GB +/- 9GB +/- 32GB All the other DLEs (except for the two I mentioned, which are in fact hidden files) have successfully been written to tape and are less than approximately 2GB in size... Cyrille Bollu wrote: [EMAIL PROTECTED] a écrit sur 17/07/2006 11:36:21 : Good point - and that is why I need help unravelling what it all means. My question now would be: 0.41% of what? What would 100% of that something represent? Constant streaming of data to tape from holding disk? AFAIK, 100% would mean constant streaming. I've just left it alone to see if I get different results when subsequently running amstatus, but it seems stuck at wherever it is at the moment. The tape drive itself is doing nothing... It really seems as if all went reasonably well and then froze up for some reason. Yep, it looks like... Please help if at all possible. Cyrille Bollu wrote: Looking with my newbie's eyes it seems that Amanda is running well. Just very slowly. And Amanda's log seems to indicate that the problem is on the tape drive side. The only thing strange that I see is the following line which say that your drive is busy only 0,41% of the time: taper busy : 0:12:38 ( 0.41%) What does it do the rest of the time??? [EMAIL PROTECTED] a écrit sur 17/07/2006 10:54:55 : I set up Amanda on Friday to do an almost real backup job. I thought this would be the final test before putting it into operation. When I arrived at work this morning, I was somewhat surprised to see that the Amanda run doesn't seem to have finished. amstatus daily gives me some information, but I'm not sure how to interpret it. There are still 3 files on the holding disk, adding up to about 48GB. The tape drive doesn't seem to be doing anything - just sitting there quietly at the moment with no sign of activity. 
I won't include the entire output of amstatus daily, but here are extracts, if someone can please tell me if they see something wrong. I have many entries like these - seems to be one for each DLE:

cerberus:/home 0 1003801k finished (22:18:15)

Then these entries, which I think are the 2 that failed, as shown later in the summary:

cerberus:/.autofsck      0 planner: [disk /.autofsck offline on cerberus?]
cerberus:/.fonts.cache-1 0 planner: [disk /.fonts.cache-1 offline on cerberus?]

Then these 3 that are the ones still on the holding disk:

minerva:/home                   0  8774296k writing to tape (23:09:07)
minerva:/usr/local/clients      0 32253287k dump done (1:08:27), wait for writing to tape
minerva:/usr/local/development  0  9687648k dump done (23:48:17), wait for writing to tape

And then this summary, which I'm not sure how to interpret: [...]
Re: RE Unravel amstatus output
When I execute the top command (Red Hat Enterprise 3) for user amanda, I get:

PID  USER   PRI NI SIZE RSS  SHARE STAT %CPU %MEM TIME  CPU COMMAND
2136 amanda 15  0  948  948  836   S    0.0  0.0  0:00  1   amdump
2145 amanda 15  0  1072 1072 844   S    0.0  0.1  0:02  1   driver
2146 amanda 16  0  1536 1536 1388  S    0.0  0.1  0:52  0   taper
2147 amanda 16  0  1560 1560 1396  D    0.0  0.1  0:34  0   taper
2148 amanda 22  0  1120 1120 876   S    0.0  0.1  12:55 0   dumper
2153 amanda 15  0  1120 1120 876   S    0.0  0.1  0:19  0   dumper
2154 amanda 15  0  1044 1044 816   S    0.0  0.1  0:00  1   dumper
2155 amanda 25  0  852  852  708   S    0.0  0.0  0:00  0   dumper

and ps -fu amanda outputs:

UID    PID  PPID C STIME TTY TIME     CMD
amanda 2136 2135 0 Jul14 ?   00:00:00 /bin/sh /usr/sbin/amdump daily
amanda 2145 2136 0 Jul14 ?   00:00:02 /usr/lib/amanda/driver daily
amanda 2146 2145 0 Jul14 ?   00:00:52 taper daily
amanda 2147 2146 0 Jul14 ?   00:00:34 taper daily
amanda 2148 2145 0 Jul14 ?   00:12:55 dumper0 daily
amanda 2153 2145 0 Jul14 ?   00:00:19 dumper1 daily
amanda 2154 2145 0 Jul14 ?   00:00:00 dumper2 daily
amanda 2155 2145 0 Jul14 ?   00:00:00 dumper3 daily

Does this tell anyone anything?

Paul Bijnens wrote: On 2006-07-17 11:36, Joe Donner (sent by Nabble.com) wrote:

Good point - and that is why I need help unravelling what it all means. My question now would be: 0.41% of what? What would 100% of that something represent? Constant streaming of data to tape from the holding disk?

Of the total elapsed time since the program started. But there is a caveat. The amstatus command works by parsing the log file, and the logfile is written to only when there is a change of state in the backup process. So the 0.41% probably means that the last status message written by taper to the logfile was already long ago. It could well be that taper is taping one very large file, but has not yet written that into the log file which amstatus parses.
So, to find out if anything is really still running, do ps -fu amanda on the tape server, and verify whether there is still a taper process (and other processes like driver). If there are, then find out what they are doing (strace -p helps here). You may kill them all, and then clean up the broken pieces by running amcleanup. [...]
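Paul's debugging sequence - check the processes, trace the stuck one, then clean up - boils down to a handful of commands. A sketch using the PIDs from the ps output quoted in this thread and the config name "daily" (adjust both to your setup; strace needs to be installed separately on RHEL 3):

```shell
ps -fu amanda       # are driver/taper/dumpers still alive?
strace -p 2147      # attach to the writing taper: which syscall is it blocked in?

# If the run is truly wedged, give up on it:
kill 2136 2145 2146 2147 2148 2153 2154 2155

amcleanup daily     # tidy up the interrupted run's log files
amflush daily       # then write any leftover holding-disk dumps to tape
```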
Re: RE Unravel amstatus output
Red Hat Enterprise 3 doesn't seem to have strace as a command. Rather than killing the processes manually, I thought I'd reboot the server and see if amcleanup runs as included in /etc/rc.d/rc.local (I thought I may as well test that). The server came back up, and none of the amanda services are active anymore (unsurprisingly). Nothing seemed to happen, so I did a manual amcleanup, with these results:

amcleanup: no unprocessed logfile to clean up.
Scanning /mnt/hdb1...
  20060714: found Amanda directory.

So I'm thinking that this backup run is now finally broken. Next I thought I'd run amflush and see what happens. It outputs this:

Scanning /mnt/hdb1...
  20060714: found Amanda directory.
Today is: 20060717
Flushing dumps in 20060714 to tape drive /dev/nst0.
Expecting tape daily-1 or a new tape. (The last dumps were to tape daily-3)
Are you sure you want to do this [yN]? y
Running in background, you can log off now.
You'll get mail when amflush is finished.

What I notice is that it asks for the tape called daily-1, whereas the tape I used for Friday's backup was daily-3. Does this mean that daily-3 was filled up, and that this caused the whole issue?

Which brings me to another question. I've used these tapes before for testing. Will Amanda have appended Friday's backup to what was already on tape daily-3, or does it overwrite data previously written to that tape each time a new backup runs? The reason I ask is that the tape drive capacity is 160GB, and I believe I'm trying to back up a lot less data than that.

After I rebooted, I got this email from Amanda. As you can see, it only used 4.7% of the tape:

*** THE DUMPS DID NOT FINISH PROPERLY!

These dumps were to tape daily-3. The next tape Amanda expects to use is: daily-1.

FAILURE AND STRANGE DUMP SUMMARY:
cerberus /.fonts.cache-1 lev 0 FAILED [disk /.fonts.cache-1 offline on cerberus?]
cerberus /.autofsck      lev 0 FAILED [disk /.autofsck offline on cerberus?]
STATISTICS:
                           Total      Full     Daily
Estimate Time (hrs:min)     0:04
Run Time (hrs:min)          0:16
Dump Time (hrs:min)         3:07      3:07      0:00
Output Size (meg)        56785.8   56785.8       0.0
Original Size (meg)     136236.1  136236.1       0.0
Avg Compressed Size (%)     41.7      41.7        --
Filesystems Dumped           107       107         0
Avg Dump Rate (k/s)       5169.8    5169.8        --
Tape Time (hrs:min)         0:13      0:13      0:00
Tape Size (meg)           7259.3    7259.3       0.0
Tape Used (%)                4.7       4.7       0.0
Filesystems Taped            104       104         0
Avg Tp Write Rate (k/s)   9801.6    9801.6        --

USAGE BY TAPE:
Label    Time    Size    %    Nb
daily-3  0:13  7259.3   4.7  104

And then, after I ran amflush, I got an email saying this (I didn't actually put daily-1 into the drive):

*** A TAPE ERROR OCCURRED: [cannot overwrite active tape daily-3].
Some dumps may have been left in the holding disk.
Run amflush again to flush them to tape.
The next tape Amanda expects to use is: daily-1.

And when I now do amstatus daily, I get:

Using /var/lib/amanda/daily/amflush.1 from Mon Jul 17 12:58:42 BST 2006
minerva:/home                   0  8774296k waiting to flush
minerva:/usr/local/clients      0 32253287k waiting to flush
minerva:/usr/local/development  0  9687648k waiting to flush

I feel a headache coming on again... Any suggestions as to how best to proceed?

Paul Bijnens wrote: On 2006-07-17 13:32, Joe Donner (sent by Nabble.com) wrote:

and ps -fu amanda outputs:

UID    PID  PPID C STIME TTY TIME     CMD
amanda 2136 2135 0 Jul14 ?   00:00:00 /bin/sh /usr/sbin/amdump daily
amanda 2145 2136 0 Jul14 ?   00:00:02 /usr/lib/amanda/driver daily
amanda 2146 2145 0 Jul14 ?   00:00:52 taper daily
amanda 2147 2146 0 Jul14 ?   00:00:34 taper daily
amanda 2148 2145 0 Jul14 ?   00:12:55 dumper0 daily
amanda 2153 2145 0 Jul14 ?   00:00:19 dumper1 daily
amanda 2154 2145 0 Jul14 ?   00:00:00 dumper2 daily
amanda 2155 2145 0 Jul14 ?   00:00:00 dumper3 daily

Does this tell anyone anything?

It means the processes are still alive. Just a wild guess... Maybe you have specified a manual changer, and Amanda is just waiting for you to manually insert the next tape? Now find out what they are doing, and why it takes days to proceed.
As root or amanda you can trace a process and see if it is doing something else, or is just sleeping on some event that will not happen:

strace -p pid-of-the-process

There are two taper processes: one reads from the holding-disk file into a shared memory region, while the other writes the bytes from shared memory to tape. When there is no holding-disk file, then maybe the reader-taper is reading from a network socket? And maybe
Re: RE Unravel amstatus output
Ok, so I ran amflush again. It flushed 2 of the 3 outstanding DLEs' data to daily-1, but the email I received includes:

The dumps were flushed to tape daily-1. The next tape Amanda expects to use is: daily-2.

FAILURE AND STRANGE DUMP SUMMARY:
minerva /usr/local/clients lev 0 FAILED [input: Can't read data: : Input/output error]

And the holding disk still contains a folder with Friday's date and a 30GB file for the DLE mentioned above. What on earth is going on??

Joe Donner wrote: [...]
Re: RE Unravel amstatus output
Sorry, I've already gone and deleted that file...

Alexander Jolk wrote: Joe Donner (sent by Nabble.com) wrote:

FAILURE AND STRANGE DUMP SUMMARY:
minerva /usr/local/clients lev 0 FAILED [input: Can't read data: : Input/output error]

And the holding disk still contains a folder with Friday's date and a 30GB file for the DLE mentioned above.

Can you try cat'ting the file to /dev/null? My first guess would be that some blocks of the holding disk file are unreadable due to a disk failure.

Alex
-- Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 / fax +33-1 42 68 18 29

-- View this message in context: http://www.nabble.com/Unravel-amstatus-output-tf1953587.html#a5361151 Sent from the Amanda - Users forum at Nabble.com.
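Alexander's suggestion - reading the suspect holding-disk file end to end - can be scripted so that a bad block shows up as a failure exit status. A minimal sketch; the mktemp file here is just a stand-in so the snippet is self-contained, and in practice you would point f at the real dump file under the holding disk (e.g. somewhere under /mnt/hdb1/20060714/):

```shell
# Stand-in for the real holding-disk file; point "f" at the actual dump file.
f=$(mktemp)
echo "demo data" > "$f"

# dd reads every block; an I/O error from a bad sector makes it exit nonzero.
if dd if="$f" of=/dev/null bs=1M 2>/dev/null; then
  status=readable
else
  status=unreadable
fi
echo "$f: $status"
rm -f "$f"
```

If this reports unreadable on the real file, the holding disk itself is the likely culprit rather than Amanda.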
Re: Exclude list entries
Thanks for your replies. I could have phrased my question slightly better: I want to know what the exclude list should look like to make sure I exclude all files with these extensions in any directory/subdirectory contained under the disklist entries:

.ora
.dbf
.dmp
.dmp.gz.xx

If I understand you correctly, then using the following will only exclude files with those extensions located within top-level directories, and not necessarily subdirectories:

./*.ora

Is that correct?

Olivier Nicole wrote:

No, there really is a difference between excluding /foo, ./foo, and foo. As you are backing up ., /foo will not match anything. Of course ./foo will match any foo in the top-level directory. foo will match any foo in any directory under .

Not so sure:

banyanon: /usr/local/bin/tar --version
tar (GNU tar) 1.13.25
Copyright (C) 2001 Free Software Foundation, Inc.
This program comes with NO WARRANTY, to the extent permitted by law. You may redistribute it under the terms of the GNU General Public License; see the file named COPYING for details. Written by John Gilmore and Jay Fenlason.

banyanon: /usr/local/bin/tar cfv /dev/null . | grep jpg
./Images/Picture-2 003.jpg
./Images/Picture-2 005.jpg
./maman.jpg
banyanon: cat ../excl
./*.jpg
banyanon: /usr/local/bin/tar cfv /dev/null --exclude-from=../excl . | grep jpg
banyanon: cat ../excl2
*.jpg
banyanon: /usr/local/bin/tar cfv /dev/null --exclude-from=../excl2 . | grep jpg

excl contains ./*.jpg and excl2 contains *.jpg, but they both exclude any .jpg file at any level of the hierarchy. OK, funnily enough, tar 1.13.19 on another machine gives different results, but still, tar 1.13.25 is the recommended version, isn't it?

Olivier

-- View this message in context: http://www.nabble.com/Exclude-list-entries-tf1936459.html#a5323781 Sent from the Amanda - Users forum at Nabble.com.
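Olivier's experiment is easy to reproduce on scratch data. A small sketch with made-up file names; note that whether the anchored ./*.ora pattern also catches files in subdirectories depends on the tar version (newer GNU tars control this with --wildcards-match-slash, which is why 1.13.19 and 1.13.25 disagreed above), whereas the plain unanchored *.ora pattern excludes a matching file at any depth:

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/data/sub"
touch "$tmp/data/top.ora" "$tmp/data/sub/deep.ora" "$tmp/data/keep.txt"
cd "$tmp/data"

printf '*.ora\n'   > ../excl-any       # unanchored: matches at any depth
printf './*.ora\n' > ../excl-anchored  # anchored form; reach is version-dependent

# With the unanchored pattern, no .ora file survives at any level:
left_any=$(tar cf /dev/null -v --exclude-from=../excl-any . | grep '\.ora' || true)
echo "left with *.ora pattern: ${left_any:-none}"

# The anchored pattern's effect on sub/deep.ora varies with tar version:
left_anchored=$(tar cf /dev/null -v --exclude-from=../excl-anchored . | grep '\.ora' || true)
echo "left with ./*.ora pattern: ${left_anchored:-none}"
```

So for "exclude these extensions everywhere", the unanchored *.ora form is the safe choice regardless of tar version.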
Exclude list entries
Dear all, I've now configured an exclusion list for use with Amanda. At first I used these entries: *.ora *.dbf *.dmp *.dmp.gz* but then I re-read the Amanda documentation, which states: When AMANDA attempts to exclude a file or directory it does so relative to the area being archived. For example if /var is in your disklist and you want to exclude /var/log/somefile, then your exclude file would contain ./log/somefile So I've changed the entries to these: ./*.ora ./*.dbf ./*.dmp ./*.dmp.gz* amcheck config reports no errors or problems with either set of entries. I really just want to confirm whether or not I could have used the entries without ./ and what the difference between the different sets of entries would be. Thanks very much. Joe -- View this message in context: http://www.nabble.com/Exclude-list-entries-tf1936459.html#a5305717 Sent from the Amanda - Users forum at Nabble.com.
amstatus during active dump
When you run amstatus config while a dump is active, does it show you a progress report (for want of a better term)? Thanks. -- View this message in context: http://www.nabble.com/amstatus-during-active-dump-tf1930763.html#a5288402 Sent from the Amanda - Users forum at Nabble.com.
using amflush
Dear all, in my Amanda test lab, while trying to sort out my issues with hardware vs. software compression, amdump didn't run on 5 July last week (I subsequently put Amanda on hold to prevent amdump from running throughout the rest of the week). So now I have a directory on the holding disk named 20060705, and I know that I'm really supposed to flush this to tape. However, I don't actually care about that backup, and just need to get rid of it. I noticed that amflush's man pages say: Amflush will look in the holding disks specified by the amanda.conf file in /etc/amanda/config for any non-empty Amanda work directories. It then prompts you to select a directory or to process all of the directories. The work directories in the holding disks are named by the date at the time amdump was run, e.g. 19910215. To get rid of that dump, can I simply delete that directory without harming Amanda or confusing anything...? Thank you. Joe -- View this message in context: http://www.nabble.com/using-amflush-tf1918197.html#a5251020 Sent from the Amanda - Users forum at Nabble.com.
Re: RE Compression usage
I got these replies by email, which I think clear it up for me. I'm still not sure about whether I can start from scratch by erasing the existing hardware-compressed tapes and then issuing mt -f /dev/nst0 defcompression 0 in crontab, but I'm going to try anyway:

Software compression is normally recommended when using amanda, as it allows amanda to better estimate how much will fit on a tape. It also produces smaller archives - usually. How to turn it on and off varies between mt implementations, but if that's what your manpage says. Also, it is best to either use it or not: if you switch over, you need to physically label the tape etc., otherwise restores won't be able to read the tape if you got the setting wrong for that tape... Martin Hepworth

and

You have to define the dump type to use for compression. If by switching between the two you mean changing the configuration between runs, then yes you can; it will all still be valid, but with different performance. I use hardware compression on servers where, either, they are busy and cannot additionally do the compression, or they have a very large dump to perform and compression would cause the backup to continue past the backup window (this second option is dependent on your network throughput, as it may actually be faster to compress and send the compressed data). I use software compression where the machine has plenty of time to do compression or the backup is small. On my machine, with a DLT8000, the following works:

mt -f /dev/nst0 compression off
mt -f /dev/nst0 compression on

Yes, after restarting, the tape drive will revert to its default unless you use a physical switch to disable it. If you have the spare processing power, then switch off the hardware compression and you will gain a bit of tape space. The already-written tapes will still be valid backups. Chris Lee

Joe Donner wrote: Sorry to be bothersome about this, but will the below work?

Joe Donner wrote: Thanks for your reply.
Do you mean that I will see the behaviour you're describing only if I'm reusing tapes with hardware-compressed data? I only have 3 tapes so far, so I don't mind erasing them. I find this slightly confusing. My understanding at this stage is that as long as I start with fresh tapes, and then use mt -f /dev/nst0 defcompression 0 to switch off hardware compression, I should be OK from then onwards?

Surely the argument about the tape drive switching into compression mode (when fed a hardware-compressed tape) then also works the other way round, i.e. when a tape drive has hardware compression enabled and is fed a non-hardware-compressed tape, it won't enable hardware compression?

I see that you're saying use stinit, but I've had a look at that, and at this stage it will only add more complexity to (for me) a rather complex situation. If I could get away with erasing 3 tapes, using mt -f /dev/nst0 defcompression 0 (maybe adding that into crontab for good measure), and it all works reasonably well, then I'll be happy for now. Will appreciate your thoughts, and thanks a lot for being so helpful. Joe

-- View this message in context: http://www.nabble.com/Compression-usage-tf1889370.html#a5213937 Sent from the Amanda - Users forum at Nabble.com.
LVM snapshots
Does anyone use or have knowledge of using LVM snapshots with Amanda backups? I believe it to be the same concept as Shadow Volume Copies in Windows 2003, and that is quite useful. A little bit of info here: http://arstechnica.com/articles/columns/linux/linux-20041013.ars

I'm just wondering what happens during the freeze - how does freezing all activity to and from the filesystem (to reduce the risk of problems) affect the system? One would imagine that disk writes are somehow queued up and complete when the file system is unfrozen again? Joe

-- View this message in context: http://www.nabble.com/LVM-snapshots-tf1905387.html#a5214340 Sent from the Amanda - Users forum at Nabble.com.
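For what it's worth, the freeze only lasts for the instant the snapshot is created; after that, writes to the origin volume proceed normally and the snapshot keeps copy-on-write copies of the overwritten blocks. The usual pattern is: snapshot, back up the snapshot volume, drop it. A rough sketch, with made-up volume-group and volume names (vg0, home), run as root:

```shell
# Create a 2G copy-on-write snapshot of /dev/vg0/home (names are examples):
lvcreate --size 2G --snapshot --name home-snap /dev/vg0/home

# Mount it read-only and point the backup (e.g. the Amanda DLE) at it:
mkdir -p /mnt/home-snap
mount -o ro /dev/vg0/home-snap /mnt/home-snap

# ... run the backup against /mnt/home-snap ...

# Tear down once the dump is done. A snapshot that fills up becomes invalid,
# so size it to cover the writes expected during the backup window.
umount /mnt/home-snap
lvremove -f /dev/vg0/home-snap
```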
Re: RE Compression usage
Sorry to be bothersome about this, but will the below work? Joe Donner wrote: Thanks for your reply. do you mean that I will see the behaviour you're describing only if I'm reusing tapes with hardware-compressed data? I only have 3 tapes so far, so don't mind erasing them. I find this slightly confusing. My understanding at this stage is that as long as I start with fresh tapes, and then use mt -f /dev/nst0 defcompression 0 to switch off hardware compression, I should be OK from then onwards? Surely the argument about the tape drive switching into compression mode (when fed a hardware-compressed tape) then also works the other way round, i.e. when a tape drive has hardware compression enabled, and is fed a non-hardware compressed tape, then it won't enable hardware compression? I see that you're saying use stinit, but I've had a look at that and at this stage it will only add more complexity to (for me) a rather complex situation. If I could get away with erasing 3 tapes, using mt -f /dev/nst0 defcompression 0 (maybe add that into crontab for good measure), and it all works reasonably well, then I'll be happy for now. Will appreciate your thoughts and thanks a lot for being so helpful. Joe -- View this message in context: http://www.nabble.com/Compression-usage-tf1889370.html#a5196849 Sent from the Amanda - Users forum at Nabble.com.
Re: RE Compression usage
Thanks to everyone for your replies. OK, so what I really wanted to do was to use software compression, and not hardware compression. Cyrille's suggestion sounds like what I need, but again I'm not sure whether using this command will mean that compression is turned off permanently. I'll look into the stinit command in the meantime...

Now I'm wondering about the tapes I've already used while hardware compression was still on. Obviously the tape drive will need to feed those tapes through its decompression mechanism to read them again. If I use the command as suggested by Cyrille, will that mean that the used tapes become unreadable, or that you have to manually turn compression on and off (I've read that the compression command overrides the defcompression one for the currently loaded tape)? Thanks. Joe

Cyrille Bollu wrote: [EMAIL PROTECTED] wrote on 04/07/2006 13:17:46:

Dear all, I just want to clarify something about compression: My understanding is that you can use either software or hardware compression, or can switch between the two if needed. What is generally, in your experience, the best of the two to use? To switch off hardware compression, I believe I should use:

mt -f /dev/nst0 compression 0

Does this command switch it off permanently until you use the following?:

mt -f /dev/nst0 compression 1

or will hardware compression switch back on, say, after a reboot?

AFAIK, there's another option. From the mt man pages:

defcompression (SCSI tapes) Set the default compression state. The value -1 disables the default compression. The compression state set by compression overrides the default until a new tape is inserted. Allowed only for the superuser.

So one should use:

mt -f /dev/nst0 defcompression 0

Cyrille

-- View this message in context: http://www.nabble.com/Compression-usage-tf1889370.html#a5178655 Sent from the Amanda - Users forum at Nabble.com.
Compression usage
Dear all, I just want to clarify something about compression: My understanding is that you can use either software or hardware compression, or can switch between the two if needed. What is generally, in your experience, the best of the two to use?

To switch off hardware compression, I believe I should use:

mt -f /dev/nst0 compression 0

Does this command switch it off permanently until you use the following?:

mt -f /dev/nst0 compression 1

or will hardware compression switch back on, say, after a reboot?

I've set up Amanda to use compression as follows, and it seems to work quite well:

define dumptype comp-tar {
    program GNUTAR
    compress fast
    index yes
    record yes
}

But it just dawned on me that I haven't taken into account that our HP SDLT 320 tape drive is probably using hardware compression, so now I'm wondering what's best to do. Thanks in advance. Joe

-- View this message in context: http://www.nabble.com/Compression-usage-tf1889370.html#a5165754 Sent from the Amanda - Users forum at Nabble.com.
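Putting the thread's answers together, one way to make "software compression only" stick across reboots is to set the drive's default rather than the per-tape state. A sketch of the relevant Linux mt-st commands (the exact keywords vary between mt implementations, so check your own man page; the @reboot crontab line is an assumption, any boot script would do):

```shell
# One-off: turn compression off for the currently loaded tape only
# (overrides the default until the next tape is inserted):
mt -f /dev/nst0 compression 0

# Set the default state applied whenever a new tape is inserted:
mt -f /dev/nst0 defcompression 0
```

To reapply the default after restarts, a crontab entry such as `@reboot mt -f /dev/nst0 defcompression 0` (run as root) is one option.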
amrecover: rewinding tape and restore directory
Dear all, I think I may finally have cracked Amanda, but there are two things not quite clear to me: 1. When you do an amrecover, do you HAVE to rewind the tape first? This isn't a problem as such, I'm just wondering whether or not it should work that way, i.e. in amanda.conf I've got specified tapedev /dev/nst0, which I understand to mean that you're using your tape drive as a no-rewind device (which I believe is sort of required by Amanda). 2. I create a restore directory on the Amanda server, and then run amrecover as root from that directory to test doing restores from tape. At the moment I'm backing up directories from two hosts - the backup server itself and a client linux machine. I've noticed that when I restore stuff into the restore directory for one client, and subsequently do a restore to the same directory for a second client, then the first client's restored files get deleted just before the second client's files are restored. In other words, the restore directory is cleared of all its contents, and then the second restore runs. Is this intended behaviour? I would think that in practice you'd set up your clients so that you could restore data straight back to the client into a temporary directory, as opposed to doing restores to the backup server and then copying the data back to the client. Will appreciate your thoughts on this. Regards, Joe -- View this message in context: http://www.nabble.com/amrecover%3A-rewinding-tape-and-restore-directory-tf1872516.html#a5118250 Sent from the Amanda - Users forum at Nabble.com.
Re: amrecover: rewinding tape and restore directory
Thanks Jon! Your reply was very helpful indeed, especially on point number 2! I do appreciate it. Regards, Joe Jon LaBadie wrote: On Fri, Jun 30, 2006 at 03:40:50AM -0700, Joe Donner (sent by Nabble.com) wrote: Dear all, I think I may finally have cracked Amanda, but there are two things not quite clear to me: 1. When you do an amrecover, do you HAVE to rewind the tape first? This isn't a problem as such, I'm just wondering whether or not it should work I believe that amanda expects the tape used for recovery to be at the start of the tape when you type y to continue. that way, i.e. in amanda.conf I've got specified tapedev /dev/nst0, which I understand to mean that you're using your tape drive as a no-rewind device (which I believe is sort of required by Amanda). Typically amanda rewinds the tape when it wants the tape at the beginning. The above amrecover scenario may be an exception. The use of the no-rewind device is particularly important during tape writing. Suppose amanda checks the tape label to see which tape is in the drive, finds it is the correct tape, then closes the tape device. If the tape auto-rewound on close, then when amanda started to write a DLE, it would overwrite the tape label. 2. I create a restore directory on the Amanda server, and then run amrecover as root from that directory to test doing restores from tape. At the moment I'm backing up directories from two hosts - the backup server itself and a client linux machine. I've noticed that when I restore stuff into the restore directory for one client, and subsequently do a restore to the same directory for a second client, then the first client's restored files get deleted just before the second client's files are restored. In other words, the restore directory is cleared of all its contents, and then the second restore runs. Is this intended behaviour? 
I would think that in practice you'd set up your clients so that you could restore data straight back to the client into a temporary directory, as opposed to doing restores to the backup server and then copying the data back to the client.

What many misunderstand is that amrecover tries to get the directory structure back to the state that existed at the date you specify for the recovery (setdate). This doesn't mean just getting back the files that were there then, but also eliminating those that were not there. So when you tried to recover a second, totally different client/DLE to the same directory, it found things from the first client/DLE that did not exist on the second client/DLE, and it eliminated them. Recovering each to its own separate empty directory is the way to go. When you recover directly to the original source tree, if you have set a date of a week ago, then you are going to be eliminating those things created/modified since that date.

-- Jon H. LaBadie              [EMAIL PROTECTED]
JG Computing
4455 Province Line Road       (609) 252-0159
Princeton, NJ 08540-4322      (609) 683-7220 (fax)

-- View this message in context: http://www.nabble.com/amrecover%3A-rewinding-tape-and-restore-directory-tf1872516.html#a5121473 Sent from the Amanda - Users forum at Nabble.com.
Re: Appropriate number of tapes in a set
Hi, thanks for your explanation. But I think maybe I'll just crawl around for a bit before trying to walk! It does sound interesting though - I'll keep it in mind for when I'm more comfortable with Amanda. Best regards, Joe -- View this message in context: http://www.nabble.com/Appropriate-number-of-tapes-in-a-set-tf1848286.html#a5062163 Sent from the Amanda - Users forum at Nabble.com.
Appropriate number of tapes in a set
Hi, can anyone please help me understand this: Apparently, you have to calculate how many tapes you need for your backup schedule as the number of tapes needed for every day the backup will run in your dumpcycle, plus one extra tape to avoid the risk of overwriting a full backup performed at the beginning of the previous cycle (quoted very loosely).

I want to:

1. Run a backup job once a day Monday to Friday (all my data will fit onto one tape),
2. Have four weeks' worth of historical backups, i.e. be able to restore data to any given point within the last four weeks, and
3. Permanently archive one monthly full backup.

So how would I calculate the appropriate number of tapes needed? 5 days * 4 weeks + 1 = 21 tapes?

If I don't force full backups, and therefore full backups are sort of scattered between different tapes, then I'll have to take out multiple tapes every month for archiving, right? How should that be managed?

What would appropriate values be for dumpcycle, runspercycle, and tapecycle? I'm thinking:

dumpcycle 7 days (to ensure a full backup at least once a week)
runspercycle 5 days (Monday to Friday every week)
tapecycle 21 (number of tapes in rotation assuming my calculation above is correct)

I'd really appreciate your advice on this type of backup scheme, because I'm struggling a bit to understand Amanda - they should have called her George or something :)

-- 
View this message in context: http://www.nabble.com/Appropriate-number-of-tapes-in-a-set-t1848286.html#a5045123
Sent from the Amanda - Users forum at Nabble.com.
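[Editor's sketch] The tape-count arithmetic in the question above works out like this (variable names are illustrative, not Amanda configuration keywords):

```shell
# 5 runs per week, 4 weeks of history, plus one spare tape so the
# oldest level-0 backup is never overwritten mid-rotation.
runs_per_week=5      # amdump runs Monday to Friday
weeks_retained=4     # four weeks' worth of historical backups
spare=1              # extra tape "for good measure"

echo $(( runs_per_week * weeks_retained + spare ))   # prints 21
```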
Re: Appropriate number of tapes in a set
Thanks very much for clarifying that to me. Yes, I meant that (at least at present) all data will fit onto one tape for every day of the week, i.e. if a full backup is done every day Monday to Friday, then Monday's full backup will fit onto one tape, Tuesday's will fit onto one tape, and so on.

About the second configuration for doing monthly archival backups:

1. I assume then that you'd have to do something once a month to prevent the normal Amanda backup job from running while the monthly full one is active? I seem to remember I read somewhere about placing a file called hold inside an Amanda directory to cause the normal daily run to pause so it doesn't interfere with your monthly job - and so that you don't have to modify your cron job each time?

2. In other words, if you have a second configuration for monthly archives, then you really just happily rotate your 21 tapes for the normal daily backups, and then replace them one at a time either when they die naturally or when they've reached their expiry date?

Thanks again - I think this has made things click somewhere in my head.

-- 
View this message in context: http://www.nabble.com/Appropriate-number-of-tapes-in-a-set-t1848286.html#a5045679
Sent from the Amanda - Users forum at Nabble.com.
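[Editor's sketch] The "hold" trick mentioned in point 1 can be sketched as below. To keep the commands safe to run anywhere, a temp directory stands in for the real config directory (on an actual server that would be something like /etc/amanda/DailySet1 - that path is an assumption; check your install):

```shell
# While a file named "hold" exists in the config directory, the daily
# amdump waits instead of running, so it cannot collide with a monthly
# archive run. CONF_DIR here is a stand-in temp dir, not the real path.
CONF_DIR=$(mktemp -d)

touch "$CONF_DIR/hold"    # daily run now pauses
ls "$CONF_DIR"            # shows the hold file

rm "$CONF_DIR/hold"       # monthly archive finished; daily run may proceed
rmdir "$CONF_DIR"
```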
Re: Appropriate number of tapes in a set
Matt, thank you very much for your advice - it has helped me out a great deal to understand what to do. Much appreciated!! Regards, Joe -- View this message in context: http://www.nabble.com/Appropriate-number-of-tapes-in-a-set-t1848286.html#a5046521 Sent from the Amanda - Users forum at Nabble.com.
Re: Appropriate number of tapes in a set
That's an interesting suggestion, but I don't really follow on how to do this in practice. Are you saying that you do daily backups just to disk, and only rely on monthly backups on tape which are then kept off-site? Or do you mean do backups to both disk and tape - to disk for fast recovery and to tape for safekeeping? I'd be very interested to understand this concept :) Thanks for the suggestion. Regards, Joe -- View this message in context: http://www.nabble.com/Appropriate-number-of-tapes-in-a-set-t1848286.html#a5050318 Sent from the Amanda - Users forum at Nabble.com.
Re: amandad process defunct
Hi and thanks for your replies. See below for the log file contents (21.06.06). It also shows the version (which I used simply because Red Hat had those rpms available on their web site - I didn't want to add the 'complexity' of having to compile the source myself). In my stupidity I don't see anything glaringly obvious that can be wrong.

Amanda is configured to use gnu-tar, and there's no hardware compression or encryption involved. I don't see why it should still show it's busy at 9am when the backup runs at 00:45 every morning, and only backs up about 100MB (this is just for testing purposes using virtual tapes). But this has happened twice now, and I simply don't know what to do or how to fix it without rebooting the test lab machine. I'm very new to Linux and Amanda has been making my head hurt, so please accept my apologies if I sound silly.

# cat amandad.20060621004501.debug
amandad: debug 1 pid 8021 ruid 33 euid 33: start at Wed Jun 21 00:45:01 2006
amandad: version 2.4.4p1
amandad: build: VERSION=Amanda-2.4.4p1
amandad:        BUILT_DATE=Mon Jun 20 14:39:39 EDT 2005
amandad:        BUILT_MACH=Linux porky.build.redhat.com 2.4.21-25.ELsmp #1 SMP Fri Nov 12 21:34:51 EST 2004 i686 i686 i386 GNU/Linux
amandad:        CC=i386-redhat-linux-gcc
amandad:        CONFIGURE_COMMAND='./configure' '--host=i386-redhat-linux' '--build=i386-redhat-linux' '--target=i386-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib' '--libexecdir=/usr/lib/amanda' '--localstatedir=/var/lib' '--sharedstatedir=/usr/com' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--enable-shared' '--with-index-server=localhost' '--with-gnutar-listdir=/var/lib/amanda/gnutar-lists' '--with-smbclient=/usr/bin/smbclient' '--with-amandahosts' '--with-user=amanda' '--with-group=disk' '--with-tmpdir=/var/log/amanda' '--with-gnutar=/bin/tar'
amandad: paths: bindir=/usr/bin sbindir=/usr/sbin
amandad:        libexecdir=/usr/lib/amanda mandir=/usr/share/man
amandad:        AMANDA_TMPDIR=/var/log/amanda
amandad:        AMANDA_DBGDIR=/var/log/amanda CONFIG_DIR=/etc/amanda
amandad:        DEV_PREFIX=/dev/ RDEV_PREFIX=/dev/r
amandad:        DUMP=/sbin/dump RESTORE=/sbin/restore
amandad:        SAMBA_CLIENT=/usr/bin/smbclient GNUTAR=/bin/tar
amandad:        COMPRESS_PATH=/usr/bin/gzip
amandad:        UNCOMPRESS_PATH=/usr/bin/gzip MAILER=/usr/bin/Mail
amandad:        listed_incr_dir=/var/lib/amanda/gnutar-lists
amandad: defs:  DEFAULT_SERVER=localhost DEFAULT_CONFIG=DailySet1
amandad:        DEFAULT_TAPE_SERVER=localhost
amandad:        DEFAULT_TAPE_DEVICE=/dev/null HAVE_MMAP HAVE_SYSVSHM
amandad:        LOCKING=POSIX_FCNTL SETPGRP_VOID DEBUG_CODE
amandad:        AMANDA_DEBUG_DAYS=4 BSD_SECURITY USE_AMANDAHOSTS
amandad:        CLIENT_LOGIN=amanda FORCE_USERID HAVE_GZIP
amandad:        COMPRESS_SUFFIX=.gz COMPRESS_FAST_OPT=--fast
amandad:        COMPRESS_BEST_OPT=--best UNCOMPRESS_OPT=-dc
amandad: time 0.000: got packet: Amanda 2.4 REQ HANDLE 001-D8690608 SEQ 1150847102 SECURITY USER amanda SERVICE noop OPTIONS features=feff9ffe0f;
amandad: time 0.000: sending ack: Amanda 2.4 ACK HANDLE 001-D8690608 SEQ 1150847102
amandad: time 0.005: bsd security: remote host hades.matrix-data.local user amanda local user amanda
amandad: time 0.005: amandahosts security check passed
amandad: time 0.005: running service noop
amandad: time 0.005: sending REP packet: Amanda 2.4 REP HANDLE 001-D8690608 SEQ 1150847102 OPTIONS features=feff9ffe0f;
amandad: time 0.021: got packet: Amanda 2.4 ACK HANDLE 001-D8690608 SEQ 1150847102
amandad: time 0.024: pid 8021 finish time Wed Jun 21 00:45:01 2006

-- 
View this message in context: http://www.nabble.com/amandad-process-defunct-t1825006.html#a4991274
Sent from the Amanda - Users forum at Nabble.com.
amandad process defunct
Hi, I'm running Amanda in a test lab, and every several days or so the backup job to virtual tape fails because a process called amandad becomes unresponsive on the backup client. When doing an amcheck from the Amanda server it tells me that amandad on the client is busy.

When having a look at the client with the ps command, it lists two different amandad processes, with one showing as being defunct. I cannot kill this process, but when I kill the other amandad process I can get rid of both - but then I have to reboot the client to get the backup job to work again.

Do you know how I can avoid doing this reboot, or what may be going on here? I'd appreciate your help as I'm pretty new to Amanda and Linux, but have been asked to look into Amanda as an alternative backup solution for our 5 Linux servers.

Thanks in advance!
Joe

-- 
View this message in context: http://www.nabble.com/amandad-process-defunct-t1825006.html#a4977961
Sent from the Amanda - Users forum at Nabble.com.
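[Editor's sketch] A generic way to inspect the situation described above with ps (not an Amanda-specific tool): a "Z" in the STAT column marks a zombie/defunct process, which has already exited and cannot be killed directly - only reaping or killing its parent clears it, which matches the behaviour reported in the post.

```shell
# Show pid, parent pid, state, and name for any amandad processes.
# The [a] bracket trick stops grep from matching its own command line.
ps -eo pid,ppid,stat,comm | grep '[a]mandad' || echo "no amandad processes"
```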
Re: tapetype definitions
Hmmm blimey...I find it all a bit complicated (but I hope that's due to me being rather new to Linux and Amanda and not me being stupid!). I'll give it another try and some test runs and see how it goes. Thank you very much for all your input and patience. Best regards, Joe -- View this message in context: http://www.nabble.com/tapetype-definitions-t1722903.html#a4711879 Sent from the Amanda - Users forum at Nabble.com.
tapetype definitions
Dear all, I've just run amtapetype -f /dev/nst0 for Amanda to generate a tapetype definition for an HP Storageworks SDLT 320 tape drive. The results came back as:

define tapetype unknown-tapetype {
    comment just produced by tapetype prog (hardware compression on)
    length 135040 mbytes
    filemark 39 kbytes
    speed 2272 kps
}

amtapetype did slightly complain about hardware compression being enabled, but I couldn't see a way of disabling that. Would you, in your experience, think that this tapetype definition seems accurate and acceptable to use for real backups? This is the last outstanding piece of the Amanda puzzle before putting Amanda into operation.

The tape drive's capacity is 160GB uncompressed and 320GB compressed, according to its specifications. So I'm not too sure about the "length 135040 mbytes" mentioned in the generated tapetype. Won't this cause Amanda to not use the drive's full capacity?

Any advice will be much appreciated.
Joe

-- 
View this message in context: http://www.nabble.com/tapetype-definitions-t1722903.html#a4680242
Sent from the Amanda - Users forum at Nabble.com.
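[Editor's sketch] For scale, the generated "length 135040 mbytes" can be converted to GB and compared with the drive's 160 GB native rating (simple arithmetic, assuming binary units, i.e. 1 GB = 1024 MB):

```shell
# Convert the amtapetype-measured length to GB and express it as a
# percentage of the drive's rated native (uncompressed) capacity.
length_mb=135040    # from the generated tapetype definition
rated_gb=160        # from the drive's specifications

awk -v mb="$length_mb" -v rated="$rated_gb" 'BEGIN {
    gb = mb / 1024
    printf "measured: %.1f GB (%.0f%% of rated native capacity)\n", gb, gb / rated * 100
}'
# prints: measured: 131.9 GB (82% of rated native capacity)
```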
Re: tapetype definitions
Firstly - thank you for the replies. Secondly - I'm now more confused than before!

Is my understanding correct then that I can conceivably use this tapetype definition, but that I probably shouldn't expect to be backing up more than approximately 160GB to a tape? Or, more to the point, approximately 135GB in this case? And that I should really disable compression (but that this isn't necessarily required) and accept the tape's capacity of around 160GB?

I'm new to Linux (Redhat ES 3 in our case) and Amanda, so wouldn't mind things being spelled out to me - if you wouldn't mind doing that.

-- 
View this message in context: http://www.nabble.com/tapetype-definitions-t1722903.html#a4681504
Sent from the Amanda - Users forum at Nabble.com.
Re: tapetype definitions
Or does it work this way: Amanda sends compressed data to the tape drive. The tape drive also compresses the data, and therefore actually expands it. Amanda doesn't know this. After Amanda has sent 135GB to the tape drive, the tape drive has actually written 160GB to the tape, and tells Amanda that it's full. Amanda therefore thinks the tape drive's capacity is only 135GB. Is that correct?

So if I use this tapetype definition, will Amanda in future only ever send 135GB to the drive? If I disable hardware compression, then I will always get a minimum of 135GB worth of capacity. If I modify the tapetype definition to contain, for example, 155000 mbytes, then I will get a minimum of approximately 150GB worth of capacity, but depending on how compressible the data is, it may in fact be a lot more than that. Does that make sense?

Furthermore, if Amanda can be configured to send uncompressed data to the drive, and the drive has hardware compression on, then I could expect a capacity of around 320GB, depending on how compressible the data is?

My head hurts...

-- 
View this message in context: http://www.nabble.com/tapetype-definitions-t1722903.html#a4682599
Sent from the Amanda - Users forum at Nabble.com.
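[Editor's sketch] If the scenario described above is right, the implied expansion factor can be worked out from the post's round numbers (a rough arithmetic sketch, not a measured figure):

```shell
# If the drive reported "full" after Amanda had sent 135 GB of already-gzip'd
# data to a 160 GB (native) tape, then each logical GB consumed about
# 160/135 of a GB of tape - i.e. hardware compression expanded the
# pre-compressed stream by roughly 19%.
awk 'BEGIN { printf "%.2f\n", 160 / 135 }'   # prints 1.19
```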