SUMMARY: Re: onstream adr50 problems
Hello everyone who is considering using the OnStream ADR50 drive with amanda. I won't keep you waiting any longer, since OnStream support has not gotten back to me this time: the ADR50 drive, which is supposed to be a fully Linux-compatible SCSI tape drive (it should work with the standard st SCSI tape driver), does *not* work with amanda with firmware version 2.39.

Although I posted the problem description some time ago in this thread, I include it below again for completeness. As the description shows, we spent quite a while tracking the problem down. The problem is timeout related. The ADR50 seems to get into trouble when it runs out of data while writing a file and then receives the command to write a file mark. The drive proceeds to write the data, but when reading the tape back, only the first file is readable; after the first file has been read, the device reports a media error. Amanda *always* writes the file mark just before starting to write the next file, not right after a file has been written completely. The delay in question happens after writing the tape label, during the estimate collection phase; 90 seconds are already enough to trigger the problem.

OnStream support did respond to my inquiry twice. The second email I got was cc'ed to me: in it, a staff member guessed the problem was firmware related and asked someone else to look into it. That was a month ago. I asked them when they would look into it, but so far I haven't heard back. I agree the problem is likely to be firmware related. The firmware version in question is 2.39; there is currently no newer version available.

We had asked on the amanda-users list for people using the ADR50 drive with amanda. Only one person responded, saying that after trying the ADR drive, they had returned it and are now using a DAT drive instead. I will follow up to myself here if I hear anything new about the problem.
Greetings, Moritz

Problem description follows ---8<---

We have a problem with our new OnStream ADR50 tape streamer.

Brief problem description: the ADR50 device writes data which it cannot read back afterwards under certain conditions. Those conditions are described below.

Hardware description: AMD Duron 750, Asus A7V133 main board, 256 MB RAM, Adaptec 29160N SCSI controller. Internal LVD SCSI bus with 2 devices: IBM 20 GB HDD (SCSI id 0) and internal OnStream ADR50 tape device (SCSI id 5), firmware rev. 2.39, S/N EA21J290564. Other SCSI bus (narrow, 50-pin, internal): CD-ROM drive (SCSI id 6).

Software description: OS: Red Hat Linux 7.1. Kernels: 2.4.2 and 2.4.9 (both Red Hat patched versions; the problem appears with both). Tried Adaptec SCSI driver versions 6.1.7 (w/ kernel 2.4.2), 6.2.1 and 6.2.4 (w/ kernel 2.4.9); the problem appears with all of them.

Description of problem: although the problem initially appeared when using the amanda backup software, it could be reproduced using a small C program which simulates what amanda does. The program performs the following steps:

1. open the tape device (given on the command line) in read/write mode (it remains open for all following steps)
2. rewind the tape
3. read 32 KBytes of data from the tape
4. rewind again
5. write 32 KBytes of data
6. wait for 120 seconds
7. write an end-of-file mark
8. write another 1024 KBytes of data
9. write an end-of-file mark
10. rewind the tape
11. read 32 KBytes of data
12. try to read 32 KBytes of data, expecting to get 0 bytes but to pass the file mark
13. read 32 KBytes of data (file 2)

When doing this, step 13 fails reproducibly: the program gets an I/O error. Note that the no-rewind tape device (/dev/nst0) was used in all tests. The most interesting part is that everything works fine when the program skips step 6 (the 120 second wait).
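For reference, roughly the same sequence can be driven from the shell with mt and dd instead of the C program. This is only a sketch: the C program keeps a single file descriptor open across all steps, while the shell version reopens the device for every command, so it is an approximation. DRYRUN defaults to echoing the commands instead of touching a tape; set it empty to really run them.

```sh
#!/bin/sh
# Sketch: the 13 test steps via mt/dd instead of the C program.
# DRYRUN defaults to "echo" so the commands are only printed;
# set DRYRUN= (empty) to actually run them against a tape.
TAPE=/dev/nst0
DRYRUN=${DRYRUN-echo}

$DRYRUN mt -f $TAPE rewind                          # step 2
$DRYRUN dd if=$TAPE of=/dev/null bs=32k count=1     # step 3: read 32 KB
$DRYRUN mt -f $TAPE rewind                          # step 4
$DRYRUN dd if=/dev/zero of=$TAPE bs=32k count=1     # step 5: write 32 KB
$DRYRUN sleep 120                                   # step 6: the trigger
$DRYRUN mt -f $TAPE weof 1                          # step 7: filemark
$DRYRUN dd if=/dev/zero of=$TAPE bs=32k count=32    # step 8: write 1024 KB
$DRYRUN mt -f $TAPE weof 1                          # step 9
$DRYRUN mt -f $TAPE rewind                          # step 10
$DRYRUN dd if=$TAPE of=/dev/null bs=32k count=1     # step 11
$DRYRUN dd if=$TAPE of=/dev/null bs=32k count=1     # step 12: 0 bytes, passes filemark
$DRYRUN dd if=$TAPE of=/dev/null bs=32k count=32    # step 13: fails on the ADR50
```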
What I have tried without success:
* replaced the SCSI controller
* different kernel versions (see above)
* different new SCSI drivers (aic7xxx, see above)

Full details follow below. This is a transcript of a typical test program session:

---8<---
[root@hl tapetest]# ./test1 /dev/nst0
test1: rewind... success
test1: reading 32768 bytes using 32768 byte blocks... + success
test1: rewind... success
test1: writing 32768 bytes using 32768 byte blocks... + success
test1: sleeping for 120 s... woke up.
test1: write filemark... success
test1: writing 1048576 bytes using 32768 byte blocks... success
test1: write filemark... success
test1: rewind... success
test1: reading 32768 bytes using 32768 byte blocks... + success
test1: skipping file mark... done.
test1: reading 1048576 bytes using 32768
access as operator not allowed from....
Hi, I'm trying to get the amanda client running on a FreeBSD 4.3-RELEASE machine. I fetched the port and did a

./configure \
  --with-user=operator \
  --with-group=operator \
  --with-amandahosts \
  --with-config=daily \
  --with-gnutar=/bin/tar \
  --with-tape-server=milestonenfs.system \
  --with-configdir=/etc/amanda \
  --without-server \
  --with-index-server=milestonenfs.system

and make install. Everything went smoothly, but an amcheck -c on the server fails regularly with

Amanda Backup Client Hosts Check
ERROR: strawberry.system: [access as operator not allowed from [EMAIL PROTECTED]] open of //.amandahosts failed
Client check: 1 host checked in 0.033 seconds, 1 problem found
(brought to you by Amanda 2.4.2p2)

I should mention that the server runs p2 and the client p1, if that makes any difference.

/usr/home/operator/.amandahosts:
milestonenfs.system operator

/etc/inetd.conf:
amanda dgram udp wait operator /usr/local/libexec/amandad amandad

/etc/hosts.allow:
amandad : 192.168.1.0/255.255.255.240 : allow

On a Linux client with the same configuration it works perfectly. Thanks for any suggestions, Tom
Re: Amanda 2.4.2 and Onstream ADR 50 Tape?
On Thu, 2002-01-03 at 13:07, Moritz Both wrote:
Henning, have you ever been able to actually *restore* files from amanda tapes using this device? Which firmware version does it have? See also my email on the list from half an hour ago. Greetings, Moritz

Hi, actually yes, I have. I did restore some backups a while ago, but it _could_ be that the tape drive was connected to a Sym53c875 at the time. At the moment I am experiencing exactly the same problem that you describe (on an Adaptec UW controller). I currently have a brand new advance-exchange unit directly from OnStream (while my local dealer was a complete failure at getting any support, the people on the OnStream hotline are very nice, competent and _really_ helpful; for their support I can really recommend OnStream). This brand new tape drive shows exactly the same symptoms. So I consider this either a systematic failure in our (quite similar) HW setups (I use RH6.2+ with a 2.2 kernel) or a real issue with the drive firmware, which would be very sad. I'm right now in the process of swapping SCSI controllers; I will keep you informed.

Regards
Henning
-- 
Dipl.-Inf. (Univ.) Henning P. Schmiedehausen -- Geschaeftsfuehrer
INTERMETA - Gesellschaft fuer Mehrwertdienste mbH
[EMAIL PROTECTED]  Am Schwabachgrund 22  Fon.: 09131 / 50654-0
[EMAIL PROTECTED]  D-91054 Buckenhof     Fax.: 09131 / 50654-20
Re: access as operator not allowed from....
On Thu, Jan 03, 2002 at 03:13:10PM +0100, Tom Beer wrote: ERROR: strawberry.system: [access as operator not allowed from [EMAIL PROTECTED]] open of //.amandahosts failed [...] /usr/home/operator/.amandahosts We've always had to put .amandahosts in /, for whatever reason. -- Contemplate the mangled bodies of your countrymen, and then say, What should be the reward of such sacrifices? ... If ye love wealth better than liberty, the tranquillity of servitude than the animating contest of freedom -- go from us in peace. Crouch down and lick the hands which feed you. May your chains sit lightly upon you. -- Samuel Adams, 1776 CB461C61 8AFC E3A8 7CE5 9023 B35D C26A D849 1F6E CB46 1C61
Re: How to use xfsdump with Amanda?
In a message dated: Thu, 03 Jan 2002 11:08:35 +1100, Ben Wong said:
Hi, I have several partitions using XFS and would like to ask how I can use xfsdump with Amanda? The backup server runs on Debian Linux, and the Debian packages amanda-server, amanda-common and amanda-client are installed.

You should be able to just configure, compile, and install the amanda client software on that system and it should work. The ./configure script should look for xfsdump and note its location for use with xfs partitions. I didn't have to do anything special for my system running xfs under Linux, other than create a symlink to where amanda was looking for xfsdump (IRIX places the binary in a different location than it ends up in under Linux, and configure didn't detect it). But that's since been fixed, IIRC.
-- 
Seeya, Paul
God Bless America! ...we don't need to be perfect to be the best around, and we never stop trying to be better. Tom Clancy, The Bear and The Dragon
Can't restore from Ultrium: changing volumes on pipe input, abort?
Since attempting to replace our old DDS3 tape drives with HP Ultriums, Amanda backups haven't worked properly. The HP tape drives themselves seem OK when we back things up to them directly; however, when we use Amanda, the backups seem to work properly but restores fail.

In slightly more detail: we have a Sun Ultra E250 running Solaris 2.6. It has an old DDS3 tape drive and an HP SureStore Ultrium 230 tape drive, both external. Amanda 2.3.0 is installed, and we've used it happily for some time with the DDS3 drives. We can write data to Ultrium tapes with tar or ufsdump, and read it back with tar or ufsrestore. It seems to be only with Amanda that problems occur.

Restoring files from one backup on an Amanda tape is usually done with this command:

amrestore -p /dev/rmt/0n <machine name> <disk> | ufsrestore -if -

When this is tried with one of the Ultrium backup tapes, the restore proceeds as normal: I choose the files to be restored, type extract, and ufsrestore successfully restores some of the files from the tape; but before finishing the restore, it stops and gives me the message

changing volumes on pipe input abort? [yn]

I tried asking Sun, thinking that this might be a Solaris issue (before I noticed that the problem only occurs with Amanda-written ufsdump tapes), but Sun hasn't heard of this problem, and says that it has never heard of Amanda and doesn't support it. Has anyone here come across a problem like this, or do you know what might be going wrong?

I'm not sure that I should trust the tapetype definition that I've been using - the second of the definitions below. Does anyone have a better one?

define tapetype Ultrium {
    comment "HP Ultrium 2300 LTO drive, native"
    length 101376 mbytes
    filemark 0 kbytes
    speed 13334 kbytes
}

define tapetype Ultrium-compressed {
    comment "HP Ultrium 2300 LTO drive using compression"
    length 16 mbytes
    filemark 0 kbytes
    speed 13334 kbytes
}

If more information would help, just ask, and I'll try to supply it.
Happy New Year, -- -- Chris Cooke. Division of Informatics, University of Edinburgh, Scotland.
Re: access as operator not allowed from....
On Thu, Jan 03, 2002 at 03:13:10PM +0100, Tom Beer wrote:
ERROR: strawberry.system: [access as operator not allowed from [EMAIL PROTECTED]] open of //.amandahosts failed
[...]
/usr/home/operator/.amandahosts

We've always had to put .amandahosts in /, for whatever reason.

Ok, I've done this, and the immediate result is

Amanda Backup Client Hosts Check
ERROR: strawberry.system: [can not read/write /usr/local/var/amanda/gnutar-lists/.: No such file or directory]
Client check: 1 host checked in 0.097 seconds, 1 problem found

If I do a 777 on the whole tree I get

Amanda Backup Client Hosts Check
ERROR: strawberry.system: [can not read/write /usr/local/var/amanda/gnutar-lists/.: Permission denied]
Client check: 1 host checked in 0.109 seconds, 1 problem found

or vice versa ;-|
Problems with dumps
Hi all, I'm having trouble getting one of my clients backed up. There are 13 file systems on the client which need to be dumped, totalling about 12GB of data. I'm getting error messages like the following for several of the file systems:

hacluster1 /dev/sda12 lev 0 FAILED [dumps too big, but cannot incremental dump skip-incr disk]

I know that amanda thinks the dumps are too big and is failing these file systems because I've disallowed incremental backups. However, I've also specified the use of 2 tapes for the backups, and amanda doesn't seem to be filling both:

                          Total      Full     Daily
Estimate Time (hrs:min)    0:10
Run Time (hrs:min)         9:26
Dump Time (hrs:min)        7:51      7:49      0:02
Output Size (meg)       54711.4   54709.1       2.2
Original Size (meg)     89777.2   89755.3      22.0
Avg Compressed Size (%)    60.9      61.0      10.2   (level:#disks ...)
Filesystems Dumped           33        28         5   (1:5)
Avg Dump Rate (k/s)      1981.3    1991.6      15.7
Tape Time (hrs:min)        6:50      6:50      0:00
Tape Size (meg)         54712.4   54710.0       2.4
Tape Used (%)             156.3     156.3       0.0   (level:#disks ...)
Filesystems Taped            33        28         5   (1:5)

From the 'Tape Used' it appears that amanda is only filling 50% of the second tape. The 'Tape Size' seems to indicate I'm only writing about 55GB worth of tape. I'm using a DLT7000 drive with DLT4 tapes; I should be able to get 70GB worth of data across 2 tapes, no? So, by my calculations, I should be able to get another 15GB onto the second tape, which would be fine, since less than 12GB is currently failing. Any ideas? Thanks,
-- 
Seeya, Paul
God Bless America! ...we don't need to be perfect to be the best around, and we never stop trying to be better. Tom Clancy, The Bear and The Dragon
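A quick arithmetic check on the figures in that report (a sketch only, assuming the 'Tape Used' percentage is measured against the length configured in the tapetype):

```sh
# back out the tapetype length implied by the report:
# Tape Size / (Tape Used / 100) = configured length per tape
awk 'BEGIN { printf "%.1f mbytes\n", 54712.4 / 1.563 }'
# prints: 35004.7 mbytes
```

35004.7 MB is almost exactly the 35 GB native capacity of one DLT IV cartridge in a DLT7000, which would mean the tapetype length describes a single tape and 156.3% simply means about 1.5 tapes were consumed. It may be worth checking this against the length and runtapes settings in the config.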
amanda,Lotus Notes and win2000
Is anyone out there backing up a Lotus Notes server (on Win 2k) with amanda? Any successes or failures? Also, how much success or failure has anyone had with restoring just about everything on a Windows box? I know backed-up data is easy to restore, but for most programs you need to re-install. -Hussain
Re: strange amanda header
You are right, but I have realized that many tapes have the same error ("let" instead of "lev") or gzip corrupted-data errors, and never a tape header error. I tested the drive with tar (1.13.17, 1.13.19 and 1.13.25), cpio and dd (on old and new tapes) and it seems to work properly. Could the problem be an exhausted 6-tape set? Why do dumps terminate without any I/O error? Now, having changed some tapes and upgraded tar to 1.13.25 and gzip to 1.3.2 (something I already had to do), amrestoring tapes seems to work, but I'm not sure of the problem. Thanks, Alessandro

----- Original Message -----
From: Jean-Louis Martineau [EMAIL PROTECTED]
To: Alessandro Prete [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, January 02, 2002 10:57 PM
Subject: Re: strange amanda header

On Wed, Jan 02, 2002 at 09:52:04PM +0100, Alessandro Prete wrote:
Someone has never had a similar problem?
amrestore: 0: skipping start of tape: date 20011213 label Daily1-CdC-001
amrestore: strange amanda header: AMANDA: FILE 20011211 leonardo / let 1 comp .gz program /bin/gtar

It should be:

AMANDA: FILE 20011211 leonardo / lev 1 comp .gz program /bin/gtar
                                   ^

It looks like a memory bug; one bit is flipped.

Jean-Louis
-- 
Jean-Louis Martineau email: [EMAIL PROTECTED]
Departement IRO, Universite de Montreal
C.P. 6128, Succ. CENTRE-VILLE  Tel: (514) 343-6111 ext. 3529
Montreal, Canada, H3C 3J7  Fax: (514) 343-5834
Re: Can't restore from Ultrium: changing volumes on pipe input, abort?
On Thu, Jan 03, 2002 at 03:20:02PM +0000, Chris Cooke wrote:
Restoring files from one backup on the Amanda tape is usually done with this command: amrestore -p /dev/rmt/0n machine name disk | ufsrestore -if - [...] before finishing the restore, it stops and gives me the message "changing volumes on pipe input abort? [yn]" [...] Has anyone here come across a problem like this, or do you know what might be going wrong?

Amanda 2.4.2p2, Solaris 2.6 server, Solaris 2.6 client, DLT-7000, file system size ~25GB. You're not the only person to see this. If you search the amanda-users archive for "changing volumes on pipe input", you'll find a not-terribly-cheerful thread from last May. I've seen it once, so it appears to be intermittent for me. In my case, I got all the files, but 'ufsrestore i' and 'ufsrestore x' failed with your error message, so I didn't get proper directory owner/group/permissions. On the other hand, 'ufsrestore r' did work perfectly, so I'm not quite as frightened as I would be otherwise. The trials I ran were:

Trial                       Results
-----                       -------
amrecover                   Fail with "changing volumes on pipe input"
amrestore to file, then:
  ufsrestore if             Fail with "Specify next volume #."
  ufsrestore xf             Fail with "Specify next volume #."
  ufsrestore rf             Worked perfectly.

It's not clear to me if this is a ufsdump problem, a ufsrestore problem, a taper problem, or some strange combination. I'm moving everything to Solaris 8 in the next two months, and have fantasized that the issue will go away; though the amanda-users archive is not terribly optimistic about that, either. :-)
-- 
Jay Lessert [EMAIL PROTECTED] Accelerant Networks Inc.
(voice) 1.503.439.3461    Beaverton OR, USA    (fax) 1.503.466-9472
Re: strange amanda header
You should probably double-check your hardware. I once had a similar error where roughly every 2,000,000,000th double word on the tape was corrupted. It turned out to be a bad mainboard (Asus A7V133 PCI problem). Run amverify regularly for a while; gzip will report this kind of error (crc error). Greetings, Moritz

You are right, but I have realized that many tapes have the same error ("let" instead of "lev") or gzip corrupted-data errors, and never a tape header error. I tested the drive with tar (1.13.17, 1.13.19 and 1.13.25), cpio and dd (on old and new tapes) and it seems to work properly. Could the problem be an exhausted 6-tape set? Why do dumps terminate without any I/O error? Now, having changed some tapes and upgraded tar to 1.13.25 and gzip to 1.3.2 (something I already had to do), amrestoring tapes seems to work, but I'm not sure of the problem. [...]
RE: Problems with dumps
Paul,

A couple of things might help track down where the problem is coming from. First, if you can add a third tape to a run, that will show whether or not the problem is AMANDA's tape size estimate. Next, do you have enough holding disk space for that backup? You can test this by configuring that entry in the disklist to use a dumptype that does not go to the holding disk. That should help narrow down where the limitation is.

Paul

-----Original Message-----
From: Paul Lussier [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 03, 2002 11:35 AM
To: [EMAIL PROTECTED]
Subject: Problems with dumps

Hi all, I'm having trouble getting one of my clients backed up. There are 13 file systems on the client which need to be dumped, totalling about 12GB of data. I'm getting error messages like the following for several of the file systems:

hacluster1 /dev/sda12 lev 0 FAILED [dumps too big, but cannot incremental dump skip-incr disk]

[... rest of the original message, including the statistics, quoted in full above ...]
about amanda holding disk...
Hi,

Since my incremental backup is not so big (about 20G per day), I want amanda to put the daily incremental backup on the holding disk first, and then after 3 or 4 days flush it to one tape (100G capacity). That means the holding disk will hold 3 or 4 days of incremental backups. How can I do this in the configuration file, or which command should I run?

Another rookie question: currently I use a Sony AIT2 50/100 tape device, and each time I run amdump manually to do the backup. Now I want amanda to do it automatically. But amanda automatically schedules full/incremental dumps to maintain balanced daily runtimes and tape usage. If I do a full backup each month, and the backup machine is totally down in the middle of the month, does that mean the only backup I have is the backup of last month (since this month's full backup is not finished)?

Thank you very much, bow...
Re: about amanda holding disk...
At 14:26 03-01-2002 -0500, Dengfeng Liu wrote:
Since my incremental backup is not so big (about 20G) per day, I want amanda to put the daily incremental backup on the holding disk first, and then after 3 or 4 days flush it to one tape (100G capacity). That means the holding disk will have 3 or 4 days of incremental backup. How can I do this in the configuration file, or which command should I run? Another rookie question is: currently I use a Sony AIT2 50/100 tape device; each time I run amdump manually to do the backup. Now I want amanda to do it automatically.

Starting with the second question: run amdump as a cron job, or with a similar tool that lets you run commands at a specific time. With this working, you don't have to do anything special to get your dumps onto the holding disk: if you don't put a tape in the drive, amdump will complain a bit and dump to the holding disk. You can then flush to tape when appropriate.

But, Amanda automatically schedules full/incremental dumps to maintain balanced daily runtimes and tape usage. If I do a full backup each month, then the backup machine is totally down in the middle of the month, does that mean the only backup I have is the backup of last month (since this month's full backup is not finished)?

As far as I know, no: you have what has been backed up during the first half of the month. Not knowing the details, I would guess that amanda gives priority to new and changed files so the damage is minimised - but others on this list will know better...

Kasper
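Concretely, the pattern described above might look like this; the paths, times, and config name are assumptions to adjust for your install:

```sh
# crontab entry for the amanda user: run the dump every night at 00:45.
# With no tape in the drive, amdump writes to the holding disk and
# merely complains about the missing tape.
#   45 0 * * *  /usr/local/sbin/amdump daily
#
# Days later, with a tape loaded, write out everything that accumulated:
#   /usr/local/sbin/amflush daily
```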
Getting the current fs position
I'm using amanda 2.4.2p2 with a patch to append backups to the same tape. Everything works well, and I have managed to retrieve files using amrecover. The problem is that after running that command, I'm guessing I need to reset the position of the tape to one file beyond the last filesystem written. Is there a command to do this? I looked around, and it would seem amadmin with the balance option should return that value, but it does not return the total number of filesystems written to tape to date. So how can I find this information?

Marc - Sitepak
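If the goal is simply to reposition the tape past the last file before the next append, the st driver can seek to end-of-data itself, without knowing the file count. A sketch (GNU mt spells the operation eod; Solaris mt calls it eom; DRYRUN defaults to printing the commands rather than moving the tape):

```sh
#!/bin/sh
# Sketch: skip to the end of recorded data so the next write appends
# after the last filesystem on the tape. DRYRUN=echo just prints the
# commands; set DRYRUN= (empty) to really move the tape.
TAPE=${TAPE:-/dev/nst0}
DRYRUN=${DRYRUN-echo}

$DRYRUN mt -f $TAPE rewind
$DRYRUN mt -f $TAPE eod     # GNU mt; on Solaris use "mt eom"
```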
Running programs during client execution?
I'm a new subscriber, so apologies in advance if this is an FAQ; I was unable to find an answer on the site, and I'll happily take redirection to existing solutions. I'm running a large amanda installation without any problems whatsoever, but with a few desires. The largest of these is client-side backup customization. For example, I would like to write a script to perform CVS write locks before backing up our CVS tree (which is on a partition by itself). I know how to do the locking, but the question is: how do I get my own program to run during this process? I'd love to hear there's a way to do this... I don't want to just wrap the whole amandad, because I'd like to make sure that the scripts only get run during the backup process, NOT during amcheck hits or spurious connections from curious users. Suggestions or comments? :)

TIA, -mh.
-- 
Mark Hazen DataBuilt, Inc. (843) 836-2101 Ext. 251
They may forget what you said, but they will never forget how you made them feel. --Carl W. Buechner
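One approach people use for this (a sketch only, not tested against a real amanda install): point --with-gnutar at a small wrapper that takes the lock around the real dump. The estimate-pass detection below relies on amanda's sendsize invoking gtar with --file /dev/null, which should be verified against your amanda version; the lock functions and the stubbed REAL_TAR are placeholders.

```sh
#!/bin/sh
# Sketch of a GNUTAR wrapper, written as a function so the logic can be
# exercised inline. REAL_TAR is stubbed with "echo tar" here for
# demonstration -- in production it would be the real /bin/tar, and the
# lock functions would run your actual CVS write-lock commands.
REAL_TAR=${REAL_TAR:-echo tar}

take_lock()    { echo "locking CVS tree"; }      # placeholder
release_lock() { echo "unlocking CVS tree"; }    # placeholder

amanda_tar() {
    case " $* " in
        # estimate pass: sendsize sizes the dump with --file /dev/null,
        # so no lock is needed (check this detail on your version!)
        *" --file /dev/null "*) $REAL_TAR "$@"; return $? ;;
    esac
    take_lock
    $REAL_TAR "$@"; status=$?
    release_lock
    return $status
}

amanda_tar --create --file /dev/null --directory /cvs .   # estimate: no lock
amanda_tar --create --file - --directory /cvs .           # real dump: locked
```

This keeps the hook scoped to actual dumps, so amcheck probes and stray connections never trigger the lock, which was the stated requirement.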
Re: Amanda 2.4.2 and Onstream ADR 50 Tape?
On Thu, 2002-01-03 at 13:07, Moritz Both wrote:
Henning, have you ever been able to actually *restore* files from amanda tapes using this device? Which firmware version does it have?

Hi,

[ There is a remotely Amanda-specific reference much further below. But as some Amanda people seem to have an OnStream tape drive (or even an ADR50), I keep the amanda-users list as Cc on this mail. If you have an ADR50 and want to stay informed even if we take this discussion away from amanda-users, please write me a mail. Thanks. ]

It doesn't seem to be as easy as we thought. I fetched your test program and compiled it (I called it adr50).

My HW: Intel Celeron 466 SMP (two processors), 512 MB RAM. One Adaptec 2940UW controller, exclusively for an external ADR50 tape (st0); one Symbios Logic 53c875-based (DawiControl 2976UW) controller, exclusively for another external ADR50 tape (st2). All disks are on a third controller. I run a RH 6.2 system + patches, kernel 2.2.19, basically a RH vendor kernel + AA patches + IDE driver + some goodies. ;-)

Both tapes are set in the BIOS to Async/Wide transfers, as recommended in various articles about the ADR50. For the AIC I'm sure that this is also the case in the kernel; I'm not sure about the Symbios, because I found the line

Jan 3 19:49:00 babsi kernel: ncr53c875-1-3,*: FAST-20 WIDE SCSI 40.0 MB/s (50 ns, offset 15)

in my logs, which seems to contradict the BIOS setting. So I do primary testing with the AIC and then try to reproduce with the Symbios.

# cat /proc/scsi/aic7xxx/0 | tail
(scsi0:0:4:0) Device using Wide/Async transfers.
Transinfo settings: current(0/0/1/0), goal(255/0/1/0), user(0/0/1/0)
Total transfers 270 (105 reads and 165 writes)
           2K   2K+   4K+   8K+  16K+  32K+  64K+  128K+
Reads:      0     0     0     0     0   105     0      0
Writes:     0     0     0     0     0   165     0      0

I'm using st0 on the Adaptec 2940UW. This is a brand new tape drive directly from OnStream, which I got as an advance replacement for my own tape. I use two of my normal tapes, each with 5-10 cycles of wear on it.
The tapes are kept in a climate-controlled environment which is dust-free (or in other words: the tapes are as good as new. ;-) )

Before testing:

# mt -f /dev/nst0 stoptions scsi2log

Test #1: running on an erased tape

# mt -f /dev/nst0 erase
# ./adr50 /dev/nst0
test1: rewind... success
test1: reading 32768 bytes using 32768 byte blocks... Input/output error
[ that one is ok. The tape has been erased; you can't read from an erased tape ]
test1: rewind... success
test1: writing 32768 bytes using 32768 byte blocks... + success
test1: sleeping for 120 s... woke up.
test1: write filemark... success
test1: writing 1048576 bytes using 32768 byte blocks... success
test1: write filemark... success
test1: rewind... success
test1: reading 32768 bytes using 32768 byte blocks... + success
test1: skipping file mark... done.
test1: reading 1048576 bytes using 32768 byte blocks... success

=> worked fine!

Test #2, directly after Test #1, same tape:

# ./adr50 /dev/nst0
test1: rewind... success
test1: reading 32768 bytes using 32768 byte blocks... + success
test1: rewind... success
test1: writing 32768 bytes using 32768 byte blocks... + success
test1: sleeping for 120 s... woke up.
test1: write filemark... success
test1: writing 1048576 bytes using 32768 byte blocks... success
test1: write filemark... success
test1: rewind... success
test1: reading 32768 bytes using 32768 byte blocks... + success
test1: skipping file mark... [ INSERT A LONG, LONG, LONG PAUSE HERE, 10-15 minutes! ]
test1: reading 1048576 bytes using 32768 byte blocks... Input/output error

and from the kernel:

Jan 3 23:19:46 babsi kernel: st0: Error with sense data: Info fld=0x40, Current st09:00: sense key Medium Error
Jan 3 23:19:46 babsi kernel: Additional sense indicates Unrecovered read error
Jan 3 23:19:46 babsi kernel: st0: Error with sense data: Info fld=0x40, Current st09:00: sense key Medium Error
Jan 3 23:19:46 babsi kernel: Additional sense indicates Unrecovered read error

=> Did not work!
Test #3, directly after Test #2, same tape:

# mt -f /dev/nst0 rewind
# mt -f /dev/nst0 erase
# ./adr50 /dev/nst0
test1: rewind... success
test1: reading 32768 bytes using 32768 byte blocks... Input/output error
test1: rewind... success
test1: writing 32768 bytes using 32768 byte blocks... + success
test1: sleeping for 120 s... woke up.
test1: write filemark... success
test1: writing 1048576 bytes using 32768 byte blocks... success
test1: write filemark... success
test1: rewind... success
test1: reading 32768 bytes using 32768 byte blocks... + success
test1: skipping file mark... done.
test1: reading 1048576 bytes using 32768 byte blocks...
amverify and grep
Hello, after upgrading to SuSE 7.3, I found that my amverify script (amanda-2.4.1p1) did not work properly - it told me that VOLUME is "matches" and Date is "file":

Volume matches, Date file

(actually, the first time I saw this I really thought the volume matched a "date file" - this may indeed "mean" something, especially if the first time you happen to use amverify in the past few months is after the refreshing experience of an upgrade of 400 packages... ;-)

This comes from the statement

report "Volume $VOLUME, Date $DWRITTEN"

so clearly VOLUME and DWRITTEN were computed wrongly. But how, since I did not change the script at all? The answer is that, probably due to the upgrade from SuSE 6.2 to 7.3, the behaviour of grep has changed. For binary files, grep now says

Binary file xxx matches

instead of showing the line where the match happens. This may be good on other occasions, but for amverify it has the effect that when grep is used in the computation of TAPENDATE:

TAPENDATE=`grep AMANDA: $TEMP/header | sed 's/^AMANDA: TAPESTART //'`

grep will say

Binary file /tmp/header matches

so that when, a few lines further on, we do

set X $TAPENDATE
shift
VOLUME=$4
DWRITTEN=$2

VOLUME will be "matches" and DWRITTEN will be "file". The remedy is to use the -a option to grep in the TAPENDATE line:

---
TAPENDATE=`grep -a AMANDA: $TEMP/header | sed 's/^AMANDA: TAPESTART //'`
[ X"$TAPENDATE" = X"" ] \
        && report "** No amanda tape in slot" \
        && continue
set X $TAPENDATE
shift
VOLUME=$4
DWRITTEN=$2
---

I decided to post this curiosity, since I did not find any similar message. Please CC me - I am not on the list anymore.
-- 
Regards Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net
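The behaviour is easy to reproduce without a tape, using a synthetic header file (the header contents below are made up for illustration; newer GNU grep classifies any file containing NUL bytes as binary):

```sh
#!/bin/sh
# A fake amanda tape header: one text line followed by binary bytes,
# which makes newer GNU grep treat the whole file as binary.
f=`mktemp`
printf 'AMANDA: TAPESTART DATE 20020104 TAPE Daily1\n\000\001\002' > $f

grep 'AMANDA:' $f      # newer grep: "Binary file ... matches"
grep -a 'AMANDA:' $f   # with -a: the actual header line

# the parse amverify does, fed the -a output:
TAPENDATE=`grep -a 'AMANDA:' $f | sed 's/^AMANDA: TAPESTART //'`
set X $TAPENDATE
shift
echo "VOLUME=$4 DWRITTEN=$2"   # -> VOLUME=Daily1 DWRITTEN=20020104
rm -f $f
```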
Re: amverify and grep
Hi Chris,

Upgrade: it was fixed a year ago. You can upgrade just the amverify script if you want.

Jean-Louis

On Fri, Jan 04, 2002 at 02:38:14AM +0100, Chris Karakas wrote:
Hello, after upgrading to SuSE 7.3, I found that my amverify script (amanda-2.4.1p1) did not work properly - it told me that VOLUME is "matches" and Date is "file". [...full message quoted above...]
-- 
Jean-Louis Martineau email: [EMAIL PROTECTED]
Departement IRO, Universite de Montreal
C.P. 6128, Succ.
CENTRE-VILLE  Tel: (514) 343-6111 ext. 3529  Montreal, Canada, H3C 3J7  Fax: (514) 343-5834