Mac OSX 10.1.5 and dump ...
Hi guys,

Trying to get Amanda working on Mac OS X 10.1.5 using dump as the backup
program. However, when I try to run dump I get the following:

sendsize: debug 1 pid 354 ruid 0 euid 0 start time Mon Jun 17 21:30:01 2002
/usr/local/amanda/libexec/sendsize: version 2.4.2p2
calculating for amname '/', dirname '/'
sendsize: getting size via dump for / level 0
sendsize: running /sbin/dump 0sf 1048576 - /
running /usr/local/amanda/libexec/killpgrp
  DUMP: Date of this level 0 dump: Mon Jun 17 21:30:01 2002
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping / to standard output
  DUMP: bad sblock magic number
  DUMP: The ENTIRE dump is aborted.
.
(no size line match in above dump output)
.
asking killpgrp to terminate
sendsize: pid 354 finish time Mon Jun 17 21:30:02 2002

Any ideas? I'll try with gtar and see what happens, but I'd prefer to use
dump.

-- martin
Re: Mac OSX 10.1.5 and dump ...
Hi all!

Martin wrote:
> Trying to get Amanda working on Mac OS X 10.1.5 using dump as the
> backup program. However, when I try to run dump I get the following:
> ...
>   DUMP: bad sblock magic number
>   DUMP: The ENTIRE dump is aborted.
> Any ideas? I'll try with gtar and see what happens, but I'd prefer to
> use dump.

Well, I figure you are using HFS+ for your Mac OS X hard disk? (It's the
default.) If so, then most probably dump on OS X supports UFS filesystems
only. The message "bad sblock magic number" leads me to that conclusion.
To be sure, you'd better either check the source from Darwin or ask on a
Darwin developers' mailing list.

HTH,
Patrick M. Hausen
Technical Director
--
punkt.de GmbH            Internet - Dienstleistungen - Beratung
Scheffelstr. 17 a        Tel. 0721 9109 -0  Fax: -100
76135 Karlsruhe          http://punkt.de
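A quick way to act on Patrick's diagnosis is to check the filesystem type
reported by mount before choosing between dump and gtar. A minimal sketch
(the helper function and the mount line below are made-up examples, and the
dumptype advice assumes you can switch the disk to a GNUTAR-based dumptype):

```shell
# classify_fs takes one line of `mount` output and reports whether a
# dump-based Amanda estimate can be expected to work on that filesystem.
classify_fs() {
  case "$1" in
    *hfs*) echo "HFS+: dump will fail, use a GNUTAR dumptype" ;;
    *ufs*) echo "UFS: dump should work" ;;
    *)     echo "unknown filesystem type" ;;
  esac
}

# Example with an OS X-style mount line (device name is hypothetical):
classify_fs '/dev/disk0s9 on / (hfs, local, journaled)'
# prints: HFS+: dump will fail, use a GNUTAR dumptype
```

On the real machine you would feed it the actual root line, e.g.
`classify_fs "$(mount | grep ' on / ')"`.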
New message output in log.
My amanda email this morning had different output for the success messages
than normal. At this point I don't think I need to be concerned, but I
searched the archives and found no references to "SUCCESS - 0 listing".
My new output contains the share listing preceded by "SUCCESS - 0
listing". I do not understand why it has never done this before, or
whether it is a problem. Also, notice that some files are listed while
other times it lists *:

(Small snip - all Samba shares are listed this way)

/-- schroeder //ntusa1/e$ lev 0 STRANGE
sendbackup: start [schroeder://ntusa1/e$ level 0]
sendbackup: info BACKUP=/usr/bin/smbclient
sendbackup: info RECOVER_CMD=/usr/bin/smbclient -f... -
sendbackup: info end
? SUCCESS - 0 opening remote file \AppsN\INV\Inventaires Syteline.xls (\AppsN\INV\)
? SUCCESS - 0 opening remote file \AppsN\INV\Inventory MAIN Grade 1.xls (\AppsN\INV\)
? SUCCESS - 0 opening remote file \AppsN\INV\Inventory MAIN Grade 2.xls (\AppsN\INV\)
? SUCCESS - 0 opening remote file \AppsN\INV\Inventory MAIN Grade 3.xls (\AppsN\INV\)
? SUCCESS - 0 opening remote file \AppsN\INV\Inventory MAIN Grade 9.xls (\AppsN\INV\)
? SUCCESS - 0 opening remote file \AppsN\INV\Inventory TN Grade 1.xls (\AppsN\INV\)
? SUCCESS - 0 opening remote file \AppsN\INV\Inventory TN Grade 2.xls (\AppsN\INV\)
? SUCCESS - 0 opening remote file \AppsN\INV\Inventory TN Grade 3.xls (\AppsN\INV\)
? SUCCESS - 0 opening remote file \AppsN\INV\Inventory TN Grade 9.xls (\AppsN\INV\)
? SUCCESS - 0 opening remote file \AppsN\INV\Inventory TRMN Grade 1.xls (\AppsN\INV\)
? SUCCESS - 0 opening remote file \AppsN\INV\Inventory TRTN Grade 1.xls (\AppsN\INV\)
? SUCCESS - 0 opening remote file \AppsN\INV\Shortcut to Inventaires Syteline.lnk (\AppsN\INV\)
? SUCCESS - 0 listing \AppsN\INV\vibac letters kit\*
? SUCCESS - 0 listing \AppsN\Lab\*
? SUCCESS - 0 listing \AppsN\Maint\*
? SUCCESS - 0 listing \AppsN\Martin\*
? SUCCESS - 0 listing \AppsN\PMagic\*
? SUCCESS - 0 opening remote file \AppsN\Production Supervisor Shift Report to Maintenance.doc (\AppsN\)
? SUCCESS - 0 opening remote file \AppsN\Red Hat 7.2 Bible.pdf (\AppsN\)
? SUCCESS - 0 listing \AppsN\RND\*
? SUCCESS - 0 opening remote file \AppsN\Shortcut to Lab.lnk (\AppsN\)
? SUCCESS - 0 listing \AppsN\Spare Pts Inv Recv\*
? SUCCESS - 0 listing \AppsN\Systems\*
? SUCCESS - 0 listing \Engineering\*
? SUCCESS - 0 listing \HR\*
? SUCCESS - 0 listing \McAfee\*
? SUCCESS - 0 listing \ME\*
? SUCCESS - 0 listing \Progress\*
? SUCCESS - 0 listing \Share\*
? SUCCESS - 0 listing \Update\*
? SUCCESS - 0 listing \Wedge\*
? SUCCESS - 0 listing \Weight\*
? SUCCESS - 0 listing \Y2KFixes\*
| tar: dumped 7218 files and directories
| Total bytes written: 775137280
sendbackup: size 756970
sendbackup: end

NOTES:
  taper: tape DailySet106 kb 14968800 fm 25 [OK]

DUMP SUMMARY:
                                       DUMPER STATS            TAPER STATS
HOSTNAME     DISK         L  ORIG-KB   OUT-KB  COMP%  MMM:SS    KB/s  MMM:SS    KB/s
------------------------------------------------------------------------------------
development  /            0   108032   108032    --     0:31  3450.7    0:14  7989.0
development  /home        0    18432    18432    --     0:10  1863.3    0:03  5770.9
development  /usr         0  1406176  1406176    --     5:11  4518.6    9:38  2431.4
development  /var         0    78848    78848    --     0:27  2958.3    0:14  5733.7
schroeder    /            0   150112   150112    --     0:55  2714.9    0:14 10415.4
schroeder    //ntusa1/c$  0   369856   369856    --     2:21  2625.8    5:32  1114.6
schroeder    //ntusa1/d$  0  1503712  1503712    --     4:48  5226.7    5:55  4241.7
schroeder    //ntusa1/e$  0   759552   759552    --     3:48  3332.9    1:11 10647.1
schroeder    //ntusa2/c$  0   402688   402688    --     1:48  3714.3    1:39  4081.7
schroeder    //ntusa2/d$  0    68480    68480    --     0:18  3851.1    0:07  9793.4
schroeder    //ntusa2/e$  0   472000   472000    --     1:17  6128.9    0:50  9371.8
schroeder    /home        0    27520    27520    --     0:13  2190.1    0:07  3712.9
schroeder    /tmp         0      672      672    --     0:02   429.2    0:04   192.7
schroeder    /usr         0  1789472  1789472    --    14:59  1991.2    5:07  5820.5
schroeder    /var         0  1620288  1620288    --    17:06  1579.9    4:53  5521.9
snoopy       /            0   176224   176224    --     0:26  6877.8    0:27  6639.6
snoopy       /home        0    98496    98496    --     0:20  4813.3    0:52  1878.7
snoopy       /tmp         0      384      384    --     0:01   377.1    0:04   113.5
snoopy       /usr         0  1831360  1831360    --     4:50  6323.8    5:21  5710.9
snoopy       /var         0   115712   115712    --     0:18  6603.1    0:33  3521.9
woodstock    /            0   174464   174464    --     0:26  6817.2    0:25  6882.0
woodstock    /home        0  1859008  1859008    --     4:56  6287.3    6:17  4926.8
woodstock    /tmp         0      384      384    --     0:01   399.2    0:02   174.0
woodstock    /usr         0  1850976  1850976    --     4:03  7609.6    9:03  3406.2
woodstock    /var         0    85152    85152    --     0:13  6337.8    0:23  3631.4

(brought to you by Amanda version 2.4.2p2)

Any ideas, comments, or suggestions that I need
amrecover 2.4.3b3 and file driver
Hi Folks,

does anybody know how to fix this? I am running Amanda using the new file
driver option, thus I am dumping all the backups onto disk, which works
really well. The only problem I've got right now is restoring those
backups. Using "amrecover myconfig" I get the following error:

  AMRECOVER Version 2.4.3b3. Contacting server on amanda ...
  amrecover: Unexpected end of file, check amindexd*debug on server amanda

Kind Regards,
Bjoern v. Benckendorff
Re: Amanda Backup AMANDA MAIL REPORT FOR June 17, 2002
On Mon, 17 Jun 2002 at 7:32pm, Steve Bertrand wrote:

> I am attempting to amdump, and the logs show:
>
>   FAILURE AND STRANGE DUMP SUMMARY:
>     baini /bkp lev 0 FAILED [disk /bkp offline on baini?]
>
> baini is the DNS name for the PC's internal IP address, 192.168.x.x. I
> have verified that the ports for Amanda are open, and the /bkp dir has
> even been chown'ed to amanda:backup. I have also tried things such as
> ad0s1a and ad0 in place of /bkp.

Do you see anything in sendsize*debug on baini about /bkp?

--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
[Fwd: Unable to deliver your message]
---BeginMessage---

We are unable to deliver the message from [EMAIL PROTECTED] to
[EMAIL PROTECTED]. The amanda-users group is for archival use only and
does not accept direct postings. If you want to post to the actual email
list, please send your message to [EMAIL PROTECTED] For further
assistance, please email [EMAIL PROTECTED] or visit
http://help.yahoo.com/help/us/groups/

---BeginMessage---

Hello,

I am having a problem with Amanda. I want to back up 7 Linux (Red Hat 7.2
and 7.3) machines with Amanda, and I have removed the Amanda RPMs and
installed directly from the tar file. I have used exactly the same
configuration on each of the clients, but 4 of the 7 are failing with
reports like:

=
FAILURE AND STRANGE DUMP SUMMARY:
  amandaclient /usr/dante lev 0 FAILED [missing result for /usr/dante in amandaclient response]
  amandaclient /etc lev 0 FAILED [missing result for /etc in amandaclient response]
  amandaclient /home lev 0 FAILED [missing result for /home in amandaclient response]
=

The /tmp/amanda/sendsize.dump file has only 5 lines in it:

=
amandad: pid 6825 finish time Thu Jun 13 16:07:57 2002
sendsize: debug 1 pid 6826 ruid 33 euid 33 start time Thu Jun 13 16:07:57 2002
/usr/local/libexec/sendsize: version 2.4.3b3
sendsize: calculating for amname '/home', dirname '/home'
sendsize: getting size via gnutar for /home level 0
=

i.e. it is not spawning or running the runtar part of this.
I have listed the amandad.dump file below for reference:

=
amandad: debug 1 pid 6825 ruid 33 euid 33 start time Thu Jun 13 16:07:57 2002
amandad: version 2.4.3b3
amandad: build: VERSION=Amanda-2.4.3b3
amandad:        BUILT_DATE=Thu Jun 13 15:05:02 CDT 2002
amandad:        BUILT_MACH=Linux amandaclient.psychiatry.uiowa.edu 2.4.9-31smp #1 SMP Tue Feb 26 05:55:42 EST 2002 i686 unknown
amandad:        CC=gcc
amandad:        CONFIGURE_COMMAND='./configure' '--with-user=amanda' '--with-group=disk' '--with-gnutar=/usr/local/etc/amanda/amanda_tar' '--with-index-server=amandaserver.psychiatry.uiowa.edu' '--with-config=IPLBackup' '--with-tape-server=amandaserver.psychiatry.uiowa.edu' '--with-tape-device=/dev/tape' '--with-gnutar-listdir'
amandad: paths: bindir=/usr/local/bin sbindir=/usr/local/sbin
amandad:        libexecdir=/usr/local/libexec mandir=/usr/local/man
amandad:        AMANDA_TMPDIR=/tmp/amanda AMANDA_DBGDIR=/tmp/amanda
amandad:        CONFIG_DIR=/usr/local/etc/amanda DEV_PREFIX=/dev/
amandad:        RDEV_PREFIX=/dev/ DUMP=/sbin/dump
amandad:        RESTORE=/sbin/restore SAMBA_CLIENT=/usr/bin/smbclient
amandad:        GNUTAR=/usr/local/etc/amanda/amanda_tar
amandad:        COMPRESS_PATH=/usr/local/etc/amanda/amanda_gzip
amandad:        UNCOMPRESS_PATH=/usr/local/etc/amanda/amanda_gzip
amandad:        MAILER=/usr/bin/Mail
amandad:        listed_incr_dir=/usr/local/var/amanda/gnutar-lists
amandad: defs:  DEFAULT_SERVER=amandaserver.psychiatry.uiowa.edu
amandad:        DEFAULT_CONFIG=IPLBackup
amandad:        DEFAULT_TAPE_SERVER=amandaserver.psychiatry.uiowa.edu
amandad:        DEFAULT_TAPE_DEVICE=/dev/tape HAVE_MMAP HAVE_SYSVSHM
amandad:        LOCKING=POSIX_FCNTL SETPGRP_VOID DEBUG_CODE
amandad:        AMANDA_DEBUG_DAYS=4 BSD_SECURITY USE_AMANDAHOSTS
amandad:        CLIENT_LOGIN=amanda FORCE_USERID HAVE_GZIP
amandad:        COMPRESS_SUFFIX=.gz COMPRESS_FAST_OPT=--fast
amandad:        COMPRESS_BEST_OPT=--best UNCOMPRESS_OPT=-dc
got packet:
Amanda 2.4 REQ HANDLE 000-A0FA0708 SEQ 1024002477
SECURITY USER amanda
SERVICE sendsize
OPTIONS maxdumps=1;hostname=amandaclient;
GNUTAR /usr/dante 0 1970:1:1:0:0:0 -1 exclude-list=/usr/local/etc/amanda/IPLBackup/excludelist.gtar
GNUTAR /usr/dante 2 2002:5:30:13:24:5 -1 exclude-list=/usr/local/etc/amanda/IPLBackup/excludelist.gtar
GNUTAR /etc 0 1970:1:1:0:0:0 -1 exclude-list=/usr/local/etc/amanda/IPLBackup/excludelist.gtar
GNUTAR /home 0 1970:1:1:0:0:0 -1 exclude-list=/usr/local/etc/amanda/IPLBackup/excludelist.gtar
GNUTAR /home 1 2002:5:29:15:16:39 -1 exclude-list=/usr/local/etc/amanda/IPLBackup/excludelist.gtar
GNUTAR /home 2 2002:6:5:6:16:17 -1 exclude-list=/usr/local/etc/amanda/IPLBackup/excludelist.gtar
sending ack:
Amanda 2.4 ACK HANDLE 000-A0FA0708 SEQ 1024002477
bsd security: remote host amandaserver.psychiatry.uiowa.edu user amanda local user amanda
amandahosts security check passed
amandad: running service /usr/local/libexec/sendsize
amandad: sending REP packet:
Amanda 2.4 REP HANDLE 000-A0FA0708 SEQ 1024002477
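A "missing result" like this means sendsize never produced a size line for
the disk. One quick client-side check is whether the sendsize debug trace
ever reached its finish line. A sketch, using a fabricated sample file in
place of the real /tmp/amanda/sendsize*debug:

```shell
# Build a sample debug file resembling the truncated 5-line trace above
# (contents fabricated for illustration).
cat > sendsize.debug <<'EOF'
sendsize: debug 1 pid 6826 ruid 33 euid 33 start time Thu Jun 13 16:07:57 2002
/usr/local/libexec/sendsize: version 2.4.3b3
sendsize: calculating for amname '/home', dirname '/home'
sendsize: getting size via gnutar for /home level 0
EOF

# A healthy run ends with a "sendsize: pid ... finish time ..." line;
# its absence means the estimate hung or died before runtar was spawned.
if grep -q 'sendsize:.*finish time' sendsize.debug; then
  echo "sendsize completed"
else
  echo "sendsize never finished: estimate died before runtar"
fi
```

On a real client you would run the grep against the newest
sendsize*debug file instead of the sample.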
getting further
I am getting further with Amanda. I ran a backup last night but had a
problem:

  error [/bin/tar returned 2]

This problem has come up in the past (I searched the group) but there
seems to be no specific reason for it. I'm using tar version:

  tar (GNU tar) 1.13.11

I'm trying to back up:

  /boot  always-full
  /      comp-root-tar
  /big   comp-root-tar

With the result:

DUMP SUMMARY:
                                DUMPER STATS             TAPER STATS
HOSTNAME      DISK   L   ORIG-KB    OUT-KB  COMP%  MMM:SS   KB/s  MMM:SS   KB/s
216.13.244.1  /      0    FAILED  -------------------------------------------
216.13.244.1  /big   0  12161260  11274432   92.7  144:01 1304.7  144:02 1304.7
216.13.244.1  /boot  0     14560     14592    --     0:02 7555.8    0:11 1374.2

I'm not exactly sure why I can back up two partitions but not the third
(root). Is this a permissions issue? I exclude the following:

  ./mnt
  ./proc
  ./no_backup
  ./backup
  ./download
  ./usr/doc
  ./usr/man
  ./usr/src
  ./var/spool/mqueue

Thanks for any help or direction you can provide.

Mike
Re: getting further
Hi Mike -

I think there might be a problem with that version of gtar. Try getting
the most recent version (even though it's still labeled alpha, that's the
one you want). You will need to recompile the client to point Amanda to
the correct gtar.

HTH
-Doug

On Tue, 18 Jun 2002, Mike Heller wrote:

> I am getting further with Amanda. I ran a backup last night but had a
> problem:
>
>   error [/bin/tar returned 2]
>
> This problem has come up in the past (searched the group) but there
> seems to be no specific reason for this. I'm using tar version:
>
>   tar (GNU tar) 1.13.11
>
> I'm trying to back up:
>
>   /boot  always-full
>   /      comp-root-tar
>   /big   comp-root-tar
>
> With the result:
>
> DUMP SUMMARY:
> 216.13.244.1  /      0    FAILED
> 216.13.244.1  /big   0  12161260  11274432  92.7  144:01 1304.7  144:02 1304.7
> 216.13.244.1  /boot  0     14560     14592    --    0:02 7555.8    0:11 1374.2
>
> I'm not exactly sure why I can back up two partitions but not the third
> (root). Is this a permissions issue? I exclude the following:
>
>   ./mnt ./proc ./no_backup ./backup ./download ./usr/doc ./usr/man
>   ./usr/src ./var/spool/mqueue
>
> Thanks for any help or direction you can provide.
>
> Mike

--
~~
Doug Silver            619 235-2665
Network Manager
Urchin Software Corp.  http://www.urchin.com
~~
Re: Backing up PostgreSQL?
[ On Sunday, June 16, 2002 at 14:42:50 (+0200), Ragnar Kjørstad wrote: ]
> Subject: Re: Backing up PostgreSQL?
>
> The term consistency has no meaning by itself unless there are well
> defined rules about the allowed data content. If the rule is that all
> the PostgreSQL data files together hold the up-to-date status of the
> database, then it is consistent at all times. If the rule is that each
> file should hold an up-to-date and complete representation of one
> particular database table, then it's only consistent after a clean
> shutdown.

The rule should be very clear in this case: the restoration of the pgsql
database from backup media must be completely and entirely self-consistent
and represent an exact snapshot of all the committed transactions at the
time the backup was made. It must be immediately usable by simply starting
the database as if starting it from any other clean shutdown. If any
recovery procedures are first necessary, then it is clearly not a
self-consistent backup copy.

> The only requirement on the filesystem is that it is journaling as
> well, so it's always kept in a consistent state like the PostgreSQL
> database is.

Now you're getting a little out of hand. A journaling filesystem is a
piling of one set of warts on top of another. Now you've got a situation
where even though the filesystem might be 100% consistent even after a
catastrophic crash, the database won't be. There's no need to use a
journaling filesystem with PostgreSQL (particularly if you use a proper
hardware RAID subsystem with either full mirroring or full level 5
protection).

> Let's keep things clear here. RAID is totally irrelevant to this
> question. RAID protects you from hardware failure, but it doesn't help
> to keep your filesystem or data consistent.
>
> RAID systems commonly have caches. If those caches are not flushed
> before a backup begins, then a failure _during_ backup will possibly
> lose the data in them.

However a properly designed independent RAID subsystem, in conjunction
with a decently designed filesystem, will provide all the protection
necessary, because metadata writes by the operating system will be very
quickly flushed to cache and the RAID subsystem will continue to write
them to disk even if the host crashes. A journaling filesystem is not
really necessary, nor of any real benefit, because (as you admit yourself
below) with pgsql there's little metadata to futz around with on a regular
basis (unless maybe you have zillions of small tables that come and go in
your schema, or something else that causes pgsql to create and remove many
files on a regular basis).

> I have no idea why you say journaling filesystems are a piling of one
> set of warts on top of another, but the fact is that one is required to
> always keep the filesystem consistent.

You can have a completely consistent filesystem (all metadata is
up-to-date and fsck or whatever will find nothing to fix) while at the
same time the database files contained within the filesystem are in an
internally inconsistent state w.r.t. the db schema and/or application and
its ongoing transactions. You're confusing your levels of consistency.

> In theory non-journaling filesystems make no guarantees that the
> filesystem will be usable at all after a crash/power failure. In
> practise it's not that bad, and in most cases you will be able to
> recover everything but the last updates with fsck.

Normal Unix filesystems provide more than enough metadata consistency for
pgsql, and since you don't likely have billions of files you don't need to
worry that fsck will be too slow during recovery.

> PostgreSQL needs a filesystem that guarantees the metadata to be
> up-to-date. E.g. when it appends to the WAL, the new file size must be
> stored to disk or the latest data will not be available after a
> crash/power failure.

Now you're confusing your levels of metadata too.

> BUT: it's possible that the fsync() of the file itself will cause the
> metadata to be updated - at least on some filesystems.

I should hope that's true on all filesystems that are useful for pgsql.

> Directory updates, on the other hand, are _not_ flushed to disk because
> of fsync, so e.g. when PostgreSQL creates new files (when a table/index
> becomes bigger than 2 GB) you risk losing that new file if the machine
> crashes. (Unless PostgreSQL does fsync() on the directory after
> creating the file - I assume it doesn't, just to be safe.)

It's trivial to force the OS to write directory metadata. Just close the
file and then reopen it. I don't know if pgsql does that yet, but if not
then they have something more to learn.

> In short: PostgreSQL requires a journaling filesystem to guarantee
> consistent data after a crash / power failure.

If pgsql deletes the old file before it has ensured the new ones are on
disk then it has a very serious bug. Journaling filesystems are not even
remotely necessary to guarantee safe operation of a properly designed
database engine.

Indeed
Re: getting further
On Tue, Jun 18, 2002 at 09:57:24AM -0700, Mike Heller wrote:

> I am getting further with Amanda. I ran a backup last night but had a
> problem:
>
>   error [/bin/tar returned 2]

My reading of the gtar code shows 2 is the return code when it has been
run with the "ignore failed reads" option and did in fact ignore at least
one failed read.

> This problem has come up in the past (searched the group) but there
> seems to be no specific reason for this. I'm using tar version:
>
>   tar (GNU tar) 1.13.11

Not a recommended tarsion. (That was supposed to be "version", but
"tarsion" seems so right. :)

--
Jon H. LaBadie                  [EMAIL PROTECTED]
JG Computing
4455 Province Line Road         (609) 252-0159
Princeton, NJ 08540-4322        (609) 683-7220 (fax)
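Jon's reading of the exit status can be summed up in a small helper. The
mapping below follows GNU tar's documented convention (0 = success, 1 =
some files differed, 2 = fatal error); the function name is made up for
illustration:

```shell
# tar_exit_meaning: decode the N in Amanda's "[/bin/tar returned N]".
tar_exit_meaning() {
  case "$1" in
    0) echo "success" ;;
    1) echo "some files differed or changed while being read" ;;
    2) echo "fatal error (or failed reads that were ignored)" ;;
    *) echo "unknown status $1 (possibly killed by a signal)" ;;
  esac
}

tar_exit_meaning 2
# prints: fatal error (or failed reads that were ignored)
```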
RE: getting further
Your version of tar is broken... get the latest from
ftp://alpha.gnu.org/gnu/tar/ . You want 1.13.19 or later.

-----Original Message-----
From: Mike Heller [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 18, 2002 11:57 AM
To: [EMAIL PROTECTED]
Subject: getting further

I am getting further with Amanda. I ran a backup last night but had a
problem:

  error [/bin/tar returned 2]

This problem has come up in the past (searched the group) but there seems
to be no specific reason for this. I'm using tar version:

  tar (GNU tar) 1.13.11

I'm trying to back up:

  /boot  always-full
  /      comp-root-tar
  /big   comp-root-tar

With the result:

DUMP SUMMARY:
216.13.244.1  /      0    FAILED
216.13.244.1  /big   0  12161260  11274432  92.7  144:01 1304.7  144:02 1304.7
216.13.244.1  /boot  0     14560     14592    --    0:02 7555.8    0:11 1374.2

I'm not exactly sure why I can back up two partitions but not the third
(root). Is this a permissions issue? I exclude the following:

  ./mnt ./proc ./no_backup ./backup ./download ./usr/doc ./usr/man
  ./usr/src ./var/spool/mqueue

Thanks for any help or direction you can provide.

Mike
manual restore
How do I list and restore a tape that was dumped with Amanda 2.4.3b3?

Thanks in advance,
Rick.

-
Rick Jones
Systems Administrator
Leda Systems Inc.
2201 Ave. K Suite A2
Plano TX 75074
[EMAIL PROTECTED]
972-543-4333
972-543-4350 fax
Re: manual restore
On Tue, 18 Jun 2002 at 1:31pm, Rick Jones wrote:

> How do I list and restore a tape that was dumped with Amanda 2.4.3b3?

more docs/RESTORE

--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
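For the archives: docs/RESTORE boils down to the fact that each file on an
Amanda tape is a 32 KiB header followed by the dump image, and the header
itself (readable with dd) names the exact restore command to use. The
mechanics can be sketched with an ordinary file standing in for the tape
device (file names and contents below are made up for illustration):

```shell
# Build a fake "tape file": 32 KiB of header padding, then the image.
{ head -c 32768 /dev/zero; printf 'dump image bytes'; } > tapefile

# Skipping one 32 KiB block with dd yields the raw image, which you
# would then pipe into restore or gtar, depending on what the real
# header says was used to create the dump.
dd if=tapefile bs=32k skip=1 2>/dev/null
# prints: dump image bytes
```

Against a real tape you would substitute the no-rewind device (e.g.
/dev/nst0) for `tapefile` and use mt to position at the right tape file
first.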
Auto-flush during amdump?
Greetings Amanda types.

I've been using amanda for a few years now, so I'm pretty sure this can't
be done. I thought I'd ask anyway; perhaps it will stimulate discussion of
this as a potential new feature.

I'm interested in having amanda flush any backups that are in the holding
area as part of the nightly amdump run.

Our server is co-located with our ISP. While they are pretty good at
changing our tapes, they do occasionally mess up, and we often don't catch
them before the backup runs. What happens is that our backups wind up in
the holding area, as you'd expect. The problem is that it's a multi-step
headache: call the ISP, explain the problem, get the correct tape loaded,
hang up; an amflush is then performed; then we have to call the ISP again
and get them to put the next night's tape in.

Now, when you run amflush at the command line, it offers to put all the
held backups onto one tape for you. What would really make life easier for
us is if amdump could essentially be instructed to do the same thing: take
all the backups from the holding area, and put them on the tape with the
results of the current amdump run.

Can anyone think of a way to do this with the current Amanda? Would this
be a popular addition if it appeared in a new version? Could this be put
on a feature-request list?

Colin
Re: Auto-flush during amdump?
That's an interesting idea. Maybe it would be better to make it part of
the planner: the planner sees dumps on the holding disk, adds that count
to the plan for the current day, and flushes them along with the current
disks. This would save a lot of time for me as well. Just add to
amanda.conf:

  autoflush true

-ben

On Tue, 18 Jun 2002, Colin Henein wrote:

> Greetings Amanda types.
>
> I've been using amanda for a few years now, so I'm pretty sure this
> can't be done. I thought I'd ask anyway, perhaps it will stimulate
> discussion of this as a potential new feature.
>
> I'm interested in having amanda flush any backups that are in the
> holding area as a part of the nightly amdump run.
>
> Our server is co-located with our ISP. While they are pretty good at
> changing our tapes, they do occasionally mess up, and we often don't
> catch them before the backup runs. What happens is that our backups
> wind up in the holding area, as you'd expect. The problem is that it's
> a multi-step headache to call the ISP, explain the problem, get the
> correct tape loaded, hang up; an amflush is then performed; then we
> have to call the ISP again, and get them to put the next night's tape
> in.
>
> Now, when you run amflush at the command line, it offers to put all the
> held backups onto one tape for you. What would really make life easier
> for us is if amdump essentially could be instructed to do the same
> thing... Take all the backups from the holding area, and put them on
> the tape with the results of the current amdump run.
>
> Can anyone think of a way to do this with the current Amanda? Would
> this be a popular addition if it appeared in a new version? Could this
> be put on a feature-request list?
>
> Colin
Re: Auto-flush during amdump?
On Tue, 18 Jun 2002 at 3:37pm, Colin Henein wrote:

> I've been using amanda for a few years now, so I'm pretty sure this
> can't be done. I thought I'd ask anyway, perhaps it will stimulate
> discussion of this as a potential new feature.
>
> I'm interested in having amanda flush any backups that are in the
> holding area as a part of the nightly amdump run.

[jlb@chaos tmp2]$ more amanda-2.4.3b3/example/amanda.conf.in
.
.
# reserve 30 # percent
# This means save at least 30% of the holding disk space for degraded
# mode backups.

autoflush no
#
# if autoflush is set to yes, then amdump will schedule all dump on
# holding disks to be flush to tape during the run.

:)

--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
Re: Auto-flush during amdump?
On Tue, Jun 18, 2002 at 03:37:52PM -0400, Colin Henein wrote:

> Greetings Amanda types.
>
> I've been using amanda for a few years now, so I'm pretty sure this
> can't be done. I thought I'd ask anyway, perhaps it will stimulate
> discussion of this as a potential new feature.
>
> I'm interested in having amanda flush any backups that are in the
> holding area as a part of the nightly amdump run.

That feature is already in 2.4.3b3.

Jean-Louis

> Our server is co-located with our ISP. While they are pretty good at
> changing our tapes, they do occasionally mess up, and we often don't
> catch them before the backup runs. What happens is that our backups
> wind up in the holding area, as you'd expect. The problem is that it's
> a multi-step headache to call the ISP, explain the problem, get the
> correct tape loaded, hang up; an amflush is then performed; then we
> have to call the ISP again, and get them to put the next night's tape
> in.
>
> Now, when you run amflush at the command line, it offers to put all the
> held backups onto one tape for you. What would really make life easier
> for us is if amdump essentially could be instructed to do the same
> thing... Take all the backups from the holding area, and put them on
> the tape with the results of the current amdump run.
>
> Can anyone think of a way to do this with the current Amanda? Would
> this be a popular addition if it appeared in a new version? Could this
> be put on a feature-request list?
>
> Colin

--
Jean-Louis Martineau            email: [EMAIL PROTECTED]
Departement IRO, Universite de Montreal
C.P. 6128, Succ. CENTRE-VILLE   Tel: (514) 343-6111 ext. 3529
Montreal, Canada, H3C 3J7       Fax: (514) 343-5834
Re: Auto-flush during amdump?
As others have pointed out, it is a new, existing feature. One thing
though: you will probably want to make sure there is sufficient tape
capacity, possibly allowing multiple tapes, for both the flush and the
normal dump.

--
Jon H. LaBadie                  [EMAIL PROTECTED]
JG Computing
4455 Province Line Road         (609) 252-0159
Princeton, NJ 08540-4322        (609) 683-7220 (fax)
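Putting the thread's advice together, the relevant amanda.conf fragment
would look something like the sketch below. The autoflush keyword is
quoted from the 2.4.3b3 example file earlier in the thread; the runtapes
value is an illustrative assumption to address Jon's capacity point, not a
tested configuration:

```
autoflush yes   # amdump also flushes any dumps waiting on the holding disk
runtapes 2      # leave room for both the flushed dumps and tonight's run
```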
Please help me!!
I have been working at this for 14 hours now and still can't get it. I
have even wiped Amanda, reinstalled, then wiped BSD and reinstalled it and
Amanda. I still get the (/bkp1 offline on baini?) error. Here is the last
sendsize message. This is driving me up the wall. All I want is one tape!!
LOL

It has been verified not to be a permissions issue. Amanda is in the
operator group, and I even put her in the wheel group for testing.

sendsize: debug 1 pid 39689 ruid 1005 euid 1005 start time Tue Jun 18 20:41:37 2002
/usr/local/libexec/sendsize: version 2.4.3b3
sendsize: calculating for amname '/bkp1', dirname '/bkp1'
sendsize: getting size via dump for /bkp1 level 0
sendsize: running /sbin/dump 0sf 1048576 - /bkp1
sendsize: running /usr/local/libexec/killpgrp
  DUMP: Date of this level 0 dump: Tue Jun 18 20:41:37 2002
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /bkp1 to standard output
  dump: /bkp1: unknown file system
.
sendsize: (no size line match in above dump output)
.
sendsize: asking killpgrp to terminate
sendsize: pid 39689 finish time Tue Jun 18 20:41:38 2002

Tks,
Steve
Variables to control tape changer
Hello,

I have just acquired a Quantum/ATL L500 library. -yay- I believe I have it
functioning correctly, but there sure appears to be some arcane
information regarding the changerfile configuration for chg-scsi. Here are
my thoughts/questions:

1) Why would I want to use scsitapedev instead of dev in my changer
   config file?

2) What is the purpose of the sleep variable? Is it the time amanda will
   wait from when the changer returns to when the tape can be used? I set
   mine to 140 seconds, as the documentation for my tape drive shows an
   average of 133 seconds to load a blank tape.

3) It is not clear if I can tell amanda to tell the tape changer a
   barcode value to load instead of searching the slots...

=
Now here are the specifics. My amanda server is a dual PIII Red Hat 7.0
Linux box, running amanda-2.4.3b3. This machine has the following tape
hardware attached:

  /dev/nst0: Sony DDS3 4mm drive only (currently used to do backups)
  /dev/nst1: Quantum/ATL DLT8000 (tape on library)
  /dev/sg1:  tape changer mechanism

First attempt:

  #dev=
  scsitapedev=/dev/nst1

  amcheck-server: slot n: tape_rdlabel: tape open: 0: no such file or directory
  my chg-scsi file showed: warning open of 0: failed

Second attempt:

  #dev=
  scsitapedev=/dev/sg0

  I can't recall, but it didn't work.

Third attempt:

  dev=/dev/nst1
  #scsitapedev=

  This is where I got the thing working correctly.

Perhaps /dev/sg0 would have worked if I had unloaded kernel module st and
reloaded sg. I didn't want to do that, though, since I am running backups
on /dev/nst0.

What do you all think? Should I also post this to amanda-developers?

--jason
--
~~~
Jason Brooks ~ (503) 641-3440 x1861
Direct ~ (503) 924-1861
System / Network Administrator
Wind River Systems
8905 SW Nimbus ~ Suite 255
Beaverton, Or 97008
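For reference, a hedged sketch of a complete chg-scsi changerfile. The
keyword names are taken from my recollection of the 2.4.x chg-scsi example
configs and may not match your build exactly; the values are guesses for
an L500-style library with one drive, not a tested configuration:

```
number_configs 1
eject 1             # offline/eject the drive before the changer unloads
sleep 140           # seconds to wait after a load before using the tape
changerdev /dev/sg1 # the robot (pass-through) device

config 0
drivenum 0
dev /dev/nst1       # data path to the DLT8000 in the library
startuse 0          # first slot this config may use
enduse 4            # last slot this config may use
```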
Re: Please help me!!
> I still get the (/bkp1 offline on baini?)
> ...
> sendsize: calculating for amname '/bkp1', dirname '/bkp1'
> sendsize: getting size via dump for /bkp1 level 0
> sendsize: running /sbin/dump 0sf 1048576 - /bkp1
> sendsize: running /usr/local/libexec/killpgrp
>   DUMP: Date of this level 0 dump: Tue Jun 18 20:41:37 2002
>   DUMP: Date of last level 0 dump: the epoch
>   DUMP: Dumping /bkp1 to standard output
>   dump: /bkp1: unknown file system

The implication here is that /bkp1 is not a file system, but maybe just a
top level directory. Dump can only (in general) back up whole file
systems, not portions. What do you see if you do "df -k /bkp1"?

> Steve

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
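John's df suggestion can be turned into a quick yes/no test. A sketch (the
is_mountpoint helper is made up for illustration and relies only on POSIX
`df -P`; /bkp1 is the path from the thread):

```shell
# is_mountpoint: true when the given path is the root of a mounted
# filesystem, i.e. something /sbin/dump can actually back up.
is_mountpoint() {
  # df -P prints the mount point in field 6 of its data line; for a
  # plain directory it reports the filesystem the directory lives on,
  # which will not equal the path itself.
  [ "$(df -P "$1" 2>/dev/null | awk 'NR==2 {print $6}')" = "$1" ]
}

if is_mountpoint /bkp1; then
  echo "/bkp1 is a filesystem: dump can handle it"
else
  echo "/bkp1 is just a directory: switch this disk to a GNUTAR dumptype"
fi
```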
Re: Please help me!!
On Tue, 18 Jun 2002, Steve Bertrand wrote:

> I have been working at this for 14 hours now, and still can't get it. I
> have even wiped amanda, reinstalled, then wiped BSD and reinstalled it
> and amanda.

Process of elimination: have you tried switching your dumptype for this
disk to GNUtar to see if that fares any better?

--
Brandon D. Valentine            [EMAIL PROTECTED]
Computer Geek, Center for Structural Biology

"This isn't rocket science -- but it _is_ computer science."
        - Terry Lambert on [EMAIL PROTECTED]
Re: Please help me!!
I forgot one thing. Is /bkp1 listed in /etc/fstab (or the equivalent for
whatever OS you're using)? It has to be in there so Amanda can convert the
logical (mount point) name to a disk name to hand to dump.

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
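That fstab lookup (mount point in field 2, device in field 1) is easy to
script. A sketch using a fabricated FreeBSD-style sample file rather than
a real /etc/fstab; the device names are made up:

```shell
# Amanda maps the mount point (field 2) to the raw device (field 1) it
# hands to dump; if the path has no fstab entry, the estimate fails.
cat > fstab.sample <<'EOF'
/dev/ad0s1a  /      ufs  rw  1  1
/dev/ad0s1e  /bkp1  ufs  rw  2  2
EOF

awk '$2 == "/bkp1" {print $1}' fstab.sample
# prints: /dev/ad0s1e
```

On the real client you would run the awk against /etc/fstab itself; an
empty result means dump has nothing to work with.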
Re: amrecover 2.4.3b3 and file driver
> does anybody know how to fix this?
> ...
> AMRECOVER Version 2.4.3b3. Contacting server on amanda ...
> amrecover: Unexpected end of file, check amindexd*debug on server amanda

Ummm, it would help a lot if you'd look at that log file and see if it
tells you what the problem is, or post the contents.

> Bjoern v. Benckendorff

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
YES!!! Thanks so much!!!
Thanks to you, I finally got success. /bkp1 is in fact a top-level
directory, which of course is not listed in /etc/fstab. Here's the output
of the success!

These dumps were to tape northnet01.
The next tape Amanda expects to use is: a new tape.

STATISTICS:
                          Total      Full     Daily
Estimate Time (hrs:min)    0:00
Run Time (hrs:min)         0:02
Dump Time (hrs:min)        0:01      0:01      0:00
Output Size (meg)          11.3      11.3       0.0
Original Size (meg)       130.3     130.3       0.0
Avg Compressed Size (%)     8.7       8.7        --
Filesystems Dumped            1         1         0
Avg Dump Rate (k/s)       281.6     281.6        --
Tape Time (hrs:min)        0:01      0:01      0:00
Tape Size (meg)            11.3      11.3       0.0
Tape Used (%)               0.3       0.3       0.0
Filesystems Taped             1         1         0
Avg Tp Write Rate (k/s)   281.4     281.4        --

NOTES:
  planner: Adding new disk localhost:/tmp.
  taper: tape northnet01 kb 11616 fm 1 [OK]

DUMP SUMMARY:
                                DUMPER STATS          TAPER STATS
HOSTNAME   DISK  L  ORIG-KB  OUT-KB  COMP%  MMM:SS   KB/s  MMM:SS   KB/s
localhost  /tmp  0   133405   11616    8.7    0:41  281.6    0:41  281.4

(brought to you by Amanda version 2.4.3b3)

Tks again!!
Steve
Re: YES!!! Thanks so much!!!
> planner: Adding new disk localhost:/tmp.

FYI, you should not use "localhost" in your disklist file. Trust me. It
will eventually bite you. Use the fully qualified name of your host.

> Steve

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
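In disklist terms, John's advice looks like the sketch below. The
hostname and dumptype are made up for illustration; the point is only
that the first field should be the client's fully qualified name:

```
# disklist: use the FQDN, not localhost, so dumps and index entries are
# recorded against the same name no matter where amrecover later runs
client1.example.com  /tmp  comp-user-tar
```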