Move disk without losing backup history?

2008-08-28 Thread Toralf Lund
I've just moved a disk from one server to another without really changing anything with respect to how the clients see it; logically, the disk still represents exactly the same volume on the network. Is there any way I can change the amanda config so that the disk is backed up via the new

Re: Move disk without losing backup history?

2008-08-28 Thread Toralf Lund
Gene Heskett wrote: On Thursday 28 August 2008, Toralf Lund wrote: I've just moved a disk from one server to another without really changing anything with respect to how the clients see it; logically, the disk still represents exactly the same volume on the network. [... ] Unforch

Re: Move disk without losing backup history?

2008-08-28 Thread Toralf Lund
Toralf Lund wrote: Gene Heskett wrote: On Thursday 28 August 2008, Toralf Lund wrote: I've just moved a disk from one server to another without really changing anything with respect to how the clients see it; logically, the disk still represents exactly the same volume on the network

Re: Move disk without losing backup history?

2008-08-28 Thread Toralf Lund
Dustin J. Mitchell wrote: On Thu, Aug 28, 2008 at 7:48 AM, Gene Heskett [EMAIL PROTECTED] wrote: The tar folks at gnu do not consider that a bug, but as a part of the security. To be fair, the tar developers did fix this -- in 1.21, IIRC. As to the original question, no, I don't

Re: disk was stranded on waitq/sendsize timed out waiting for REP data

2007-03-12 Thread Toralf Lund
Toralf Lund wrote: Forgot to answer this earlier, I think... If you can't kill sendsize, it's because it is hung in a system call. That often happens when it tries to access a mount point. Do you have a hung mount point? But yes, you are absolutely right. The host in question had problems with an NFS

Re: disk was stranded on waitq/sendsize timed out waiting for REP data

2007-03-07 Thread Toralf Lund
Forgot to answer this earlier, I think... If you can't kill sendsize, it's because it is hung in a system call. That often happens when it tries to access a mount point. Do you have a hung mount point? But yes, you are absolutely right. The host in question had problems with an NFS mount. I didn't

sendbackup: index tee cannot write [Broken pipe]

2007-03-07 Thread Toralf Lund
A few days ago I had some backup problems that turned out to be caused by a hanging NFS mount, causing sendsize to lock up completely - see a separate post on this. Now I have sorted out this problem, and it seemed like amdump would once again start properly, but it turns out that the backup

Re: sendbackup: index tee cannot write [Broken pipe]

2007-03-07 Thread Toralf Lund
A few days ago I had some backup problems that turned out to be caused by a hanging NFS mount, causing sendsize to lock up completely - see a separate post on this. Now I have sorted out this problem, and it seemed like amdump would once again start properly, but it turns out that the backup

disk was stranded on waitq/sendsize timed out waiting for REP data

2007-03-01 Thread Toralf Lund
We just started to get a serious problem with our amdump execution (Amanda 2.5.0p2). As usual, we don't think we have changed anything at all since the last successful dump. Symptoms: 1. amstatus says fileserv:/scanner0 planner: [hmm, disk was stranded on

Unknown partition table signature on ghosted disk

2006-08-31 Thread Toralf Lund
I'm not sure this is the right place to ask questions like this, but I'll give it a try: On a certain host at work, the RH system installer (usually booted from a customised DVD...) will fail to detect the existing Linux partitions. The error message is (I think) Unknown partition table

Re: Unknown partition table signature on ghosted disk

2006-08-31 Thread Toralf Lund
I'm not sure this is the right place to ask questions like this, but I'll give it a try: Oh. No. It most certainly isn't, since I didn't even post to the intended list. I wanted Anaconda, not Amanda (that's automatic address lookup for you.) Sorry. - Toralf

Re: determining backup set

2006-06-26 Thread Toralf Lund
Maybe I've missed something, but I can't see why a special script would be necessary in this case. [ ... ] This script is not special for the situation in question. On the contrary, it is meant to be run after every backup to produce tape labels with information about which files from

Re: determining backup set

2006-06-26 Thread Toralf Lund
As long as the log files are available, the amandatape script that I posted a while ago to this list will give you the info that you are looking for. You can find it here: Maybe I've missed something, but I can't see why a special script would be necessary in this case. Why not use
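For comparison, the stock amadmin "find" subcommand gives a similar mapping from dumps to tapes without any extra script; a minimal sketch, with the config name "ks" and the host/disk taken from other posts in this archive as stand-ins:

    # List every recorded dump of a DLE, one line per dump
    # (date, host, disk, level, tape label, file number on tape, status)
    amadmin ks find fileserv /scanner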

strategy incronly and skip-full again: According to manual page, both are buggy. Have bugs been fixed?

2006-06-22 Thread Toralf Lund
Regarding my recent post about strategy incronly and skip-full: I just found out more about the problems I've had in the past with this, simply by checking the amanda.conf manual page. It says: skip-full /boolean/ Default: no. If true and planner has scheduled a full backup,

Re: determining backup set

2006-06-22 Thread Toralf Lund
Josef Wolf wrote: On Fri, Jun 16, 2006 at 10:47:52AM -0400, Marlin Whitaker wrote: On Jun 15, 2006, at 3:03 PM, Jon LaBadie wrote: On Thu, Jun 15, 2006 at 12:33:16PM -0400, Marlin Whitaker wrote: If I have a collection of tapes from previous amanda backups, is there a

Recommended value for bumppercent?

2006-06-15 Thread Toralf Lund
Another question related to one of my recent threads: What do you think is a good value for bumppercent? Why? - Toralf
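For readers looking for the knobs rather than a recommended value, the bump parameters live in amanda.conf; the numbers below are placeholders for illustration, not a recommendation from this thread:

    bumppercent 20   # bump to the next incremental level when that would save
                     # at least this percentage of the DLE's size
                     # (when set to 0, the absolute bumpsize threshold is used)
    bumpdays    1    # stay at a level at least this many days before bumping
    bumpmult    4    # scale the threshold up for each additional level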

Re: Is tape spanning documented anywhere?

2006-06-14 Thread Toralf Lund
I also have one other scenario in mind, though - which is one I've actually come across a number of times: What if a certain DLE due for backup is estimated to be slightly smaller than runtapes*tape size, and thus dumped to holding disk, but then turns out to be slightly larger?

Re: Is tape spanning documented anywhere?

2006-06-13 Thread Toralf Lund
Anyhow, I'd really like to know more about how the spanning actually works. Is it documented anywhere? http://www.amanda.org/docs and FAQ still say that the option does not exist... Try http://wiki.zmanda.org/index.php/Splitting_dumps_across_tapes Yes. Thanks. That's quite

Re: Is tape spanning documented anywhere?

2006-06-13 Thread Toralf Lund
2. What happens to the holding disk file after a dump is partially written to tape? Will Amanda keep the entire file, or just what will be written next time around? And what if the holding disk data is split into chunks? Amanda keeps the entire dump, and it will be flushed
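For readers arriving at this thread for the configuration itself, a minimal sketch of the 2.5-era split options (dumptype name and sizes are placeholders; the wiki page cited earlier in the thread has the authoritative description):

    define dumptype big-split-tar {
        program "GNUTAR"
        # Write the dump in chunks of this size so a single DLE can continue
        # onto the next tape instead of failing when the current tape fills up
        tape_splitsize 5 Gb
        # Chunk size to fall back to when taping directly from the client
        # and no split_diskbuffer directory is available
        fallback_splitsize 128 Mb
    }
    # runtapes must also be > 1 in amanda.conf for a run to span tapes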

Version 2.5.0: Have incronly/skip-full issues been addressed

2006-06-13 Thread Toralf Lund
Another question related to amanda 2.5: Does anyone know if the issues with skip-full and/or strategy incronly been addressed? In the past, neither strategy incronly nor skip-full have worked quite as expected. I'm afraid I can't remember the full details, but one problem I've had, was that
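For context, the two options being asked about are dumptype parameters; a minimal sketch of each, showing the intended meaning only (dumptype names are placeholders, and nothing here implies the historical bugs are gone):

    define dumptype incr-only-tar {
        program "GNUTAR"
        strategy incronly   # never schedule a full dump automatically;
                            # fulls are forced by hand, e.g. with amadmin force
    }

    define dumptype skipfull-tar {
        program "GNUTAR"
        skip-full yes       # when the planner schedules a full, skip it in this
                            # config (the full is assumed to be taken elsewhere)
    }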

Re: Is tape spanning documented anywhere?

2006-06-13 Thread Toralf Lund
Yes indeed. The whole DLE. A single DLE still needs to be written in one run, possibly using many tapes. Oh no... Like I said, that's a big disappointment. I'm tempted to say that it is not correct to claim that Amanda now supports tape spanning, if it can't span dumps across tapes written

Re: Is tape spanning documented anywhere?

2006-06-13 Thread Toralf Lund
Paul Bijnens wrote: On 2006-06-13 12:10, Toralf Lund wrote: Yes indeed. The whole DLE. A single DLE still needs to be written in one run, possibly using many tapes. Oh no... Like I said, that's a big disappointment. I'm tempted to say that it is not correct to claim that Amanda now supports

Re: Is tape spanning documented anywhere?

2006-06-13 Thread Toralf Lund
To throw my $.02 in here, the situations would be very different. If one is forced to have all DLEs tapeable in one amdump run, then (theoretically), nothing will be left on the holding disk to lose should said disk die. But we're talking about a situation where the DLEs are not tapeable.

Re: Is tape spanning documented anywhere?

2006-06-13 Thread Toralf Lund
Jon LaBadie wrote: On Tue, Jun 13, 2006 at 02:46:31PM +0200, Toralf Lund wrote: Normally I would agree, but I have to back up 3Tb of data organised as one single volume. The only simple option would be to have one 3Tb tape as well, but such a thing isn't available (to me at least

Re: Is tape spanning documented anywhere?

2006-06-13 Thread Toralf Lund
Toralf Lund wrote: Jon LaBadie wrote: On Tue, Jun 13, 2006 at 02:46:31PM +0200, Toralf Lund wrote: Normally I would agree, but I have to back up 3Tb of data organised as one single volume. The only simple option would be to have one 3Tb tape as well, but such a thing isn't available (to me

Re: Version 2.5.0: Have incronly/skip-full issues been addressed

2006-06-13 Thread Toralf Lund
Jon LaBadie wrote: On Tue, Jun 13, 2006 at 11:46:20AM +0200, Paul Bijnens wrote: I'm still trying to figure out a use for skip-full. It seems to be a weird option: when the planner has decided to make a full dump, it is then skipped in the real run. That would mean that you have to be carefully

When was bumppercent introduced?

2006-06-12 Thread Toralf Lund
... -- Toralf Lund

Is tape spanning documented anywhere?

2006-06-12 Thread Toralf Lund
I haven't been following the posts to this list too closely, or bothered to upgrade amanda, for some time (since our existing setup *works*...), so I didn't find out until right now that tape spanning is supported in the current release. Anyhow, I'd really like to know more about how the

filename ... has invalid characters

2006-06-12 Thread Toralf Lund
just look for /etc/amanda/config/amanda.conf...) -- Toralf Lund

Version 2.5.0p2: amstatus parse error for logfile from older version

2006-06-12 Thread Toralf Lund
I'm trying to run amstatus on existing logfiles after upgrading from version 2.4.4p3 to 2.5.0p2. Unfortunately, the command will most of the time fail with a message like: amstatus ks --file /dumps/amanda/ks/log/amdump.1 Using /dumps/amanda/ks/log/amdump.1 from Thu Jun 8 17:04:30 CEST 2006

Slow dump of small directory...

2006-02-23 Thread Toralf Lund
Any idea why I get the following? fileserv:/usr/freeware/etc/openldap 1 32k dumping 0k ( 1.53%) (11:32:10) This is from amstatus on a currently running dump, and the time is now # date Thu Feb 23 12:32:35 CET 2006 I mean, why does this dump take so long? This is (as you can

Re: Slow dump of small directory...

2006-02-23 Thread Toralf Lund
Any idea why I get the following? fileserv:/usr/freeware/etc/openldap 1 32k dumping 0k ( 1.53%) (11:32:10) This is from amstatus on a currently running dump, and the time is now # date Thu Feb 23 12:32:35 CET 2006 I mean, why does this dump take so long? I get these

Dump only files that match a specific pattern?

2006-01-24 Thread Toralf Lund
I have a disk containing all sorts of temporary data etc. that I haven't included in the amanda config so far. Now I've found, however, that there are *some* files on this disk that I want to back up after all. I can quite easily set up a file matching pattern that would include all those
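One way such a pattern can be expressed is with a GNUTAR dumptype's include directive; a sketch with a hypothetical dumptype name, path and pattern (include patterns are relative to the DLE directory and start with "./"):

    define dumptype keep-some-tar {
        program "GNUTAR"
        # Only files matching the pattern are backed up; "include list" can
        # point at a file containing one pattern per line instead
        include file "./*.keep"
    }

    # disklist entry using it (hypothetical path):
    # fileserv  /data/tmp  keep-some-tar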

Running multiple amdumps in parallel?

2005-09-12 Thread Toralf Lund
Something I've always wondered about: Is it safe to run multiple instances of amdump simultaneously? I mean, with different configs, but possibly the same hosts and disks? - Toralf

Re: AIT2 tape size?

2005-08-22 Thread Toralf Lund
Toralf Lund wrote: I've been meaning to ask about this for a long time: Does anyone here use AIT2 tapes, a.k.a. SDX-50C, for Amanda backup? What tape length are you using? [ ... ] I've now finally run amtapetype - after making absolutely sure H/W compression was off - and it said: -sh-2.05b

Re: AIT2 tape size?

2005-08-19 Thread Toralf Lund
Paul Bijnens wrote: Toralf Lund wrote: Paul Bijnens wrote: amtapetype will tell you too if hardware compression is on. OK. Does amanda have any built-in support for switching it off? I mean, can any of the changer scripts or whatever do this? Or even amdump itself

Re: AIT2 tape size?

2005-08-19 Thread Toralf Lund
[ ... ] (*) I used to say on all linux versions, but it seems there are different implementations in different versions. Some systems can control the tapesettings with the file /etc/stinit.def (see man stinit if that exists). Yes. I think maybe you can do something like that on this
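For the record, two common Linux-side ways of forcing a drive's hardware compression off (both assume the mt-st package; the device name and drive model string are examples, not taken from this thread):

    # One-off, from the shell:
    mt -f /dev/nst0 compression 0       # turn compression off now
    mt -f /dev/nst0 defcompression 0    # make "off" the drive's default

    # Persistent, via /etc/stinit.def (applied by stinit at boot; see man stinit):
    #   manufacturer=SONY model="SDX-500C" {
    #       defcompression=0
    #   }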

AIT2 tape size?

2005-08-18 Thread Toralf Lund
I've been meaning to ask about this for a long time: Does anyone here use AIT2 tapes, a.k.a. SDX-50C, for Amanda backup? What tape length are you using? Please note that I'm not asking for the tapetype entry from the amanda.org archives, as it does not seem quite correct. I mean, the tape
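A sketch of how a locally measured entry would look; the numbers are placeholders to be replaced with amtapetype output (run it with hardware compression off, or the measurement is skewed; in pre-2.5 releases the tool is called just tapetype):

    # Measure the real native capacity (this writes over the whole tape):
    amtapetype -f /dev/nst0 -t AIT2

    # Then paste the generated entry into amanda.conf, for example:
    define tapetype AIT2 {
        comment "Sony AIT-2 (SDX-50C), measured locally"
        length 48000 mbytes   # placeholder -- use the measured value
        filemark 0 kbytes     # placeholder
        speed 5000 kps        # placeholder
    }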

Re: AIT2 tape size?

2005-08-18 Thread Toralf Lund
Alexander Jolk wrote: Toralf Lund wrote: the tape size is specified as 50Gb, and that's more or less what the length parameter in the entry says, but it seems to me that it isn't actually possible to write that much data to these tapes. The maximum seems to be closer to 40Gb, actually

Re: AIT2 tape size?

2005-08-18 Thread Toralf Lund
Paul Bijnens wrote: Toralf Lund wrote: Ah, yes, of course. No, hardware compression is not supposed to be on. But I'm not sure it isn't... In fact, now that you mention it, I suspect it's on after all. I'll have a closer look. And I very much doubt that the drive will auto-detect

Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-11-05 Thread Toralf Lund
Toralf Lund wrote: Paul Bijnens wrote: Toralf Lund wrote: Other possible error sources that I think I have eliminated: 1. tar version issues - since gzip complains even if I just uncompress and send the data to /dev/null, or use the -t option. 2. Network transfer issues. I get errors even

amadmin force matches too many disks

2004-10-21 Thread Toralf Lund
I just noticed the following: $ amadmin ks/incr force fileserv /scanner amadmin: fileserv:/scanner/plankart is set to a forced level 0 at next run. amadmin: fileserv:/scanner/golg is set to a forced level 0 at next run. amadmin: fileserv:/scanner is set to a forced level 0 at next run. Why did
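The usual fix is to anchor the disk expression; a sketch, assuming the standard amanda(8) host/disk expression rules where ^ and $ anchor the match (the $ needs shell quoting):

    # Force only the DLE named exactly /scanner, not /scanner/plankart etc.
    amadmin ks/incr force fileserv '^/scanner$'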

Re: amadmin force matches too many disks

2004-10-21 Thread Toralf Lund
Paul Bijnens wrote: Toralf Lund wrote: I just noticed the following: $ amadmin ks/incr force fileserv /scanner amadmin: fileserv:/scanner/plankart is set to a forced level 0 at next run. amadmin: fileserv:/scanner/golg is set to a forced level 0 at next run. amadmin: fileserv:/scanner is set

Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-20 Thread Toralf Lund
if=00010.raid2._scanner4.7 bs=32k skip=1 | gzip -t 124701+0 records in 124701+0 records out gzip: stdin: invalid compressed data--crc error gzip: stdin: invalid compressed data--length error Greets Michael Toralf Lund wrote: Since I'm still having problems gunzip'ing my large dumps - see separate

Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-20 Thread Toralf Lund
Gene Heskett wrote: On Tuesday 19 October 2004 11:10, Paul Bijnens wrote: Michael Schaller wrote: I found out that this was a problem of my tar. I backed up with GNUTAR and compress server fast. AMRESTORE restored the file but TAR (on the server!) gave some horrible messages like yours. I

Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-20 Thread Toralf Lund
Paul Bijnens wrote: Toralf Lund wrote: Other possible error sources that I think I have eliminated: 1. tar version issues - since gzip complains even if I just uncompress and send the data to /dev/null, or use the -t option. 2. Network transfer issues. I get errors even with server
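The holding-disk test referred to above, written out in full (a sketch; the image file name is the one quoted elsewhere in this thread, and skipping a single 32 kB block assumes Amanda's default header size):

    # Skip the 32 kB Amanda header, then have gzip verify the compressed stream
    dd if=00010.raid2._scanner4.7 bs=32k skip=1 | gzip -t

    # Or decompress to /dev/null to see how far it gets before the CRC error
    dd if=00010.raid2._scanner4.7 bs=32k skip=1 | gzip -dc > /dev/null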

Re: [paul.bijnens@xplanation.com: Re: Multi-Gb dumps using tar + software compression (gzip)?]

2004-10-20 Thread Toralf Lund
PROTECTED] To: Toralf Lund [EMAIL PROTECTED] Cc: Amanda Mailing List [EMAIL PROTECTED] Subject: Re: Multi-Gb dumps using tar + software compression (gzip)? Date: Wed, 20 Oct 2004 13:59:31 +0200 Message-ID: [EMAIL PROTECTED] User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.1) Gecko

Multi-Gb dumps using tar + software compression (gzip)?

2004-10-19 Thread Toralf Lund
Since I'm still having problems gunzip'ing my large dumps - see separate thread, I was just wondering: Some of you people out there are doing the same kind of thing, right? I mean, have 1. Dumps of directories containing several Gbs of data (up to roughly 20Gb compressed in my case.)
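For reference, a minimal dumptype matching the setup described in points 2 and 3 (the name is arbitrary):

    define dumptype comp-gnutar {
        program "GNUTAR"
        compress client fast   # or: compress server fast
        index yes
    }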

Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-19 Thread Toralf Lund
Alexander Jolk wrote: Toralf Lund wrote: 1. Dumps of directories containing several Gbs of data (up to roughly 20Gb compressed in my case.) 2. Use dumptype GNUTAR. 3. Compress data using compress client fast or compress server fast. If you do, what exactly are your amanda.conf

Re: how to automate tape changing

2004-10-19 Thread Toralf Lund
Paul Bijnens wrote: Jukka Salmi wrote: Paul Bijnens -- amanda-users (2004-10-18 22:14:10 +0200): Before the chg-disk tape changer was written, I used the chg-multi changer with the file-driver. It's a little more complicated to configure, but the advantage is that it finds and loads automatically

Re: Multi-Gb dumps using tar + software compression (gzip)?

2004-10-19 Thread Toralf Lund
Alexander Jolk wrote: Joshua Baker-LePain wrote: I think that OS and utility (i.e. gnutar and gzip) version info would be useful here as well. True, forgot that. I'm on Linux 2.4.19 (Debian woody), using GNU tar 1.13.25 and gzip 1.3.2. I have never had problems recovering files from huge

Re: how to automate tape changing

2004-10-18 Thread Toralf Lund
Jukka Salmi wrote: Hi, I'm using the chg-disk tape changer. When restoring files using amrecover, after adding some files and issuing the extract command, amrecover tells me what tapes are needed, and asks me to Load tape label now. I load the needed tape using amtape, and tell amrecover to

Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-15 Thread Toralf Lund
Toralf Lund wrote: Alexander Jolk wrote: Toralf Lund wrote: [...] I get the same kind of problem with harddisk dumps as well as tapes, and as it now turns out, also for holding disk files. And the disks and tape drive involved aren't even on the same chain. Actually, I'm starting to suspect

Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-14 Thread Toralf Lund
Gene Heskett wrote: On Wednesday 13 October 2004 11:07, Toralf Lund wrote: Jean-Francois Malouin wrote: [ snip ] Actually, I'm starting to suspect that gzip itself is causing the problem. Any known issues, there? The client in question does have a fairly old version, 1.2.4, I think

Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-14 Thread Toralf Lund
The fun part here is that I have two different tars and two different gzips - the ones supplied with the OS and the SGI freeware variants installed on /usr/freeware (downloaded from http://freeware.sgi.com/) Do not use the OS supplied tar! You'll hit a bug. Yes. I do seem to remember

Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-14 Thread Toralf Lund
Alexander Jolk wrote: Toralf Lund wrote: [...] I get the same kind of problem with harddisk dumps as well as tapes, and as it now turns out, also for holding disk files. And the disks and tape drive involved aren't even on the same chain. Actually, I'm starting to suspect that gzip itself

tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-13 Thread Toralf Lund
I'm having serious problems with full restore of a GNUTAR dump. Simply put, if I do amrestore, then tar xvf dump file, tar will exit with tar: Skipping to next header tar: Archive contains obsolescent base-64 headers tar: Error exit delayed from previous errors after extracting most, but not
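For readers reproducing this, the two-step restore can also be run as a single pipeline; a sketch with device, host and disk as examples, assuming default amrestore behaviour (images are decompressed unless -c or -r is given):

    # Pull the image straight off the tape and unpack it in one go
    amrestore -p /dev/nst0 fileserv /scanner | tar xpvf -

    # If the image is deliberately kept compressed (amrestore -c),
    # add an explicit gzip stage:
    amrestore -c -p /dev/nst0 fileserv /scanner | gzip -dc | tar xpvf -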

Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-13 Thread Toralf Lund
Alexander Jolk wrote: Toralf Lund wrote: tar: Skipping to next header tar: Archive contains obsolescent base-64 headers 37800+0 records in 37800+0 records out gzip: stdin: invalid compressed data--crc error tar: Child returned status 1 tar: Error exit delayed from previous errors I've had

Re: tar/gzip problems on restore (CRC error, Archive contains obsolescent base-64 headers...)

2004-10-13 Thread Toralf Lund
Jean-Francois Malouin wrote: [ snip ] Actually, I'm starting to suspect that gzip itself is causing the problem. Any known issues, there? The client in question does have a fairly old version, 1.2.4, I think (that's the latest one supplied by SGI, unless they have upgraded it very recently.)

Moving index data etc. from one config to another

2003-11-11 Thread Toralf Lund
Following our recent reorg of amanda configs, I've considered moving some data from index, curinfo and possibly the log of one config to the datadirs of another. The object would be to make amrecover think that certain tapes were written using this other config, although they were really

includefile directive?

2003-11-11 Thread Toralf Lund
Is anyone here using the includefile directive in their config? How exactly does it work? Does it apply to all config files, or just amanda.conf? What can the file contain - full config info, or just whatever is not set in the file including it? If I have two configs, can I have one
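As far as the syntax goes, includefile is an amanda.conf keyword that reads another file into the configuration at that point; a sketch of the shared-settings arrangement being asked about, with all paths and names hypothetical:

    # /etc/amanda/common.conf -- settings shared by both configs
    # (tapetypes, dumptypes, holdingdisk, interfaces, ...)

    # /etc/amanda/ks/amanda.conf
    org "ks"
    includefile "/etc/amanda/common.conf"
    # ...config-specific settings follow...

    # /etc/amanda/ks-archive/amanda.conf
    org "ks-archive"
    includefile "/etc/amanda/common.conf"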

Re: More than one tape per run

2003-10-27 Thread Toralf Lund
Ah, I see. So it won't actually multiply tape size by runtapes when trying to figure out how much it can write... I'm not sure the functionality is of much use to me then, but perhaps I could cheat and pretend the tapes are runtapes times larger than they really are? Won't buy you

Re: More than one tape per run

2003-10-27 Thread Toralf Lund
Paul Bijnens wrote: Toralf Lund wrote: Ah, I see. So it won't actually multiply tape size by runtapes when trying to figure out how much it can write... I'm not sure the functionaltiy is of much use to me then, but perhaps I could cheat and pretend the tapes are runtapes times larger than

Re: More than one tape per run

2003-10-24 Thread Toralf Lund
On Thu, Oct 23, 2003 at 05:37:02PM +0200, Toralf Lund wrote: runspercycle does not need to be changed. The runtapes means that for each run up to that number of tapes may be used (note: not must). You have to increase your tapecycle probably to cover the same dumpcycle(s), because

More than one tape per run

2003-10-23 Thread Toralf Lund
I'm thinking about using more than one tape, i.e. set the runtapes parameter to a value > 1, for my updated archival setup. Is there anything special I need to keep in mind when doing this? Also, how do I set up runspercycle in this case? Is it the total number of tapes runspercycle * runtapes, or

Re: More than one tape per run

2003-10-23 Thread Toralf Lund
Paul Bijnens wrote: Toralf Lund wrote: I'm thinking about using more than one tape, i.e. set the runtapes parameter to a value > 1, for my updated archival setup. Is there anything special I need to keep in mind when doing this? Also, how do I set up runspercycle in this case? Is it the total

Re: More than one tape per run

2003-10-23 Thread Toralf Lund
runspercycle does not need to be changed. The runtapes setting means that for each run up to that number of tapes may be used (note: not a must). You probably have to increase your tapecycle to cover the same dumpcycle(s), because you'll burn twice as many tapes for each run. (well, burn,
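The advice just quoted, put into amanda.conf form (a sketch; the numbers are illustrative only):

    dumpcycle    1 week
    runspercycle 5          # unchanged: still one amdump run per weekday
    runtapes     2          # each run may now use up to two tapes
    tapecycle    20 tapes   # roughly doubled, since each run can use two tapes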

amrecover: Why does it use disklist and log files

2003-10-22 Thread Toralf Lund
As far as I can tell, amrecover won't work unless 1. The log file from the backup you are trying to recover is still present 2. The DLE is still in the disklist Why? Shouldn't amrecover work from the index alone? - Toralf

tapecycle: Are no-reuse tapes counted?

2003-10-22 Thread Toralf Lund
It looks like Amanda will count *all* tapes written after the tape in question, even the ones marked as no-reuse, when comparing count with tapecycle to determine if a tape may be overwritten. Is this observation correct? Should no-reuse tapes be included like that? -- Toralf

How to handle archival runs (again)?

2003-10-22 Thread Toralf Lund
Reviewing the amanda config again in conjunction with a tape format update... I forget (and list search doesn't return anything conclusive): How do you people recommend handling archival runs? There are two obvious ways: 1. Via a special config. 2. Using the normal config, but special

Re: Snapshot vs. -p1 (was Re: Lots of I/O errors)

2003-10-06 Thread Toralf Lund
Eric Siegerman wrote: On Mon, Jul 14, 2003 at 11:07:10AM +0200, Toralf Lund wrote: I've been getting a lot of *** A TAPE ERROR OCCURRED: [[writing file: I/O error]]. On Mon, Jul 14, 2003 at 01:44:26PM +0200, Toralf Lund wrote: Note that I've now gone back to amanda-2.4.4

Re: Files left on holding disk after successful dumps

2003-08-15 Thread Toralf Lund
On Thursday 14 August 2003 03:21, Toralf Lund wrote: On Wednesday 13 August 2003 02:55, Toralf Lund wrote: I suddenly realised that I have a lot of dump directories on my holding disk, even though dumps have generally been successful. The below amflush output should illustrate this. -sh

Re: Files left on holding disk after successful dumps

2003-08-14 Thread Toralf Lund
On Wednesday 13 August 2003 02:55, Toralf Lund wrote: I suddenly realised that I have a lot of dump directories on my holding disk, even though dumps have generally been successful. The below amflush output should illustrate this. -sh-2.05b$ /usr/sbin/amflush ks Scanning /dumps/amanda/hd

Re: NFS mounts

2003-08-14 Thread Toralf Lund
On Tue, Aug 12, 2003 at 04:08:47PM -0400, LaValley, Brian E wrote: Can I list an nfs mounted disk in the disklist file? I only ask because I am having trouble compiling for Solaris 8. The disklist's contents don't affect compiling one way or the other. What's the specific problem you're

Lots of I/O errors...

2003-07-14 Thread Toralf Lund
I've mentioned this earlier, but not a lot came out of it: I've been getting a lot of *** A TAPE ERROR OCCURRED: [[writing file: I/O error]]. lately. This does not, however, happen all the time, and not for specific tapes, either. Also, I can't find any error messages related to the tape

Re: Lots of I/O errors...

2003-07-14 Thread Toralf Lund
related to the holding disk handling or taping of images has changed since 2.4.4. - Toralf -- Martin Hepworth Senior Systems Administrator Solid State Logic Ltd +44 (0)1865 842300 Toralf Lund wrote: I've mentioned this earlier, but not a lot came out of it: I've been getting a lot

More flexible autoflush?

2003-06-19 Thread Toralf Lund
The addition of the autoflush option in 2.4.3 was really very helpful, but I'm still not satisfied. What I really want is to autoflush when one or two smallish DLE dumps are left on the holding disk, but not if the entire taper operation failed due to a tape error or something. Comments? -- -

Unscheduling DLE (with archival involved.)

2003-06-19 Thread Toralf Lund
What is the right and proper way to unschedule the dump of a DLE? I thought the answer would be amadmin delete, then remove DLE from disklist, but it seems to me that this will prevent me from amrecover'ing the DLE from existing backups, which is something I want to be able to do. Notice that

I/O error when writing to tape

2003-06-19 Thread Toralf Lund
Just got *** A TAPE ERROR OCCURRED: [[writing file: I/O error]]. during a backup run. - Must be something wrong with the tape or tape drive, I thought, but it turns out that 1. I get this error for various different tapes when trying to amflush the dump to them. 2. I can write other dumps to

Re: I/O error when writing to tape

2003-06-19 Thread Toralf Lund
, perhaps... -- Martin Hepworth Senior Systems Administrator Solid State Logic Ltd +44 (0)1865 842300 Toralf Lund wrote: Just got *** A TAPE ERROR OCCURRED: [[writing file: I/O error]]. during a backup run. - Must be something wrong with the tape or tape drive, I thought, but it turns out

Re: amrecover Problem

2003-04-04 Thread Toralf Lund
Hi there, I've got a problem with amrecover on a Debian GNU/Linux machine. amrecover reports: No index records for disk for specified date If date correct, notify system administrator In the debug files in /tmp I can't find any other information. The debug file is there but the information is the same

Sharing index between configs (full and incrementals)?

2003-04-03 Thread Toralf Lund
where all slots point to file:some directory. The disklist is shared between the configs. The question is simply, can they share the index as well? Will everything work all right if I simply specify the same indexdir for both configs? -- Toralf Lund [EMAIL PROTECTED] +47 66 85 51 22 ProCaptura

skip-full/skip-incr vs strategy

2003-04-03 Thread Toralf Lund
on the backup? -- Toralf Lund [EMAIL PROTECTED] +47 66 85 51 22 ProCaptura AS +47 66 85 51 00 (switchboard) http://www.procaptura.com/~toralf +47 66 85 51 01 (fax)

Re: backup with exclude

2003-04-03 Thread Toralf Lund
-Original Message- From: Valeria Cavallini [mailto:[EMAIL PROTECTED] Sent: Thursday 3 April 2003 10:59 To: [EMAIL PROTECTED] Subject: backup with exclude Hi, I've read some threads on the hacker newsgroup and I've found that someone talks about the exclude option to exclude more than

Re: Sharing index between configs (full and incrementals)?

2003-04-03 Thread Toralf Lund
On 2003.04.03 16:54, Jon LaBadie wrote: On Thu, Apr 03, 2003 at 09:51:25AM +0200, Toralf Lund wrote: As I've indicated earlier, I want to write full backups to tape, but keep some or all of the incremental on the harddisk, so I've set up two different configs; one with skip-incr that will write

Re: amrecover: No index records for disk for specified date

2003-02-17 Thread Toralf Lund
[ ... ] What's the output of 'amadmin ks find mercedes-benz /usr/people/jfo'? Trying this helped me figure out what was wrong ;-) The command would list the expected dates and tape names when executed as root, but as amanda, I got No dump to list, which made it quite obvious that the

Re: amrecover: No index records for disk for specified date

2003-02-12 Thread Toralf Lund
On Tue, Feb 11, 2003 at 05:31:04PM +0100, Toralf Lund wrote: I'm getting error message No index records for disk for specified date when trying to recover a certain DLE using amrecover (version 2.4.3.) The full output from the session + some of the debug messages are included below

amrecover: No index records for disk for specified date

2003-02-11 Thread Toralf Lund
time Tue Feb 11 17:25:35 2003 -- Toralf Lund [EMAIL PROTECTED] +47 66 85 51 22 ProCaptura AS +47 66 85 51 00 (switchboard) http://www.procaptura.com/~toralf +47 66 85 51 01 (fax)

Full backup to tape, incrementals to disk?

2003-02-05 Thread Toralf Lund
I seem to remember that something like this has been discussed before, but I couldn't find anything in the archives ;-/ Anyhow, I'm thinking about setting up a config with full backups to tape and incrementals to harddisk - due to limited tape capacity (yes, I know incrementals are usually
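A sketch of the two-config arrangement that this and the later "Sharing index between configs" thread converge on: one config skips incrementals and writes fulls to real tape, the other skips fulls and "tapes" to directories via the file: driver. All names, paths and devices are placeholders:

    # amanda.conf for the "fulls" config: real tape, incrementals skipped
    tapedev "/dev/nst0"
    define dumptype tape-full {
        program "GNUTAR"
        skip-incr yes
    }

    # amanda.conf for the "incrementals" config: file: driver, fulls skipped
    tapedev "file:/backup/vtapes/slot1"
    define dumptype disk-incr {
        program "GNUTAR"
        skip-full yes
    }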

host:disk does not support optional exclude (2.4.3 server 2.4.2p2 client)

2003-01-22 Thread Toralf Lund
and 2.4.2p2 clients -- Toralf Lund [EMAIL PROTECTED] +47 66 85 51 22 ProCaptura AS +47 66 85 51 00 (switchboard) http://www.procaptura.com/~toralf +47 66 85 51 01 (fax)

Inexplicable amandahostsauth failure

2003-01-16 Thread Toralf Lund
, anyone? -- Toralf Lund [EMAIL PROTECTED] +47 66 85 51 22 ProCaptura AS +47 66 85 51 00 (switchboard) http://www.procaptura.com/~toralf +47 66 85 51 01 (fax)

Re: Inexplicable amandahostsauth failure

2003-01-16 Thread Toralf Lund
I've had auth problems with one of the hosts I'm backing up for some time; amcheck says # amcheck -c ks Amanda Backup Client Hosts Check ERROR: bmw: [access as amanda not allowed from root@server] amandahostsauth failed I found out what was going on after all;

Re: Initial disk of amrecover set up incorrectly

2003-01-06 Thread Toralf Lund
amrecover will in my setup make a completely wrong guess about what disk to consider at startup in most cases. The example output included below should illustrate the problem; /usr/freeware/apache is not on the /u filesystem, and it has a separate disklist entry. Any ideas what is going on? ...

Initial disk of amrecover set up incorrectly

2002-12-20 Thread Toralf Lund
- /usr/freeware/apache amrecover -- Toralf Lund [EMAIL PROTECTED]

Re: tar/ not missing any new directories

2002-11-08 Thread Toralf Lund
At 10:56 AM 10/14/2002 +0200, Toralf Lund wrote: With tar, and some sort of a guarantee that no individual file will exceed the tape capacity, this can be done by breaking the disklist entries up into subdirs, Yes, that's what I'm doing. The problem with this is that something easily gets left

Host name aliases in disklist

2002-10-17 Thread Toralf Lund
. Is this a correct assumption? Why does it happen? Is there a way around it (obviously, I can change the hostname in disklist, but apart from that)? -- Toralf Lund [EMAIL PROTECTED] +47 66 85 51 22 Kongsberg Scanners AS +47 66 85 51 00 (switchboard) http://www.kscanners.no/~toralf

Re: Host name aliases in disklist

2002-10-17 Thread Toralf Lund
Hi, your assumption about the hostnames is correct. What you are doing is one of the big NO!NO!'s of amanda. Just like I suspected ;-/ Never list a single host in multiple concurrent amanda runs, or, in your case, with different names in a single amanda configuration. amandad can only handle

Wildcards in disklist? (Was: What tapecycle value to use?)

2002-10-15 Thread Toralf Lund
On Monday 14 October 2002 04:56, Toralf Lund wrote: [...] Yes, that's what I'm doing. The problem with this is that something easily gets left out as new directories are created. How about 1. Allowing wildcards in the disklist file 2. Having some kind of auto expansion mode, where

Splitting up disklist entries (Was: Re: What tapecycle value to use?)

2002-10-15 Thread Toralf Lund
On Mon, Oct 14, 2002 at 10:56:07AM +0200, Toralf Lund wrote: With tar, ... , this can be done by breaking the disklist entries up into subdirs, Yes, that's what I'm doing. The problem with this is that something easily gets left out as new directories are created

Re: What tapecycle value to use?

2002-10-14 Thread Toralf Lund
On Fri, Oct 11, 2002 at 09:15:34AM +0200, Toralf Lund wrote: On Thu, Oct 10, 2002 at 01:38:18PM +0200, Toralf Lund wrote: Forgot to mention this earlier: I'm not using incrementals at all. Tapes from the same week will contain full backups of different directories, and a given
