I've just moved a disk from one server to another without really
changing anything with respect to how the clients see it; logically, the
disk still represents exactly the same volume on the network.
Is there any way I can change the amanda config so that the disk is
backed up via the new
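(For context, the kind of disklist change I mean; as far as I can tell Amanda
keys its dump history on host+disk, so the moved volume becomes a brand-new
DLE. Host, path and dumptype names below are made up:)

  # disklist before the move
  oldserv   /vol/projects   comp-user-tar
  # disklist after the move; amanda will schedule a fresh level 0 for it
  newserv   /vol/projects   comp-user-tar
  # the old entry's history can be dropped once its tapes have expired:
  #   amadmin <config> delete oldserv /vol/projects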
Gene Heskett wrote:
On Thursday 28 August 2008, Toralf Lund wrote:
I've just moved a disk from one server to another without really
changing anything with respect to how the clients see it; logically, the
disk still represents exactly the same volume on the network. [... ]
Unforch
Toralf Lund wrote:
Gene Heskett wrote:
On Thursday 28 August 2008, Toralf Lund wrote:
I've just moved a disk from one server to another without really
changing anything with respect to how the clients see it; logically, the
disk still represents exactly the same volume on the network
Dustin J. Mitchell wrote:
On Thu, Aug 28, 2008 at 7:48 AM, Gene Heskett [EMAIL PROTECTED] wrote:
The tar folks at gnu do not consider that a bug, but rather a part of the
security.
To be fair, the tar developers did fix this -- in 1.21, IIRC.
As to the original question, no, I don't
Toralf Lund wrote:
Forgot to answer this earlier, I think...
If you can't kill sendsize, it's because it is hung in a system call.
That often happens when it tries to access a mount point.
Do you have a hung mount point?
But yes, you are absolutely right. The host in question had problems
with an NFS mount. I didn't
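(For the record, the quick check I'd now run before an amdump to catch a hung
NFS mount early. This is only a sketch for a Linux-style /proc/mounts and
assumes a "timeout" utility is available; the 5-second limit is arbitrary:)

  # flag NFS mount points that don't answer a simple stat within 5 seconds
  for m in $(awk '$3 == "nfs" {print $2}' /proc/mounts); do
      timeout 5 stat -t "$m" >/dev/null 2>&1 || echo "possibly hung: $m"
  done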
A few days ago I had some backup problems that turned out to be caused
by a hanging NFS mount, causing sendsize to lock up completely - see a
separate post on this. Now I have sorted out this problem, and it seemed
like amdump would once again start properly, but it turns out that the
backup
We just started to get a serious problem with our amdump execution
(Amanda 2.5.0p2). As usual, we don't think we have changed anything at
all after the last successful dump.
Symptoms:
1. amstatus says
fileserv:/scanner 0 planner: [hmm, disk was stranded on
I'm not sure this is the right place to ask questions like this, but
I'll give it a try:
On a certain host at work, the RH system installer (usually booted from
a customised DVD...) will fail to detect the existing Linux partitions.
The error message is (I think) Unknown partition table
I'm not sure this is the right place to ask questions like this, but
I'll give it a try:
Oh. No.
It most certainly isn't, since I didn't even post to the intended list.
I wanted Anaconda, not Amanda (that's automatic address lookup for you.)
Sorry.
- Toralf
Maybe I've missed something, but I can't see why a special script would
be necessary in this case.
[ ... ]
This script is not special for the situation in question. On the contrary,
it is meant to be run after every backup to produce tape labels with
information about which files from
As long as the log files are available, the amandatape script that I
posted a while ago to this list will give you the info that you are
looking for. You can find it here:
Maybe I've missed something, but I can't see why a special script would
be necessary in this case. Why not use
Regarding my recent post about strategy incronly and skip-full:
I just found out more about the problems I've had in the past with this
simply by checking the amanda.conf manual page, which says:
skip-full /boolean/
Default: no. If true and planner has scheduled a full backup,
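(For anyone else puzzling over this, my reading of the man page in dumptype
form; the dumptype names are invented:)

  define dumptype full-by-hand {
      comp-user-tar    # hypothetical base dumptype
      skip-full yes    # planner still schedules the full, but the run skips it,
                       # the idea being that you take the full outside amanda that day
  }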
Josef Wolf wrote:
On Fri, Jun 16, 2006 at 10:47:52AM -0400, Marlin Whitaker wrote:
On Jun 15, 2006, at 3:03 PM, Jon LaBadie wrote:
On Thu, Jun 15, 2006 at 12:33:16PM -0400, Marlin Whitaker wrote:
If I have a collection of tapes from previous amanda backups,
is there a
Another question related to one of my recent threads:
What do you think is a good value for bumppercent? Why?
- Toralf
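(To make the question concrete, these are the knobs I'm staring at; the values
are purely illustrative, not a recommendation:)

  # amanda.conf bump parameters
  bumppercent 20   # bump to the next level only if it saves at least 20% of the DLE
  bumpdays 1       # stay at a level at least this many days before bumping again
  bumpmult 4       # each further level has to save that much more to be worth it
  # (if bumppercent is 0, the absolute threshold bumpsize is used instead)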
I also have one other scenario in mind, though - which is one I've
actually come across a number of times: What if a certain DLE due for
backup is estimated to be slightly smaller than runtapes*tape size,
and thus dumped to holding disk, but then turns out to be slightly
larger?
Anyhow, I'd really like to know more about how the spanning actually
works. Is it documented anywhere? http://www.amanda.org/docs and FAQ
still say that the option does not exist...
Try http://wiki.zmanda.org/index.php/Splitting_dumps_across_tapes
Yes. Thanks. That's quite
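(Summarising what I think the wiki page boils down to, for the archive; the
sizes and the buffer path are just examples:)

  # dumptype additions for tape spanning in 2.5.x
  tape_splitsize 5 Gb              # write the dump to tape in 5 Gb chunks
  split_diskbuffer "/dumps/split"  # scratch space when dumping straight to tape
  fallback_splitsize 64 mb         # in-memory chunk size if that buffer can't be used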
2. What happens to the holding disk file after a dump is partially
written to tape? Will Amanda keep the entire file, or just what
will be written next time around? And what if the holding disk
data is split into chunks?
Amanda keeps the entire dump, and it will be flushed
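(The holding-disk side of this, as I understand it; path and sizes are
placeholders:)

  holdingdisk hd1 {
      directory "/dumps/amanda/hd"
      use 200 Gb        # how much of the disk amanda may use
      chunksize 1 Gb    # holding-disk images are written as 1 Gb chunks
  }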
Another question related to amanda 2.5:
Does anyone know if the issues with skip-full and/or strategy
incronly have been addressed? In the past, neither strategy incronly nor
skip-full have worked quite as expected. I'm afraid I can't remember
the full details, but one problem I've had, was that
Yes indeed. The whole DLE. A single DLE still needs to be written
in one run, possibly using many tapes.
Oh no... Like I said, that's a big disappointment. I'm tempted to say
that it is not correct to claim that Amanda now supports tape spanning,
if it can't span dumps across tapes written
Paul Bijnens wrote:
On 2006-06-13 12:10, Toralf Lund wrote:
Yes indeed. The whole DLE. A single DLE still needs to be written
in one run, possibly using many tapes.
Oh no... Like I said, that's a big disappointment. I'm tempted to
say that it is not correct to claim that Amanda now supports
To throw my $.02 in here, the situations would be very different.
If one is forced to have all DLEs tapeable in one amdump run,
then (theoretically), nothing will be left on the holding disk to
lose should said disk die.
But we're talking about a situation where the DLEs are not
tapeable.
Jon LaBadie wrote:
On Tue, Jun 13, 2006 at 02:46:31PM +0200, Toralf Lund wrote:
Normally I would agree, but I have to back up 3Tb of data organised as
one single volume. The only simple option would be to have one 3Tb
tape as well, but such a thing isn't available (to me at least
Toralf Lund wrote:
Jon LaBadie wrote:
On Tue, Jun 13, 2006 at 02:46:31PM +0200, Toralf Lund wrote:
Normally I would agree, but I have to back up 3Tb of data organised
as one single volume. The only simple option would be to have one
3Tb tape as well, but such a thing isn't available (to me
Jon LaBadie wrote:
On Tue, Jun 13, 2006 at 11:46:20AM +0200, Paul Bijnens wrote:
I'm still trying to find a use for skip-full. It seems to be a weird
option: when the planner has decided to make a full dump, then in
the real run it is skipped. That would mean that you have to be
carefully
...
--
Toralf Lund
I haven't been following the posts to this list too closely, or bothered
to upgrade amanda, for some time (since our existing setup *works*...),
so I didn't find out until right now that tape spanning is supported in
the current release.
Anyhow, I'd really like to know more about how the
just look for /etc/amanda/config/amanda.conf...)
--
Toralf Lund
I'm trying to run amstatus on existing logfiles after upgrading from
version 2.4.4p3 to 2.5.0p2. Unfortunately, the command will most of the
time fail with a message like:
amstatus ks --file /dumps/amanda/ks/log/amdump.1
Using /dumps/amanda/ks/log/amdump.1 from Thu Jun 8 17:04:30 CEST 2006
Any idea why I get the following?
fileserv:/usr/freeware/etc/openldap 1 32k dumping 0k ( 1.53%) (11:32:10)
This is from amstatus on a currently running dump, and the time is now
# date
Thu Feb 23 12:32:35 CET 2006
I mean, why does this dump take so long? This is (as you can
Any idea why I get the following?
fileserv:/usr/freeware/etc/openldap 1 32k dumping 0k ( 1.53%) (11:32:10)
This is from amstatus on a currently running dump, and the time is now
# date
Thu Feb 23 12:32:35 CET 2006
I mean, why does this dump take so long?
I get these
I have a disk containing all sorts of temporary data etc. that I haven't
included in the amanda config so far. Now I've found, however, that
there are *some* files on this disk that I want to back up after all. I
can quite easily set up a file matching pattern that would include all
those
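(Roughly what I have in mind, with made-up host, path and dumptype; the
include list file would hold one "./pattern" per line:)

  fileserv  /scratch  {
      comp-user-tar
      include list "/etc/amanda/ks/scratch.incl"
  }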
Something I've always wondered about:
Is it safe to run multiple instances of amdump simultaneously? I mean,
with different configs, but possibly the same hosts and disks?
- Toralf
Toralf Lund wrote:
I've been meaning to ask about this for a long time:
Does anyone here use AIT2 tapes, a.k.a. SDX-50C, for Amanda backup?
What tape length are you using? [ ... ]
I've now finally run amtape - after making absolutely sure H/W
compression was off - and it said:
-sh-2.05b
Paul Bijnens wrote:
Toralf Lund wrote:
Paul Bijnens wrote:
amtapetype will tell you too if hardware compression is on.
OK.
Does amanda have any built-in support for switching it off? I mean,
can any of the changer scripts or whatever do this? Or even amdump
itself
[ ... ]
(*) I used to say on all linux versions, but it seems there
are different implementations in different versions.
Some systems can control the tape settings with the file
/etc/stinit.def (see man stinit if that exists).
Yes. I think maybe you can do something like that on this
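(What I ended up looking at, for the archive. The device name and drive model
are placeholders, and the stinit.def syntax is only roughly from memory:)

  # Linux with the mt-st package: turn drive compression off on the fly
  mt -f /dev/nst0 compression 0
  # or persistently via /etc/stinit.def (see man stinit), something like:
  #   manufacturer=SONY model="<your AIT drive>" { compression 0 }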
I've been meaning to ask about this for a long time:
Does anyone here use AIT2 tapes, a.k.a. SDX-50C, for Amanda backup? What
tape length are you using?
Please note that I'm not asking for the tapetype entry from the
amanda.org archives, as it does not seem quite correct. I mean, the tape
Alexander Jolk wrote:
Toralf Lund wrote:
the tape size is specified as 50Gb, and that's more or less what the
length parameter in the entry says, but it seems to me that it isn't
actually possible to write that much data to these tapes. The maximum
seems to be closer to 40Gb, actually
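(For concreteness, the sort of entry I'd expect to end up with. The length
just reflects the ~40Gb observation above; filemark and speed are placeholders
to be replaced by a real amtapetype run:)

  define tapetype AIT2 {
      comment "AIT2 / SDX-50C, hardware compression off"
      length 40000 mbytes
      filemark 0 kbytes    # placeholder
      speed 5000 kbytes    # placeholder
  }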
Paul Bijnens wrote:
Toralf Lund wrote:
Ah, yes, of course. No, hardware compression is not supposed to be
on. But I'm not sure it isn't... In fact, now that you mention it, I
suspect it's on after all. I'll have a closer look. And I very much
doubt that the drive will auto-detect
Toralf Lund wrote:
Paul Bijnens wrote:
Toralf Lund wrote:
Other possible error sources that I think I have eliminated:
1. tar version issues - since gzip complains even if I just uncompress
and send the data to /dev/null, or use the -t option.
2. Network transfer issues. I get errors even
I just noticed the following:
$ amadmin ks/incr force fileserv /scanner
amadmin: fileserv:/scanner/plankart is set to a forced level 0 at next run.
amadmin: fileserv:/scanner/golg is set to a forced level 0 at next run.
amadmin: fileserv:/scanner is set to a forced level 0 at next run.
Why did
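(My guess: the disk argument is a host/disk expression, and an unanchored
/scanner matches every DLE that contains that component. If that's right,
anchoring it should hit only the one DLE:)

  amadmin ks/incr force fileserv '^/scanner$'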
Paul Bijnens wrote:
Toralf Lund wrote:
I just noticed the following:
$ amadmin ks/incr force fileserv /scanner
amadmin: fileserv:/scanner/plankart is set to a forced level 0 at
next run.
amadmin: fileserv:/scanner/golg is set to a forced level 0 at next run.
amadmin: fileserv:/scanner is set
if=00010.raid2._scanner4.7 bs=32k skip=1 | gzip -t
124701+0 records in
124701+0 records out
gzip: stdin: invalid compressed data--crc error
gzip: stdin: invalid compressed data--length error
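(For anyone else chasing this: the fuller check I'd run on the same image,
skipping the 32 kb amanda header and then testing both the gzip layer and the
tar layer:)

  dd if=00010.raid2._scanner4.7 bs=32k skip=1 | gzip -dc | tar tvf - >/dev/null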
Greets
Michael
Toralf Lund schrieb:
Since I'm still having problems gunzip'ing my large dumps - see
separate
Gene Heskett wrote:
On Tuesday 19 October 2004 11:10, Paul Bijnens wrote:
Michael Schaller wrote:
I found out that this was a problem with my tar.
I backed up with GNUTAR and compress server fast.
AMRESTORE restored the file but TAR (on the server!) gave some
horrible messages like yours.
I
Paul Bijnens wrote:
Toralf Lund wrote:
Other possible error sources that I think I have eliminated:
1. tar version issues - since gzip complains even if I just uncompress
and send the data to /dev/null, or use the -t option.
2. Network transfer issues. I get errors even with server
Subject: Re: Multi-Gb dumps using tar + software compression (gzip)?
Since I'm still having problems gunzip'ing my large dumps - see separate
thread, I was just wondering:
Some of you people out there are doing the same kind of thing, right? I
mean, you have
1. Dumps of directories containing several Gbs of data (up to roughly
20Gb compressed in my case.)
Alexander Jolk wrote:
Toralf Lund wrote:
1. Dumps of directories containing several Gbs of data (up to roughly
20Gb compressed in my case.)
2. Use dumptype GNUTAR.
3. Compress data using compress client fast or compress server fast.
If you do, what exactly are your amanda.conf
Paul Bijnens wrote:
Jukka Salmi wrote:
Paul Bijnens -- amanda-users (2004-10-18 22:14:10 +0200):
Before the chg-disk tape changer was written, I used the chg-multi
changer with the file-driver. It's a little more complicated
to configure, but the advantage is that it finds and loads automatically
Alexander Jolk wrote:
Joshua Baker-LePain wrote:
I think that OS and utility (i.e. gnutar and gzip) version info would be
useful here as well.
True, forgot that. I'm on Linux 2.4.19 (Debian woody), using GNU tar
1.13.25 and gzip 1.3.2. I have never had problems recovering files from
huge
Jukka Salmi wrote:
Hi,
I'm using the chg-disk tape changer. When restoring files using
amrecover, after adding some files and issuing the extract command,
amrecover tells me what tapes are needed, and asks me to Load tape
label now. I load the needed tape using amtape, and tell amrecover
to
Toralf Lund wrote:
Alexander Jolk wrote:
Toralf Lund wrote:
[...] I get the same kind of problem with harddisk dumps as well as
tapes, and as it now turns out, also for holding disk files. And the
disks and tape drive involved aren't even on the same chain.
Actually, I'm starting to suspect
Gene Heskett wrote:
On Wednesday 13 October 2004 11:07, Toralf Lund wrote:
Jean-Francois Malouin wrote:
[ snip ]
Actually, I'm starting to suspect that gzip itself is causing the
problem. Any known issues, there? The client in question does have
a fairly old version, 1.2.4, I think
The fun part here is that I have two different tars and two different
gzips - the ones supplied with the OS and SGI freeware variants
installed on /usr/freeware (downloaded from http://freeware.sgi.com/)
Do not use the OS supplied tar! You'll hit a bug.
Yes. I do seem to remember
Alexander Jolk wrote:
Toralf Lund wrote:
[...] I get the same kind of problem with harddisk dumps as well as
tapes, and as it now turns out, also for holding disk files. And the
disks and tape drive involved aren't even on the same chain.
Actually, I'm starting to suspect that gzip itself
I'm having serious problems with full restore of a GNUTAR dump. Simply
put, if I do amrestore, then tar xvf on the dump file, tar will exit with
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
tar: Error exit delayed from previous errors
after extracting most, but not
Alexander Jolk wrote:
Toralf Lund wrote:
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
37800+0 records in
37800+0 records out
gzip: stdin: invalid compressed data--crc error
tar: Child returned status 1
tar: Error exit delayed from previous errors
I've had
Jean-Francois Malouin wrote:
[ snip ]
Actually, I'm starting to suspect that gzip itself is causing the
problem. Any known issues, there? The client in question does have a
fairly old version, 1.2.4, I think (that's the latest one supplied by
SGI, unless they have upgraded it very recently.)
Following our recent reorg of amanda configs, I've considered moving
some data from index, curinfo and possibly the log of one config to the
datadirs of another. The object would be to make amrecover think that
certain tapes were written using this other config, although they were
really
Is anyone here using the includefile directive in their config? How
exactly does it work? Does it apply to all config files, or just
amanda.conf? What can the file contain - full config info, or just
whatever is not set in the file including it? If I have two configs, can
I have one
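(What I had in mind, in case it makes the question clearer; file names are
invented, and my assumption is that the included file is simply read in place,
with later lines in the including file overriding it:)

  # amanda.conf of one config
  includefile "/etc/amanda/common.conf"   # settings shared by both configs
  tpchanger "chg-manual"                  # per-config overrides follow the include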
Ah, I see. So it won't actually multiply tape size by runtapes when
trying to figure out how much it can write... I'm not sure the
functionality is of much use to me then, but perhaps I could cheat and
pretend the tapes are runtapes times larger than they really are?
Won't buy you
Paul Bijnens wrote:
Toralf Lund wrote:
Ah, I see. So it won't actually multiply tape size by runtapes
when trying to figure out how much it can write... I'm not sure
the functionality is of much use to me then, but perhaps I could
cheat and pretend the tapes are runtapes times larger than
On Thu, Oct 23, 2003 at 05:37:02PM +0200, Toralf Lund wrote:
I'm thinking about using more than one tape, i.e. set the runtapes
parameter to a value greater than 1, for my updated archival setup. Is there
anything special I need to keep in mind when doing this? Also, how do I
set up runspercycle in this case? Is it the total number of tapes
runspercycle * runtapes, or
Paul Bijnens wrote:
Toralf Lund wrote:
I'm thinking about using more than one tape, i.e. set the runtapes
parameter to a value greater than 1, for my updated archival setup. Is there
anything special I need to keep in mind when doing this? Also, how do
I set up runspercycle in this case? Is it the total
runspercycle does not need to be changed. runtapes means that
for each run up to that number of tapes may be used (note: not must).
You probably have to increase your tapecycle to cover the same
dumpcycle(s), because you'll burn twice as many tapes for each run.
(well, burn,
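(So, in config terms, something like the following; the numbers are
placeholders:)

  runtapes 2          # each run may now use up to 2 tapes
  runspercycle 5      # unchanged
  dumpcycle 1 week    # unchanged
  tapecycle 25        # roughly doubled, since a run can now eat two tapes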
As far as I can tell, amrecover won't work unless
1. The log file from the backup you are trying to recover is still present
2. The DLE is still in the disklist
Why? Shouldn't amrecover work from the index alone?
- Toralf
It looks like Amanda will count *all* tapes written after the tape in
question, even the ones marked as no-reuse, when comparing count with
tapecycle to determine if a tape may be overwritten. Is this observation
correct? Should no-reuse tapes be included like that?
--
Toralf
Reviewing the amanda config again in conjunction with a tape format
update...
I forget (and list search doesn't return anything conclusive): How do
you people recommend handling archival runs? There are two obvious ways:
1. Via a special config.
2. Using the normal config, but special
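(For option 1, the bits I'd expect the special config to change; everything
here is illustrative and the dumptype names are invented:)

  # separate "archive" config, sharing the disklist with the normal one
  dumpcycle 0                 # every run is a full dump
  define dumptype archive-tar {
      comp-user-tar           # hypothetical everyday dumptype to inherit from
      record no               # don't disturb the dump history the normal config uses
      strategy noinc          # never fall back to an incremental
  }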
Eric Siegerman wrote:
On Mon, Jul 14, 2003 at 11:07:10AM +0200, Toralf Lund wrote:
I've been getting a lot of
*** A TAPE ERROR OCCURRED: [[writing file: I/O error]].
On Mon, Jul 14, 2003 at 01:44:26PM +0200, Toralf Lund wrote:
Note that I've now gone back to amanda-2.4.4
On Thursday 14 August 2003 03:21, Toralf Lund wrote:
On Wednesday 13 August 2003 02:55, Toralf Lund wrote:
I suddenly realised that I have a lot of dump directories on my
holding disk, even though dumps have generally been successful.
The below amflush output should illustrate this.
-sh
On Wednesday 13 August 2003 02:55, Toralf Lund wrote:
I suddenly realised that I have a lot of dump directories on my
holding disk, even though dumps have generally been successful. The
below amflush output should illustrate this.
-sh-2.05b$ /usr/sbin/amflush ks
Scanning /dumps/amanda/hd
On Tue, Aug 12, 2003 at 04:08:47PM -0400, LaValley, Brian E wrote:
Can I list an nfs mounted disk in the disklist file?
I only ask because I am having trouble compiling for Solaris 8.
The disklist's contents don't affect compiling one way or the
other. What's the specific problem you're
I've mentioned this earlier, but not a lot came out of it:
I've been getting a lot of
*** A TAPE ERROR OCCURRED: [[writing file: I/O error]].
lately. This does not, however, happen all the time, and not for specific
tapes, either. Also, I can't find any error messages related to the tape
related to the holding disk handling or taping
of images has changed since 2.4.4.
- Toralf
--
Martin Hepworth
Senior Systems Administrator
Solid State Logic Ltd
+44 (0)1865 842300
Toralf Lund wrote:
I've mentioned this earlier, but not a lot came out of it:
I've been getting a lot
The addition of the autoflush option in 2.4.3 was really very helpful, but
I'm still not satisfied. What I really want is to autoflush when one or
two smallish DLE dumps are left on the holding disk, but not if the entire
taper operation failed due to tape error or something.
Comments?
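(As far as I can tell the choice today is just on or off, i.e.:)

  # amanda.conf
  autoflush yes   # anything left on the holding disk is taped at the start of the next run
  # there's no way, that I know of, to make this depend on *why* the images are
  # still there; the fallback is to look at the holding disk and run amflush by hand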
--
-
What is the right and proper way to unschedule the dump of a DLE? I
thought the answer would be amadmin delete, then remove DLE from
disklist, but it seems to me that this will prevent me from amrecover'ing
the DLE from existing backups, which is something I want to be able to do.
Notice that
Just got
*** A TAPE ERROR OCCURRED: [[writing file: I/O error]].
during a backup run. - Must be something wrong with the tape or tape
drive, I thought, but it turns out that
1. I get this error for various different tapes when trying to amflush the
dump to them.
2. I can write other dumps to
,
perhaps...
Toralf Lund wrote:
Just got
*** A TAPE ERROR OCCURRED: [[writing file: I/O error]].
during a backup run. - Must be something wrong with the tape or tape
drive, I thought, but it turns out
Hi there,
I've got a problem with amrecover on a Debian GNU/Linux machine.
amrecover reports:
No index records for disk for specified date
If date correct, notify system administrator
In the debug files in /tmp I can't find any other information. The debug
file is there but the information is the same
where all slots point to file:some directory. The
disklist is shared between the configs. The question is simply, can they
share the index as well? Will everything work all right if I simply
specify the same indexdir for both configs?
--
Toralf Lund [EMAIL PROTECTED] +47 66 85 51 22
ProCaptura
on the backup?
--
Toralf Lund [EMAIL PROTECTED] +47 66 85 51 22
ProCaptura AS +47 66 85 51 00 (switchboard)
http://www.procaptura.com/~toralf +47 66 85 51 01 (fax)
-----Original Message-----
From: Valeria Cavallini [mailto:[EMAIL PROTECTED]
Sent: Thursday, 3 April 2003 10:59
To: [EMAIL PROTECTED]
Subject: backup with exclude
Hi,
I've read some threads on the hacker newsgroup and I've found that someone
talks about the exclude option to exclude more than
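(I believe the form meant is the dumptype exclude directive, which can take
several patterns; paths and names below are made up:)

  define dumptype tar-with-excludes {
      comp-user-tar                  # hypothetical base dumptype
      exclude file "./tmp/*"         # a pattern given directly
      exclude file append "./core"   # further patterns can be appended
      # or: exclude list "/etc/amanda/ks/exclude.fileserv"   (one pattern per line)
  }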
On 2003.04.03 16:54, Jon LaBadie wrote:
On Thu, Apr 03, 2003 at 09:51:25AM +0200, Toralf Lund wrote:
As I've indicated earlier, I want to write full backups to tape, but keep
some or all of the incrementals on the hard disk, so I've set up two
different configs; one with skip-incr that will write
[ ... ]
What's the output of 'amadmin ks find mercedes-benz /usr/people/jfo'?
Trying this helped me figure out what was wrong ;-) The command would
list the expected dates and tape names when executed as root, but as
amanda, I got No dump to list, which made it quite obvious that the
On Tue, Feb 11, 2003 at 05:31:04PM +0100, Toralf Lund wrote:
I'm getting error message
No index records for disk for specified date
when trying to recover a certain DLE using amrecover (version 2.4.3.) The
full output from the session + some of the debug messages are included
below
time Tue Feb 11 17:25:35 2003
I seem to remember that something like this has been discussed before, but
I couldn't find anything in the archives ;-/
Anyhow, I'm thinking about setting up a config with full backups to tape
and incrementals to harddisk - due to limited tape capacity (yes, I know
incrementals are usually
and 2.4.2p2 clients
, anyone?
I've had auth problems with one of the hosts I'm backing up for some
time; amcheck says
# amcheck -c ks
Amanda Backup Client Hosts Check
ERROR: bmw: [access as amanda not allowed from root@server] amandahostsauth failed
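(For the archives, the usual cause: the client's .amandahosts doesn't list the
user the server connects as. The server name below is a placeholder:)

  # ~amanda/.amandahosts on the client, one "host user" pair per line
  server.example.com  amanda
  # the error above came in as root@server, so either run amcheck as the amanda
  # user on the server, or (less clean) also allow:
  #   server.example.com  root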
I found out what was going on after all;
amrecover will in my setup make a completely wrong guess about what disk
to consider at startup in most cases. The example output included below
should illustrate the problem; /usr/freeware/apache is not on the /u
filesystem, and it has a separate disklist entry. Any ideas what is going on?
...
- /usr/freeware/apache
amrecover
--
Toralf Lund [EMAIL PROTECTED]
At 10:56 AM 10/14/2002 +0200, Toralf Lund wrote:
With tar, and some sort of a guarantee that no individual file will
exceed the tape capacity, this can be done by breaking the disklist
entries up into subdirs,
Yes, that's what I'm doing. The problem with this is that something
easily gets left
. Is this a correct assumption? Why does it happen? Is there a way
around it (obviously, I can change the hostname in disklist, but apart
from that)?
--
Toralf Lund [EMAIL PROTECTED] +47 66 85 51 22
Kongsberg Scanners AS +47 66 85 51 00 (switchboard)
http://www.kscanners.no/~toralf
Hi,
your assumption about the hostnames is correct.
What you are doing is one of the big NO!NO!'s of amanda.
Just like I suspected ;-/
Never list a single host in multiple concurrent amanda runs, or,
in your case, with different names in a single amanda configuration.
amandad can only handle
On Monday 14 October 2002 04:56, Toralf Lund wrote:
[...]
Yes, that's what I'm doing. The problem with this is that
something easily gets left out as new directories are created.
How about
1. Allowing wildcards in the disklist file
2. Having some kind of auto expansion mode, where
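(A rough sketch of the auto-expansion idea as a cron job, regenerating the
per-subdirectory DLEs from what is actually on disk; host, paths and dumptype
are only examples:)

  #!/bin/sh
  # rebuild the sub-DLEs for fileserv:/scanner before each amdump run
  HOST=fileserv
  TOP=/scanner
  OUT=/etc/amanda/ks/disklist.scanner
  : > "$OUT"
  for d in "$TOP"/*/ ; do
      echo "$HOST ${d%/} comp-user-tar" >> "$OUT"
  done
  # the real disklist is then assembled by concatenating this file with the
  # hand-maintained entries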
On Mon, Oct 14, 2002 at 10:56:07AM +0200, Toralf Lund wrote:
With tar,
... , this can be done by breaking the disklist
entries up into subdirs,
Yes, that's what I'm doing. The problem with this is that something
easily gets left out as new directories are created
On Fri, Oct 11, 2002 at 09:15:34AM +0200, Toralf Lund wrote:
On Thu, Oct 10, 2002 at 01:38:18PM +0200, Toralf Lund wrote:
Forgot to mention this earlier: I'm not using incrementals at all. Tapes
from the same week will contain full backups of different directories, and
a given