I am now just testing tar, without Amanda, and I get the following
results (just following the GNU doc):
charles@fiume:~ tar --create --file=/tmp/wine_docs_0.tar --listed-incremental=/tmp/wine_docs.snar wine_docs/
charles@fiume:~ ll -tr /tmp
..
-rw-r--r-- 1 charles users 379944960 Feb 25 10:27
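For completeness, the full sequence I am following (a minimal sketch based on
the GNU tar manual, using my test paths; nothing Amanda-specific here):

  # level 0: create the snapshot file and the full archive
  tar --create --file=/tmp/wine_docs_0.tar \
      --listed-incremental=/tmp/wine_docs.snar wine_docs/

  # keep a copy of the level 0 snapshot so the level 1 run can be repeated
  cp /tmp/wine_docs.snar /tmp/wine_docs_1.snar

  # level 1: only files changed since level 0 should end up in the archive
  tar --create --file=/tmp/wine_docs_1.tar \
      --listed-incremental=/tmp/wine_docs_1.snar wine_docs/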
Greetings;
Three backups ago, with no change to the amanda.conf in months, I awoke
to a hung tar task using 100% of a core, more than 5 hours after it
should have completed.
It is in that state now. How can I find what is causing this blockage?
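Is something along these lines the sensible way to look at it? These are just
the standard tools I know of; <PID> stands for the stuck tar's process id:

  ps -eo pid,etime,pcpu,args | grep '[t]ar'   # locate the stuck tar and its PID
  strace -f -p <PID>                          # is it looping on some syscall?
  ls -l /proc/<PID>/fd                        # which files/pipes does it hold open?
  cat /proc/<PID>/wchan; echo                 # kernel wait channel, if it is sleeping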
Here is the report from yesterday's attempt,
Charles,
Can you post the output of the 'stat' command for one of the files
before the level 1 backup and after the level 1 backup, to see what
changed?
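Something like this, for example (just a sketch; 'somefile' stands for any
file in the wine_docs DLE):

  stat wine_docs/somefile     # before the level 1 run: note mtime, ctime, atime
  tar --create --file=/tmp/wine_docs_1.tar \
      --listed-incremental=/tmp/wine_docs.snar wine_docs/
  stat wine_docs/somefile     # after the level 1 run: compare the timestamps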
Jean-Louis
On 02/25/2014 04:53 AM, Charles Stroom wrote:
I am now just testing tar, without amanda and I get the following
results (just
Stefan,
amrestore can't restore if the dump is split across multiple files on
tape; is that the case for you?
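If it is split, then if I remember correctly amfetchdump reassembles the split
parts itself, so something along these lines might be worth a try (a sketch
only; the disk name is made up, use the real DLE name):

  amfetchdump daily hiro /some/dle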
jean-Louis
On 02/19/2014 05:10 AM, Stefan G. Weichinger wrote:
Am 18.02.2014 19:30, schrieb Stefan G. Weichinger:
$ amrestore --config daily /dev/nst0 hiro pigz
Restoring from tape daily04
Jean-Louis,
It's getting complicated, because in my previous post below I reported
that with the option --atime-preserve=system a level 1 tar incremental
worked. However, some time later it didn't work any more, so it is
really unpredictable.
I have repeated part of the steps below with some stat calls in between.
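For reference, the variant I was testing looks roughly like this (a sketch,
assuming the level 0 snapshot file is copied first so it stays intact):

  cp /tmp/wine_docs.snar /tmp/wine_docs_1.snar
  tar --create --file=/tmp/wine_docs_1.tar \
      --atime-preserve=system \
      --listed-incremental=/tmp/wine_docs_1.snar wine_docs/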
Amanda 3.3.4
Hi,
I've got a very slow amdump run going; it looks like it's because I set
part_cache_max_size too high and the server's memory is filled up.
What's the best way to abort this? I'm not worried about preserving
what's been written to tape so far. Do I just run 'amcleanup -k'? Or
manually
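To be concrete, this is what I had in mind (a sketch; 'daily' stands in for
the config name):

  amcleanup -k daily    # kill the running Amanda processes for this config, then clean up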
Amanda 3.3.4
Hi,
If amanda is using a memory cache for splits, is the cache shared
between simultaneous amdump runs, or does each one try to grab that
much memory? (Rough worst-case arithmetic after the settings below.)
I'm set up like this:
part_cache_type memory
part_cache_max_size 20G
and with
taper-parallel-write 2
and
inparallel 10
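I don't know the internals, but if each parallel taper write in each run holds
its own part cache, the worst case I can imagine is roughly:

  part_cache_max_size (20G) x taper-parallel-write (2) = 40G per amdump run
  40G x 2 simultaneous runs                             = 80G in total

which is what made me ask. If the cache is shared, the numbers obviously look
very different.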
As said, I changed to tar 1.27 and now the problem has disappeared. I
cleaned up everything that was necessary and started with a full backup
on all DLEs. There was a strange tar message on the single DLE at the
external client stremen (... /usr/local/bin/tar exited with status
2 ..), but as this was a