Re: moving to amanda 3.3.7p1

2015-02-26 Thread Toomas Aas

On Wed, 25 Feb 2015, Michael Stauffer mgsta...@gmail.com wrote:



Directories:
  Application: /usr/local/libexec/amanda/application
  Configuration: /usr/local/etc/amanda
  GNU Tar lists: /usr/local/var/amanda/gnutar-lists
  Perl modules (amperldir): /usr/local/share/perl5
  Template and example data files (amdatadir): /usr/local/share/amanda
  Temporary: /tmp/amanda
WARNINGS:
  no user specified (--with-user) -- using 'amanda'
  no group specified (--with-group) -- using 'backup'


These are different dir locations than amanda 3.3.4 which I've been using
until now. Is this because I've built from source, or is that a change affecting
RPMs too (not that it seems likely they'd be different, I figure)?


As you deduced, this is probably because of the different installation
method (source vs RPM). Whoever supplied the RPMs that you previously
used had built them with different configuration settings.




The default user and group is now amanda:backup instead of
amandabackup:disk. If I want to use this server with existing clients
running 3.3.4 is this ok? Or should I rebuild with user amandabackup:disk?
Or upgrade clients to 3.3.7?


The user and group used on the server do not need to be the same as on  
 clients. You can continue using the existing clients.


--
Toomas Aas



Re: Leaving dumps on the holding disk?

2015-01-25 Thread Toomas Aas

On Mon, 26 Jan 2015, Jason L Tibbitts III ti...@math.uh.edu wrote:


The usual restore operation I have to do is pulling things off of last
night's backup, which involves waiting for the library to load things
and such, and of course hoping that there's no problem with the tapes.
But there's a good chance that the backup image I need was just on the
holding disk, and if it hadn't been deleted then there would be no
reason to touch the tapes at all.  In fact, even with LTO6 tapes, I
should still be able to fit several tapes worth of backups on the
holding disk.

Is there any way to force amanda to delay deleting dumps until it
actually needs space on the holding disk?  Or, is there any particular
place I might start looking in order to hack this in somehow?


I don't know, but there is an alternative approach you could use. If  
your tapes are big enough to hold several days worth of dumps, you  
could delay writing dumps to tape until enough of them have gathered  
on the holding disk to fill a tape. That is what I am doing here.


Relevant parameters in my amanda.conf:

flush-threshold-dumped 100
flush-threshold-scheduled 100
taperflush 100
autoflush yes
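For anyone copying these, here is a commented version of the same settings
(my understanding of them; the thresholds are percentages of the tape length):

```
# Don't start the taper until dumps already on the holding disk
# amount to at least 100% of one tape's length...
flush-threshold-dumped 100
# ...and until dumped-plus-scheduled data also reaches a full tape.
flush-threshold-scheduled 100
# At the end of a run, leave dumps on the holding disk unless there is
# at least a full tape's worth of data to flush.
taperflush 100
# Pick up leftover dumps from previous runs during the next amdump.
autoflush yes
```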


--
Toomas Aas
Tartu linnakantselei arvutivõrgu peaspetsialist
tel 736 1274
mob 513 6493



Re: sw_vers not found on FreeBSD

2015-01-14 Thread Toomas Aas

To add an example to Eric's excellent summary:

 % /usr/bin/uname -rms
FreeBSD 10.1-RELEASE-p1 amd64


--
Toomas Aas
Tartu linnakantselei arvutivõrgu peaspetsialist
tel 736 1274
mob 513 6493



sw_vers not found on FreeBSD

2015-01-14 Thread Toomas Aas
I just installed a new Amanda server, version 3.3.6, on FreeBSD 10.1
amd64. There seems to be a cosmetic issue during selfcheck: the client
tries to execute something called /usr/bin/sw_vers, which does not
exist (snippet of selfcheck.debug attached). Selfcheck succeeds
despite this problem.


--
Toomas Aas
Tartu linnakantselei arvutivõrgu peaspetsialist
tel 736 1274
mob 513 6493

Wed Jan 14 13:31:35 2015: thd-0x803422e00: selfcheck: pid 94228 ruid 1002 euid 1002 version 3.3.6: start at Wed Jan 14 13:31:35 2015
Wed Jan 14 13:31:35 2015: thd-0x803422e00: selfcheck: version 3.3.6
Wed Jan 14 13:31:35 2015: thd-0x803422e00: selfcheck: Executing: /usr/bin/uname 
'-s'

Wed Jan 14 13:31:35 2015: thd-0x803422e00: selfcheck: Executing: 
/usr/bin/sw_vers '-productName'

Wed Jan 14 13:31:35 2015: thd-0x803422e00: selfcheck: critical (fatal): error 
[exec /usr/bin/sw_vers: No such file or directory]
Wed Jan 14 13:31:35 2015: thd-0x803422e00: selfcheck: Executing: 
/usr/bin/sw_vers '-productVersion'

Wed Jan 14 13:31:35 2015: thd-0x803422e00: selfcheck: critical (fatal): error 
[exec /usr/bin/sw_vers: No such file or directory]


Re: I cannot see what is wrong: I am getting 'tiny' chunks, why?

2014-06-09 Thread Toomas Aas

On Mon, 09 Jun 2014, Robert Heller hel...@deepsoft.com wrote:



Question: when using a disk-based 'virtual tape changer', is there any
point in using a holding disk? I understand that a holding disk is a
good idea when using real tapes, as this helps keep the tape drive
streaming -- that is, the holding disk smooths out the 'bumps' in the
actual backup process.


AFAIK, when writing directly to tape - even virtual tape - only one
client can be dumped at a time. When dumping to a holding disk,
several dumps can run in parallel.
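As a sketch, the amanda.conf pieces involved would be something like
(path, size and dumper count are illustrative):

```
inparallel 4              # allow up to 4 dumpers to run at once

holdingdisk hd1 {
    directory "/holding"  # where dumps are staged before taping
    use 200 gbytes        # how much of this disk Amanda may use
}
```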


--
Toomas Aas



Backing up ACLs on FreeBSD 10

2014-03-17 Thread Toomas Aas

Hello!

This first FreeBSD 10 server is also my first encounter with
filesystem ACLs. Like thousands of Amanda users before me, I discovered
that GNU tar doesn't back up ACLs, and star is often recommended as an
alternative that does. The only problem is that the star port doesn't
build on FreeBSD 10 amd64. More precisely, smake, which is required
to build star, does not build. It seems to have been this way for quite
a while and nobody has fixed the situation.


Before I spend too much time trying to fix it myself, I'd like to find  
out if there is perhaps an alternative solution that I'm missing. So,  
if anyone has a success story about backing up filesystem ACLs on  
FreeBSD 10 amd64 (with Amanda, of course), I'm all ears :)


Regards,
--
Toomas Aas



Re: Backup Strategy

2014-01-13 Thread Toomas Aas

On Sun, 12 Jan 2014 Gene Heskett wrote:



If separate configs, which I can't personally find an overpowering reason
for, you would most likely need two separate tape libraries each
containing its own drive(s), or 2 separate big hard drives.


Actually, two separate configs do not require separate tape libraries
or drives, just tapes labelled with a different labelstr.


By this I don't mean to imply that two separate configs are the best
solution for the OP.
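For example, the two configs could keep their volumes apart with
different labelstr patterns, something like (labels illustrative):

```
# amanda.conf of the first config:
labelstr "^DAILY-[0-9][0-9]$"

# amanda.conf of the second config:
labelstr "^ARCHIVE-[0-9][0-9]$"
```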


--
Toomas Aas



Re: Zmanda Windows Client ZIP files

2013-11-13 Thread Toomas Aas

Hello Markus!



1) I can use amfetchdump to get what appears to be a zip file:
# file staff15.zip
staff15.zip: Zip archive data, at least v2.0 to extract

2) The unzip command (which has ZIP64 support) does not recognize  
this as a proper file:




Amanda dump images have a special 32 kB Amanda header at the
beginning, followed by the actual backed-up data. This header
contains, among other things, human-readable information about how to
extract the dump. To extract the dump, this 32 kB header
should be stripped from the dumpfile and the remainder fed to the
appropriate extract tool. On Unix-like systems this is usually done with
dd. I don't have much experience with such things on Windows, but
something similar surely exists.
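On a Unix-like system the dd step looks like this (file names are
illustrative; a stand-in dump image is built here so the example is
self-contained, but with a real dump the input would come from amfetchdump):

```shell
# Build a stand-in dump image: 32 kB of header bytes followed by the
# payload. With a real dump, staff15.dump would come from amfetchdump.
head -c 32768 /dev/zero  > staff15.dump
printf 'zip archive bytes' >> staff15.dump

# Skip exactly one 32 kB block (the Amanda header); the remainder is
# the archive itself, ready for the extract tool.
dd if=staff15.dump of=staff15.zip bs=32k skip=1
```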


HTH,
--
Toomas Aas



Re: tape size question

2013-05-23 Thread Toomas Aas

Hello!

On Thu, 23 May 2013 McGraw, Robert P wrote:



I use hardware compression to max out the size of the tape. Uncompressed
the tape holds 800 GB; compressed, the theoretical max size is 1.6 TB. I
know that I will not get the full 1.6 TB.

In the amanda.conf file I tell Amanda that my tape's length is 1500
gbytes and that my tape drive uses LEOM.


For a real life data point, on my LTO4 tape drive with HW compression  
enabled I get 900 GB to 1.1 TB.
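The corresponding amanda.conf could look roughly like this (a sketch;
whether LEOM is declared this way depends on your Amanda version and
device driver, so check the manpage for yours):

```
define tapetype LTO-COMP {
    # stay safely below the 1.6 TB theoretical max with HW compression
    length 1500 gbytes
}
tapetype LTO-COMP

# advertise logical end-of-medium support so Amanda can end a volume
# cleanly instead of running into physical end-of-tape
device-property "LEOM" "true"
```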


--
Toomas Aas



Very slow amrecover across network

2012-12-13 Thread Toomas Aas

Server: Amanda 3.2.2 on FreeBSD 8.3
Client: Amanda 2.5.1p3 on FreeBSD 7.4
authentication: bsdtcp

When running amrecover on the client, the performance is very slow,  
ranging from 100 to 300 kB/s. Running amrecover directly on the server  
and restoring the same files, the performance is as expected (20-30  
MB/s).


These performance numbers are obtained from the vmstat screen of  
systat utility, observing the read performance of the tape drive (LTO2).


There don't seem to be any general network problems such as a duplex
conflict or heavy packet loss. Nightly amdumps from client to server
run at normal speed (20 MB/s), and scp'ing files from server to
client also works normally.


I looked at amandad.timestamp.debug on the server and  
amrecover.timestamp.debug on the client, but there are no obvious  
errors.


How should I troubleshoot this issue?

--
Toomas Aas
Tartu linnakantselei arvutivõrgu peaspetsialist
tel 736 1274
mob 513 6493



waiting for writing to tape

2012-11-25 Thread Toomas Aas

Hello!

This is something I've been wanting to ask for a long time.

I have an amdump run in progress right now. It needs to dump a couple
of hundred gigabytes spread across ca 10 DLEs, some of which are 100
MB, some over 50 GB. Amdump has been running for 6 hours. Amstatus
shows that the two smallest DLEs are 'finished', one ~20 GB DLE is
'dumping', and 5 DLEs totalling 110 GB are in state 'dump done, waiting
for writing to tape'.


Why doesn't Amanda write these 5 DLEs to tape while dumping the 6th
one? The tape drive isn't doing anything (amstatus shows 'Taper
status: idle'), and I can't see any obvious parameters in amanda.conf that
would control this. The current behaviour - first writing (almost) all
the DLEs to the holding disk and only then from holding disk to tape -
means that the amdump run takes much longer than if the
'waiting-for-writing-to-tape' DLEs were taped while the DLEs later in
the queue are still being dumped.


If it matters, this is Amanda 3.2.0 using vtapes with chg-disk.

--
Toomas Aas



Re: waiting for writing to tape

2012-11-25 Thread Toomas Aas

Hello!


Why doesn't Amanda write these 5 DLEs to tape while dumping the 6th one?


Answering my own questions seems to be this weekend's theme. At least
I think I'm on to something... I am not usually awake during amdump
runs, but now that I have time to observe one, I noticed that while I
wrote the previous e-mail Amanda finished dumping the 6th DLE and went on
to dump the next one, but in the meantime she also taped one small
DLE. So it seems that once a dumper is started, Amanda waits for it to
finish and only then checks whether there is another DLE for the taper to
tape. The current behaviour of the taper sitting idle much of the time
is thus caused by the combination of 'dumporder sssS' and 'taperalgo first'
that I currently have in my config.
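For reference, these are the two settings in question (values as in my
config; the amanda.conf manpage describes the alternatives):

```
# one letter per dumper: s = start with the smallest estimated dump,
# S = with the largest (t/T and b/B order by time and bandwidth instead)
dumporder "sssS"

# which finished dump the taper picks next; 'first' takes the head of
# the queue, while e.g. 'largest' or 'largestfit' choose by size
taperalgo first
```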


--
Toomas Aas



Re: All DLEs failed [can't dump in degraded mode]

2012-11-24 Thread Toomas Aas

Hello again!



mail.tarkvarastuudio.ee / lev 1  FAILED [can't dump in degraded mode]

I can't understand why Amanda goes to degraded mode.



OK, this is another of those cases where you can't find the answer on
Google until one minute *after* you post to the list. Turns out this is
a known bug in 3.2.0:

https://forums.zmanda.com/showthread.php?3168-lev-1-FAILED-can-t-dump-in-degraded-mode

I configured a temporary holdingdisk and the problem is solved. Sorry  
for the noise.


--
Toomas Aas



All DLEs failed [can't dump in degraded mode]

2012-11-23 Thread Toomas Aas

Hello!

The holding disk in my Amanda server suffered a hardware failure, so
Amanda temporarily has no holding disk. I removed the holdingdisk
definition from amanda.conf and expected dumps to go directly to
vtape. However, the amdump run reports that all DLEs failed, with the
message:


mail.tarkvarastuudio.ee / lev 1  FAILED [can't dump in degraded mode]

I can't understand why Amanda goes to degraded mode. The disk  
containing vtapes is definitely available. I can successfully run this  
command:


su backup -c 'touch /backup/slot2/testfile'

Also, amcheck shows that there are no problems.

This is the content of taper.debug file from latest amdump run:

Sat Nov 24 08:26:21 2012: taper: pid 39920 ruid 3951 euid 3951 version  
3.2.0: start at Sat Nov 24 08:26:21 2012
Sat Nov 24 08:26:21 2012: taper: pid 39920 ruid 3951 euid 3951 version  
3.2.0: rename at Sat Nov 24 08:26:21 2012
Sat Nov 24 08:26:21 2012: taper: Amanda::Taper::Scan::traditional  
stage 1: search for oldest reusable volume
Sat Nov 24 08:26:21 2012: taper: Amanda::Taper::Scan::traditional  
oldest reusable volume is 'TAPE32'
Sat Nov 24 08:26:21 2012: taper: Amanda::Taper::Scan::traditional  
stage 1: searching oldest reusable volume 'TAPE32'
Sat Nov 24 08:26:21 2012: taper: Amanda::Taper::Scan::traditional  
result: 'TAPE32' on file:/backup/drive1 slot 2, mode 2
Sat Nov 24 08:26:29 2012: taper: pid 39920 finish time Sat Nov 24  
08:26:29 2012


--
Toomas Aas



Re: Running amcheck during amdump

2012-10-11 Thread Toomas Aas

On Thu, 11 Oct 2012 Toomas Aas wrote:


On Wed, 10 Oct 2012, Charles Curley wrote:



I believe running amcheck during an amdump run is a harmless error.



It probably doesn't do any harm, but it turns out that in this case
the mail report is still sent, containing these warnings:


WARNING: skipping tape test because amdump or amflush seem to be running
WARNING: if they are not, you must run amcleanup


Which leads me to the next question - what is the best way to detect, in a
script, that amdump or amflush is running? Off the top of my head I
can only think of something like 'ps | grep amdump', but that doesn't
seem very elegant. I tried to read amcheck.c to find out how Amanda
herself does it, but IANAP...
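One possibility, under the assumption that amdump/amflush keep the active
run's log as <logdir>/log and rename it to log.<timestamp>.<n> when they
finish (which is what amcheck's warning appears to key on - worth
verifying against your version):

```shell
# Succeeds when an Amanda run appears to be in progress, judged by the
# presence of the active log file in the given log directory.
amanda_running() {
    logdir=$1
    [ -f "$logdir/log" ]
}

# Usage (path illustrative - use the logdir from your amanda.conf):
if amanda_running /usr/local/var/amanda/BACKUP; then
    echo "amdump or amflush seems to be running"
fi
```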


--
Toomas Aas



Re: Running amcheck during amdump

2012-10-10 Thread Toomas Aas

On Wed, 10 Oct 2012, Charles Curley wrote:



I believe running amcheck during an amdump run is a harmless error.



It probably doesn't do any harm, but it turns out that in this case
the mail report is still sent, containing these warnings:


WARNING: skipping tape test because amdump or amflush seem to be running
WARNING: if they are not, you must run amcleanup

--
Toomas Aas



Running amcheck during amdump

2012-10-09 Thread Toomas Aas
I'm thinking about setting up an 'amcheck -m' run for early morning.
This is a remote site doing backups to an external USB hard disk, and
they tend to forget to replace the disk once a week, so I'd like to
have the FIX BEFORE RUN IF POSSIBLE e-mail waiting for them first
thing in the morning. However, in case the disk has actually been
replaced on time, the early morning amcheck run might coincide with an
amdump in progress. Can this have any bad effects?
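Concretely, the plan is a crontab entry for the Amanda user along these
lines (times, path and config name are illustrative):

```
# mail-only check early in the morning; amdump itself runs from a
# separate late-evening entry
30 6 * * * /usr/local/sbin/amcheck -m DailySet
```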


--
Toomas Aas



Re: Running amcheck during amdump

2012-10-09 Thread Toomas Aas

On Wed, 10 Oct 2012 Charles Curley wrote:


However, I suspect that swapping out the vtape drive will cause
problems for amanda. For example, if you try a restore but the file to
be restored is on the swapped out disk, amanda will likely get
confused. It will be looking for a vtape that isn't there.


Actually, this setup (except the 'amcheck -m' part) has been in use
for years and swapping disks has not caused any problems (other than
remote operators occasionally forgetting to change the disk on time
and dumps going to the holding disk). Restores are rare enough that I don't
remember whether restoring from a vtape that is on the 'other' disk has
been tried, but I don't see why it would cause anything more serious
than Amanda reporting 'tape not found', followed by a facepalm
from the operator and connecting the right disk. Mounting the disk for
a restore needs to be done manually anyway - we normally mount the disk
only at the start of the amdump run and umount it immediately afterwards.


Dumping to a stationary disk and rsyncing to swappable disks is
definitely more advanced and makes for smoother operation, but for our
small setup I think the current system is not that bad. Besides, even
when rsyncing to swappable disks, someone still has to swap the
disks, and rsync does not, AFAIK, have an 'rsynccheck' that could yell
at the operator the way amcheck does ;)


--
Toomas Aas



Report complaining about host which is not in disklist

2012-09-26 Thread Toomas Aas

Hello!

This may be something that is fixed in the most recent version (I'm
running 3.2.2), but I still decided to mention it, since initially I
was quite baffled by the error message.


I recently added a new client to my Amanda setup. I had two DLEs from  
this client in disklist, and it was dumped successfully for a couple  
of days. Then I decided to temporarily stop dumping this client, and  
commented out these entries in disklist.


After that, Amanda sends me mail every morning about RESULTS MISSING  
for *one* of these two DLEs:


FAILURE DUMP SUMMARY:
  hostname.mydomain.tld /var RESULTS MISSING

I triple-checked that the DLE is indeed commented out in disklist.  
Running 'amcheck -c' gave no errors.


This seems to be caused by running with the 'flush-threshold-dumped'
and/or 'flush-threshold-scheduled' parameters to gather multiple days'
worth of dumps on the holding disk and then flush them to a large tape. I
discovered that one dump for this 'MISSING' DLE is still on the holding
disk, whereas there are no dumps of the other DLE on the holding disk.


The logfile in logdir/log.timestamp.0 has this line:
DISK planner hostname.mydomain.tld /var

This is pretty much all the traces of this commented-out host I could  
find on the server.


--
Toomas Aas


Re: backups level 1 unrecoverable without level 0 on a big DLE?

2012-06-20 Thread Toomas Aas

Hello Joel!


Wow!
This is really a big surprise to me.
Can I conclude that I can't guarantee several days of backups, but just
the last one (or at most the last 2)?


You can certainly guarantee several days of backups, not only the last
1 or 2. Exactly how many days depends on how you have set your
dumpcycle, tapecycle and runspercycle. As I already wrote, you need to
set these according to your recovery needs.


For example, consider a dumpcycle of 7 days, runspercycle of 5 and
tapecycle of 20 tapes. In the worst case, the 4 oldest incremental
backups of some DLE will be based on a full backup that no longer
exists. Even then, the oldest level 0 is 22 days (16 tapes) old and you
have a total of 4 full backups of this DLE.


It is true that you won't have full backups of every DLE from 28 days
(20 tapes) ago. I can't see how that would be possible with Amanda's
scheduling, unless you configure Amanda to do full backups only. Most
of us don't have enough resources for that...


--
Toomas Aas



Re: DLEs stuck in 'force' state

2012-05-25 Thread Toomas Aas

Hello Jean-Louis!

What are the first two lines of their info files,
.../curinfo/kuller.raad.tartu.ee/_storage/info and
.../curinfo/kuller.raad.tartu.ee/_storage_lists/info?

A command of 0 means there is no force command.


Both files begin with these two lines:

version: 0
command: 0

So it appears there should not be a force command.

On the other hand, I looked at tonight's amdump.1 file in the log  
directory, and for the /storage/lists DLE it says:


planner: time 0.163: setting up estimates for  
kuller.raad.tartu.ee:/storage/lists

kuller.raad.tartu.ee:/storage/lists overdue 7 days for level 0
setup_estimate: kuller.raad.tartu.ee:/storage/lists: command 1,
options: none  last_level -1  next_level0 -7  level_days 0
getting estimates 0 (-3) -1 (-3) -1 (-3)

planner: time 0.170: setting up estimates took 0.068 secs

It says command 1, so this means command *was* set to 1 in the info
file and a full dump was made?


I'm not sure where the 'overdue 7 days for level 0' is coming from,
considering that Amanda has been doing level 0 backups of this DLE for
the past several days.




Are you sure you do not set the force command somewhere else?



I can't think of any place where I would set it. I imagine it would be  
possible to do it with a cronjob, but I haven't set up anything like  
this.


The next scheduled amdump run is tonight at 22:15. I've set up an 'at'
job to make a copy of the info file at 22:14, to see if command is still
set to 0.


--
Toomas Aas



DLEs stuck in 'force' state

2012-05-24 Thread Toomas Aas

Hello!

While exploring the recent 'GNU tar --one-file-system flag seems to
have no effect' issue, I forced some DLEs with 'amadmin force'. My
understanding of 'force' has been that it forces the DLE to do a full
backup on the next run and then returns to the normal planner behaviour.


However, my forced DLEs are now doing full backup every night. In the  
e-mail report I always have these lines:


NOTES:
  planner: Forcing full dump of kuller.raad.tartu.ee:/storage as directed.
  planner: Forcing full dump of kuller.raad.tartu.ee:/storage/lists  
as directed.


I tried to unforce these DLEs, but unsuccessfully:

# su backup -c 'amadmin BACKUP unforce kuller.raad.tartu.ee /storage'
amadmin: no force command outstanding for
kuller.raad.tartu.ee:/storage, unchanged.
amadmin: no force command outstanding for
kuller.raad.tartu.ee:/storage/lists, unchanged.

# su backup -c 'amadmin BACKUP unforce kuller.raad.tartu.ee /storage/lists'
amadmin: no force command outstanding for
kuller.raad.tartu.ee:/storage/lists, unchanged.


How do I get these DLEs out of the 'force' state?

BTW, my Amanda server version is 3.2.2

--
Toomas Aas



Re: GNU tar --one-file-system flag seems to have no effect

2012-05-19 Thread Toomas Aas

Hello all,

Just to keep the number of messages down, I'll respond to several
questions at once.


Robert wrote:


Is tar on the OP system actually GNUTar? On *Linux* systems it generally
is, but I am not certain about *commercial* UNIX systems...


It is true that the 'standard' tar on FreeBSD is BSD tar, not GNU tar, but
GNU tar is always installed together with Amanda (as
/usr/local/bin/gtar) and Amanda is configured to use it - the
runtar.*.debug files show that /usr/local/bin/gtar is being launched.

$ /usr/local/bin/gtar --version
tar (GNU tar) 1.22
Copyright (C) 2009 Free Software Foundation, Inc.


Does the disk spec in the disklist file contain wildcards? 'tar ...
--one-file-system /storage/*' will back up both disks (and no others
mounted under them).


Nope, there are no wildcards in the disklist

Nathan wrote:


 $ stat -f '%N %d' /storage /storage/some-file-not-under-lists
 $ stat -f '%N %d' /storage/lists /storage/lists/some-filename



Does that in fact show a different device number for the two
different commands (but the same device number for the two files in each
command)?


Yes, it does:

# stat -f '%N %d' /storage /storage/dumps/20120516/test.sql.gz
/storage 111
/storage/dumps/20120516/test.sql.gz 111

# stat -f '%N %d' /storage/lists /storage/lists/bbb
/storage/lists 125
/storage/lists/bbb 125


Do you also back up / on this system, by any chance?  If so, that would
seem to show that --one-file-system works some of the time.


Good point that I completely missed! This machine has been in backup
for years, and the disklist includes / with the comp-root-tar dumptype. I'm
sure I would have noticed if it had suddenly backed up the
entire machine, not just the 500 MB that are actually in /. None of
the relevant dumptypes have been changed from their defaults in
amanda.conf.


Jean-Louis wrote:


It might be a feature/bug with level 1 backup, try a full of /storage.


I did, and the dump of /storage did not include files under
/storage/lists. So it *might* be a bug with level 1 backups, but in
that case it's strange that it has never affected the backup of /, as
indicated by Nathan above.


--
Toomas Aas



Re: GNU tar --one-file-system flag seems to have no effect

2012-05-17 Thread Toomas Aas

On Thu, 17 May 2012, Christopher X. Candreva ch...@westnet.com wrote:


One file system means just that. Unless /storage/lists is a separate
partition mounted at that point, /storage is one file system, as you say
above.  You need the explicit exclude.


That was exactly my point. Until the day before yesterday, /storage/lists
was just a subdirectory on the /storage filesystem. Then I created a new
filesystem, moved the contents of /storage/lists to it,
and mounted it under /storage/lists. So it is a separate filesystem,
but it was still backed up as part of /storage.


$ df
Filesystem           1K-blocks     Used    Avail Capacity  Mounted on
/dev/da0s1a             507630   254546   212474    55%    /
devfs                        1        1        0   100%    /dev
/dev/md0                 31470     7764    21190    27%    /phpramdisk
/dev/da0s1f            8119342  4804070  2665726    64%    /usr
/dev/da0s1e            4058062  2732312  1001106    73%    /var
/dev/da0s1d             126702    22014    94552    19%    /var/tmp
/dev/da0s2.journal    54576856 17684592 32526116    35%    /storage
devfs                        1        1        0   100%    /var/named/dev
/dev/da1s1a           34425972  9515844 22156052    30%    /storage/lists

--
Toomas Aas



Re: GNU tar --one-file-system flag seems to have no effect

2012-05-17 Thread Toomas Aas

Hello Nathan!



What exactly happened that convinced you the contents of /storage/lists
was still getting included in the dump for /storage ?



First I noticed in the e-mail report that the level 1 backup of /storage
was just a little bit bigger than the level 0 backup of /storage/lists
(9703 vs 9120 MB). Then I looked at the index file of the /storage DLE on
the server for that day's amdump run, and noticed that all files in the
/storage/lists directory were listed there.


--
Toomas Aas



Re: GNU tar --one-file-system flag seems to have no effect

2012-05-17 Thread Toomas Aas

Hello Jon!



Grasping at straws: any chance you also left the original files
in the 'lists' directory after copying them to the new partition?

After mounting the new partition on /storage/lists they would be
masked and would not be accessible using file system semantics.
But dump-like programs could still see, and back up, the masked files.
Tar should be using file system semantics, but like I said, grasping.


I just double-checked and I'm pretty sure I did delete the original  
files from the directory. When I umount /storage/lists, the directory  
appears empty.


--
Toomas Aas



GNU tar --one-file-system flag seems to have no effect

2012-05-16 Thread Toomas Aas
I had one big filesystem mounted on /storage which was backed up by a
DLE using the comp-user-tar dumptype. Yesterday I split one
subdirectory, /storage/lists, into a separate partition and added
another DLE for /storage/lists, also using the comp-user-tar dumptype.


Tonight's amdump run seems to have backed up the contents of
/storage/lists twice: once as the /storage/lists DLE and once as part of
the /storage DLE. I thought that the '--one-file-system' flag passed by
Amanda to gtar is supposed to prevent that, but in my case it doesn't
seem to have any effect.


It's easy enough to just add an appropriate exclude statement to the
/storage DLE, but I'm curious whether someone else has seen such a
thing, and what the cause might be. It can't be very common, otherwise
I would remember someone else reporting it.


Relevant software versions:
Client: amanda-client-2.6.1p2, gtar-1.22, FreeBSD 7.4
Server: amanda-server-3.2.2, FreeBSD 8.2

Contents of relevant runtar.debug file for DLE /storage:

1337197794.412932: runtar: pid 34403 ruid 1010 euid 0 version 2.6.1p2:  
start at Wed May 16 22:49:54 2012

1337197794.413066: runtar: version 2.6.1p2
1337197794.437903: runtar: /usr/local/bin/gtar version: tar (GNU tar) 1.22
1337197794.438480: runtar: config: BACKUP
1337197794.439032: runtar: pid 34403 ruid 0 euid 0 version 2.6.1p2:  
rename at Wed May 16 22:49:54 2012
1337197794.439334: runtar: running: /usr/local/bin/gtar --create  
--file - --directory /storage --one-file-system --listed-incremental  
/var/amanda/client/gnutar-lists/kuller.raad.tartu.ee_storage_1.new  
--sparse --ignore-failed-read --totals .

1337197794.439383: runtar: pid 34403 finish time Wed May 16 22:49:54 2012

--
Toomas Aas



Re: Failed tapes

2012-05-03 Thread Toomas Aas

On Tue, 01 May 2012, Paul Crittenden paul.critten...@simpson.edu wrote:


What is the proper way of removing a failed tape from the tape rotation?


amrmtape YourConfig TapeLabel

--
Toomas Aas



Re: Splitting disk list entries using regular expressions

2012-04-19 Thread Toomas Aas

On Thu, 19 Apr 2012, Alan Orth alan.o...@gmail.com wrote:


/bin/tar: ./homes/[a-d]*: Warning: Cannot stat: No such file or directory


What, exactly, does your entry in the disklist look like?

--
Toomas Aas



Re: Splitting disk list entries using regular expressions

2012-04-19 Thread Toomas Aas

Hello Alan!


The disk list in question looks like this:

localhost /export_homes1 /export {
    user-tar
    include ./homes/[a-d]*
}


From the amanda.conf manpage:

    All include expressions are expanded by Amanda, concatenated in one
    file and passed to GNU-tar as a --files-from argument. They must
    start with ./ and contain no other /.

It seems that Amanda doesn't handle includes in subdirectories, but
only in the top-level directory of the DLE. Maybe you can get the
results that you want by modifying the DLE like this:


localhost /export_homes1 /export/homes {
  user-tar
  include ./[a-d]*
}

--
Toomas Aas



Re: How to keep 1 year backup data? or 10 year backup data?

2012-01-18 Thread Toomas Aas

On Wed, 18 Jan 2012, Horacio hsan...@gmail.com wrote:


In my configuration file I have

tapecycle 30
dumpcycle 7 days
runspercycle 7

storing on 30 virtual disks of 10GB each and running amdump every day in
the morning.


As I understand it, this will make one full backup a week and 6 incremental
backups on the other days of the week. This is OK, but using amrecover
history (see below) I can see it only keeps 29 days of backup data.


I counted 30 backups in the output you posted.



If I want one year of backup data do I need to create 365 virtual disks
and set tapecycle to 365?? What if I need to keep 10 years of backup
data due to client requirements?


If you need to be able to restore your data to the state it was in
on any given day within the past year, then yes, you need to keep a backup
for each day. That makes 365 backups per year, or approximately 3650
backups for 10 years (keep the leap years in mind).


I myself have two Amanda configurations - one that runs on every
weekday and keeps backups for ca 6 months, and another that runs a full
backup of everything twice a year and is kept forever.


--
Toomas Aas



amrecover - can't talk to tape server: service amidxtaped:

2011-06-16 Thread Toomas Aas

Hello!

I'm installing a new Amanda server on FreeBSD 8.2, using FreeBSD ports  
which currently provide Amanda 3.2.2. I have successfully made the  
first backup, but restore fails:

--
amrecover add dnetc-freebsd8-amd64.tar.gz
Added file /home/toomas/dnetc-freebsd8-amd64.tar.gz
amrecover extract

Extracting files from holding disk on host pegasus.raad.tartu.ee.
The following files are needed:  
/holding/20110615162722/pegasus.raad.tartu.ee._usr.0


Extracting from file  /holding/20110615162722/pegasus.raad.tartu.ee._usr.0
amrecover - can't talk to tape server: service amidxtaped:
--

And the file is not extracted. The only errors I could find are in
amandad.debug, and they are actually Perl warnings:

Thu Jun 16 16:04:36 2011: amandad: ERROR service amidxtaped:
Thu Jun 16 16:04:36 2011: amandad: ERROR service amidxtaped: **  
(process:5181): WARNING **: Use of qw(...) as parentheses is  
deprecated at  
/usr/local/lib/perl5/site_perl/5.14.0/Amanda/Recovery/Clerk.pm line 231.

Thu Jun 16 16:04:36 2011: amandad: ERROR service amidxtaped:
Thu Jun 16 16:04:36 2011: amandad: ERROR service amidxtaped:

Such warnings are reported multiple times about various locations in  
Clerk.pm and Planner.pm:


Clerk.pm line 231
Clerk.pm line 265
Planner.pm line 231
Planner.pm line 342
Planner.pm line 393

I modified all these occurrences, adding parentheses around
the qw() construct, for example:

original:

 for my $rq_param qw(dump xfer_src_cb) {

new:

 for my $rq_param (qw(dump xfer_src_cb)) {

After that, I could run recovery successfully.

As I am not a Perl programmer and don't really know what I actually did,
I wonder if this is a safe workaround?


--
Toomas Aas


Re: Multiple tape drive access

2011-04-26 Thread Toomas Aas

Mon, 25 Apr 2011, purpleshadow livedoor purplesha...@livedoor.com wrote:


If there is a tape drive on a FC-SAN system, there
can be multiple servers that can access the tape drive.

In this case, if multiple Amanda servers access the
tape drive simultaneously, what will happen?


I have no experience with tape drives on FC SAN, but I do have some  
experience with FC SAN in general, so I'm going to take an educated  
guess. My guess is that multiple hosts should not be allowed to access  
the tape drive. SAN zoning or some similar solution should be  
configured so that only one host can see the tape drive. Otherwise  
multiple, possibly contradictory, SCSI commands arriving from multiple  
hosts will completely confuse the drive, and as a result no host can  
really use it.


This is not an Amanda issue, but rather a lower level issue with SCSI  
subsystem on the hosts.


--
Toomas Aas



Re: amanda and lvm

2011-02-24 Thread Toomas Aas

Tue, 22 Feb 2011, Joe Konecny jkone...@rmtohio.com wrote:

I am setting up a new system using Ubuntu Server 10.10. I have one 4TB  
RAID 1+0 array consisting of four 2TB drives, which will be the main OS  
drive, and one 2TB RAID 0 array which will be the holding disk. Should I  
set this server up using LVM and create one 6TB volume, or should I  
create two LVM volumes (one 4TB and one 2TB)?


If I were you, I would definitely create two separate volumes. The  
intended workloads, as well as underlying RAID setup, are rather  
different. With two volumes, if one disk in your RAID0 dies, you lose  
only the holdingdisk, which is probably survivable. With one volume,  
you stand a good chance of losing the entire server which is not so  
good.


--
Toomas Aas



Re: Multiple level 0 backups: why does Amanda do that and how to make her stop doing it

2011-01-17 Thread Toomas Aas

Hello!


Amanda does a full backup of the /data on one day of the week and then
does level one backups for the rest of the week.  It does a level one
backup of the other 6 disks on the day it does the full of /data, and
then does full backups of the other 6 disks on the other 6 days.

Why does Amanda make so many full backups?


Probably Amanda sees that on the days when level 1 of /data is done  
there is plenty of tape space left over. In this case, Amanda may  
promote the smaller DLEs in order to better utilize the available  
tape space and enable faster recoveries (the fewer levels you  
have to go through during recovery, the faster you can recover).



Is there a configuration parameter we need to set to change this?


'maxpromoteday' is probably what you're looking for.
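For reference, it is a dumptype option; a minimal illustration follows (the dumptype name and the value are hypothetical, not a recommendation):

```
define dumptype no-early-promote {
    global
    comment "limit how early Amanda may promote a full dump"
    maxpromoteday 2    # never promote a full dump more than 2 days early
}
```

Setting it to 0 disables promotion for DLEs using that dumptype entirely.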

--
Toomas Aas



Re: Got no header and data from server / Can't locate object method

2010-12-15 Thread Toomas Aas

Tue, 14 Dec 2010, Jean-Louis Martineau martin...@zmanda.com wrote:


Try this one line patch, you can patch the installed Clerk.pm file


Thanks Jean-Louis, after applying this patch I could run amrecover
successfully. However, the other problem remains in that when I run  
amdump or amflush, taper fails with this message:


  taper: FATAL Can't locate object method new via package  
Amanda::Xfer::Dest::Taper::Splitter (perhaps you forgot to load  
Amanda::Xfer::Dest::Taper::Splitter?) at  
/usr/local/lib/perl5/site_perl/5.12.2/Amanda/Taper/Scribe.pm line 706.


Maybe you have an idea about that one too?

--
Toomas Aas



Re: Got no header and data from server / Can't locate object method

2010-12-15 Thread Toomas Aas

Hello Jean-Louis!

What's the output of: grep -i AMANDA_COMPONENTS  
/usr/local/lib/perl5/site_perl/5.12.2/Amanda/Constants.pm


$AMANDA_COMPONENTS = " client amrecover ndmp";



Add the same 'use Amanda::XFerServer;' in  
/usr/local/lib/perl5/site_perl/5.12.2/Amanda/Taper/Scribe.pm




Thanks a lot, looks like this helped. I started an amflush, and it  
seems to be running successfully. It's flushing the 4th DLE out of 12  
as I type this.


--
Toomas Aas



Re: Got no header and data from server / Can't locate object method

2010-12-15 Thread Toomas Aas

Wed, 15 Dec 2010, Jean-Louis Martineau martin...@zmanda.com wrote:


Toomas Aas wrote:



$AMANDA_COMPONENTS = " client amrecover ndmp";
That's the problem: it looks like the server code is installed, but  
it is not listed in that line.
I don't know how the FreeBSD ports collection works, but it was not  
compiled correctly.


Looks like this has already been fixed in the FreeBSD ports collection  
on December 3rd. I had built my new Amanda before that, so I missed  
out on that update.


Quoting the cvsweb:

- Amanda/Constants.pm is installed by amanda-client, and it includes
  the constant variable with client components only.  When amanda-server
  is installed, Constants.pm should also include server components.
- To work around it, use the pkg-install/pkg-deinstall script to tweak
  this variable by in-place updating.

Reported by:Christian Gusenbauer c...@gmx.at

--
Toomas Aas



No index records after upgrade

2010-12-13 Thread Toomas Aas

Hello!

I just finished a major system upgrade on a small one-machine setup  
(client=server). This involved upgrading the OS from FreeBSD 6.4 to  
8.1 and also upgrading or at least rebuilding all the installed  
applications. Amanda was upgraded from 2.5.1 to 3.2.0, and GNU tar  
from 1.15.1 to 1.23.


The directories that are used as indexdir and logdir were preserved.  
The Amanda configuration file was brought over from the old setup, and  
the following changes were made:


removed deprecated keywords: tapebufs, amrecover_do_fsf, amrecover_check_label

replaced the old chg-disk changer with the new chg-disk changer, defined like this:
tapedev ext-disk
define changer ext-disk {
tpchanger chg-disk:/backup
}

The user that Amanda runs as is the same as before (backup). The hostname  
is also the same, and name resolution in DNS is correct.


Amcheck passes with no problems.

However, I can't restore anything using amrecover:

amrecover BACKUP
AMRECOVER Version 3.2.0. Contacting server on mail.mydomain.ee ...
220 mail AMANDA index server (3.2.0) ready.
Setting restore date to today (2010-12-13)
200 Working date set to 2010-12-13.
200 Config set to BACKUP.
200 Dump host set to mail.mydomain.ee.
Use the setdisk command to choose dump disk to recover
amrecover setdisk /usr
200 Disk set to /usr.
500 No dumps available on or before date 2010-12-13
No index records for disk for specified date
If date correct, notify system administrator

The result is the same for any disk, not just /usr.

In my amanda.conf I have:
infofile /var/amanda/BACKUP/curinfo   # database DIRECTORY
logdir   /var/amanda/BACKUP   # log directory
indexdir /var/amanda/BACKUP/index # index directory

All the files seem to be there in /var/amanda and are owned by the  
backup user, but for reasons I don't understand Amanda isn't using  
them. I've looked at the amrecover.debug and amindexd.debug files, but  
nothing stands out.


This is not a critical problem, because I can recover manually using  
tar and gzip, and I'll overwrite all the vtapes in a couple of weeks  
anyway (hoping that I *can* recover the backups I'll be making from  
now on). But I still think it should work, and I would like to  
understand why it doesn't.


--
Toomas Aas



Got no header and data from server / Can't locate object method

2010-12-13 Thread Toomas Aas

Hello again!

Now that I got my tapelist problem sorted out, I still have a problem  
recovering backups.


Example session:

==
# amrecover BACKUP
AMRECOVER Version 3.2.0. Contacting server on mail.mydomain.ee ...
220 mail AMANDA index server (3.2.0) ready.
Setting restore date to today (2010-12-14)
200 Working date set to 2010-12-14.
200 Config set to BACKUP.
200 Dump host set to mail.mydomain.ee.
Use the setdisk command to choose dump disk to recover
amrecover setdevice file:/backup
Using tape file:/backup.
Tape server unspecified, assumed to be mail.mydomain.ee.
amrecover setdisk /usr
200 Disk set to /usr.
amrecover cd local
/usr/local
amrecover add bin
Added dir /local/bin/ at date 2010-12-09-02-55-22
amrecover extract

Extracting files using tape drive file:/backup on host mail.mydomain.ee.
The following tapes are needed: LINT27

Extracting files using tape drive file:/backup on host mail.mydomain.ee.
Load tape LINT27 now
Continue [?/Y/n/s/d]? y
Got no header and data from server, check in amidxtaped.*.debug and  
amandad.*.debug files on server

amrecover quit
200 Good bye.
==

I checked the *.debug files as suggested, but didn't see any obvious  
errors. However, there is something rather interesting in  
/var/log/messages:


Dec 14 00:03:53 mail amidxtaped[85826]: Can't locate object method  
new via package Amanda::Xfer::Source::Recovery (perhaps you forgot  
to load Amanda::Xfer::Source::Recovery?) at  
/usr/local/lib/perl5/site_perl/5.12.2/Amanda/Recovery/Clerk.pm line 544.


Bug or PEBKAC?

--
Toomas Aas



Re: Got no header and data from server / Can't locate object method

2010-12-13 Thread Toomas Aas

Hello Jean-Louis!

Do you have the  
'/usr/local/lib/perl5/site_perl/5.12.2/Amanda/XferServer.pm' file?


Yes.


How did you configure Amanda?


I installed Amanda from FreeBSD port, the configure line for  
amanda-server looks like this:


  $ ./configure --libexecdir=/usr/local/libexec/amanda  
--without-amlibexecdir --with-amandahosts --with-fqdn  
--with-dump-honor-nodump --prefix=/usr/local --disable-glibtest  
--with-user=backup --with-group=operator --with-bsdtcp-security  
--with-bsdudp-security --with-ssh-security  
--with-gnutar-listdir=/var/amanda/gnutar-lists  
--with-gnutar=/usr/local/bin/gtar --without-client --without-gnuplot  
--disable-s3-device --prefix=/usr/local --mandir=/usr/local/man  
--infodir=/usr/local/info/ --build=amd64-portbld-freebsd8.1


--
Toomas Aas



Re: amanda and LTO3 or LTO4 tapes

2010-11-30 Thread Toomas Aas

Tue, 30 Nov 2010, Uwe Beger uwe.be...@unixprojekt.de wrote:

I read that some cheaper Adaptec adapters (1045, 1405 IIRC) are not  
compatible with LTO4! Is this correct? I prefer a Novell SuSE based  
backup system. Any pitfalls known for such a combination?


As a data point: I have one backup server running SLES11 SP1, which  
connects to HP MSL2024 library with two HP Ultrium 1760 drives (LTO4,  
SAS) using Adaptec 1045 adapter. It's working just fine. This  
particular server isn't running Amanda but I really don't think Amanda  
would have any compatibility problems with this hardware/OS combo.


--
Toomas Aas



Re: runtar exited with status 1

2010-03-25 Thread Toomas Aas

On 25.03.2010 07:42, Toomas Aas wrote:


Upgrading tar to 1.22 seems to have made no difference.


Interestingly, having read tonight's mail report to the end, I see that 
the DLE in question is reported as having been backed up fine:


FAILURE DUMP SUMMARY:
   kuller.raad.tartu.ee /var lev 0  FAILED 
/usr/local/libexec/amanda/runtar exited with status 1: see 
/tmp/amanda/client/TLV/sendsize.20100324221506.debug


/---/
DUMP SUMMARY:
DUMPER STATS TAPER STATS
HOSTNAME DISK L ORIG-kB OUT-kB  COMP% MMM:SS KB/s   MMM:SS  KB/s
--
kuller   /var 0 3152880 1269203 40.3  6:42   3157.6 7:15  2920.3

Tonight's amdump.1 file seems to confirm that the DLE was, in fact, backed 
up. So it seems that the FAILURE DUMP SUMMARY is incorrect about listing 
this DLE as failed.


BTW, on previous runs this DLE was listed twice in the DUMP SUMMARY 
section: Level 0 was listed as MISSING and Level 1 or 2 was listed as 
successful. I don't know whether this change was due to tar upgrade or 
just a coincidence that Amanda decided to take a level 0 of this DLE 
tonight. I suspect the latter.


--
Toomas


Re: runtar exited with status 1

2010-03-25 Thread Toomas Aas

On 25.03.2010 08:03, Toomas Aas wrote:


So it seems that the FAILURE DUMP SUMMARY is incorrect about
listing this DLE as failed.


As a workaround, I switched this DLE to estimate calcsize. Tonight's run 
went fine.
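For anyone hitting the same runtar estimate failure later: the change amounts to one line in the dumptype for that DLE. A hedged sketch (the dumptype name is invented):

```
define dumptype var-calcsize {
    global
    program "GNUTAR"
    estimate calcsize    # compute sizes with calcsize instead of a gtar dry run
}
```

The trade-off is that calcsize estimates are faster but somewhat less accurate than letting gtar walk the tree.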


--
Toomas


Re: runtar exited with status 1

2010-03-24 Thread Toomas Aas

On 23.03.2010 12:03, Toomas Aas wrote:



FAILURE DUMP SUMMARY:
kuller.raad.tartu.ee /var lev 0 FAILED /usr/local/libexec/amanda/runtar
exited with status 1: see
/tmp/amanda/client/TLV/sendsize.2010031502.debug



Upgrading tar to 1.22 seems to have made no difference. Hoping to see some 
error message, I tried to run the runtar command (as seen in the log 
below) manually as the backup user, but the only thing it says is:


runtar: Can only be used to create tar archives

indicating that it's not happy about the arguments.

Any more debugging ideas would be welcome.

1269461940.658871: sendsize: calculating for amname /var, dirname /var, 
spindle -1 GNUTAR

1269461940.659015: sendsize: getting size via gnutar for /var level 0
1269461940.661138: sendsize: pipespawnv: stdoutfd is 3
1269461940.661482: sendsize: Spawning /usr/local/libexec/amanda/runtar 
runtar TLV /usr/local/bin/gtar --create --file /dev/null --numeric-owner 
--directory /var --one-file-system --listed-incremental 
/var/amanda/client/gnutar-lists/kuller.raad.tartu.ee_var_0.new --sparse 
--ignore-failed-read --totals --exclude-from 
/tmp/amanda/sendsize._var.20100324221900.exclude . in pipeline
1269461942.245368: sendsize: /usr/local/bin/gtar: ./amavis/amavisd.sock: 
socket ignored
1269461953.947821: sendsize: /usr/local/bin/gtar: 
./amavis/.spamassassin/auto-whitelist: file changed as we read it
1269461969.836289: sendsize: /usr/local/bin/gtar: 
./db/nut/snmp-ups-rae3-ups1: socket ignored
1269461971.432378: sendsize: /usr/local/bin/gtar: 
./spool/postfix/active/94B2739847: Warning: Cannot stat: No such file or 
directory
1269461971.715155: sendsize: Total bytes written: 3225733120 (3.1GiB, 
100MiB/s)

1269461971.715731: sendsize: .
1269461971.715774: sendsize: estimate time for /var level 0: 31.054
1269461971.715818: sendsize: estimate size for /var level 0: 3150130 KB
1269461971.715853: sendsize: waiting for runtar /var child
1269461971.716432: sendsize: after runtar /var wait
1269461971.717202: sendsize: errmsg is /usr/local/libexec/amanda/runtar 
exited with status 1: see /tmp/amanda/client/TLV/sendsize.20100324221506.debug


--
Toomas


runtar exited with status 1

2010-03-23 Thread Toomas Aas

Hello!

I upgraded Amanda from 2.5.1p3 to 2.6.1p2 on my FreeBSD 7.2 box (which is 
the Amanda server for my network). There have been two scheduled amdump runs 
after the upgrade, and I seem to have a problem backing up one particular DLE 
on the server itself. Other DLEs on the server, as well as other clients 
(2.5.1 and even 2.4.5), seem fine. The error message in the Amanda report 
regarding this DLE is:


FAILURE DUMP SUMMARY:
   kuller.raad.tartu.ee /var lev 0  FAILED 
/usr/local/libexec/amanda/runtar exited with status 1: see 
/tmp/amanda/client/TLV/sendsize.2010031502.debug


The excerpt from sendsize.2010031502.debug is at the end of this 
message. The only error I can see there is "file changed as we read it", 
regarding one file. In my earlier experience with Amanda 2.5.1p3 this kind 
of error didn't cause the dump to be FAILED, just STRANGE. Is it possible 
that the new runtar treats errors returned from gtar differently? Maybe 
I should upgrade gtar on this machine from 1.19 to something newer?


BTW, after failing level 0 of this DLE, Amanda seems to successfully take a 
level 1 of it.


1269289094.062058: sendsize: calculating for amname /var, dirname /var, 
spindle -1 GNUTAR

1269289094.062260: sendsize: getting size via gnutar for /var level 0
1269289094.064651: sendsize: pipespawnv: stdoutfd is 3
1269289094.064958: sendsize: Spawning /usr/local/libexec/amanda/runtar 
runtar TLV /usr/local/bin/gtar --create --file /dev/null --numeric-owner 
--directory /var --one-file-system --listed-incremental 
/var/amanda/client/gnutar-lists/kuller.raad.tartu.ee_var_0.new --sparse 
--ignore-failed-read --totals --exclude-from 
/tmp/amanda/sendsize._var.2010031814.exclude . in pipeline
1269289095.869283: sendsize: /usr/local/bin/gtar: ./amavis/amavisd.sock: 
socket ignored
1269289119.744394: sendsize: /usr/local/bin/gtar: 
./amavis/.spamassassin/bayes_seen: file changed as we read it
1269289119.805778: sendsize: /usr/local/bin/gtar: 
./db/nut/snmp-ups-rae3-ups1: socket ignored

1269289122.286540: sendsize: Total bytes written: 3071877120 (2.9GiB, 104MiB/s)
1269289122.287088: sendsize: .
1269289122.287131: sendsize: estimate time for /var level 0: 28.222
1269289122.287150: sendsize: estimate size for /var level 0: 2999880 KB
1269289122.287181: sendsize: waiting for runtar /var child
1269289122.288126: sendsize: after runtar /var wait
1269289122.288648: sendsize: errmsg is /usr/local/libexec/amanda/runtar 
exited with status 1: see /tmp/amanda/client/TLV/sendsize.2010031502.debug


--
Toomas Aas


Re: The Return of the README

2009-12-16 Thread Toomas Aas

Charles Curley wrote:


I'm attaching the last draft of the README. Dustin, you are welcome to
commit it; it is at least an improvement.


Looks great. But I would like to propose some more additions:

Instead of:

-
Amanda is a backup system designed to archive many
computers on a network to a single large-capacity tape drive.
-

maybe say:

-
Amanda is a backup system designed to archive many
computers on a network to a single large-capacity tape drive
or hard disk partition.
-

Backup to disk is a popular requirement these days, and if we say early in 
the README that AMANDA can do that then we may increase our sales ;)


--
Toomas Aas

... I haven't lost my mind; I know exactly where I left it.


Re: Dump failures

2009-11-18 Thread Toomas Aas

Mike R wrote:



NOTE: skipping tape-writable test
Tape daily-010 label ok
Server check took 0.410 seconds

The flush worked but I think I screwed up the labeling of the vtapes 
in the process




So should I be concerned about this? Are the tapes alright based on the 
results of the relabeling and is incorrect labeling something I should 
be concerned about? Any suggestions on my /var/www dump that reports 
problems due to files changing?




I've lost the beginning of this thread, but considering that flushing to 
daily-010 worked it should be fine. Maybe the labels do not look as nice as 
you'd want, but as long as they match the labelstr parameter in amanda.conf 
you're technically OK. If you want, you can always re-label the tapes 
manually later (just before the tape is about to be reused). I'd recommend 
doing this one tape at a time and avoiding fancy shell scripting stuff that 
might break in your particular environment.


--
Toomas Aas


Re: [Amanda-users] Amanda Configuration

2009-06-16 Thread Toomas Aas

khalil_noura wrote:


I labeled the tapes when I created a vtape test environment. When I run amlabel 
I get the message: amlabel: label Backup-012 already on a tape.

my amanda.conf file is configured as this :

tpchanger chg-disk# the tape-changer glue script
tapedev file://space/vtapes/Backup/slots  # the no-rewind tape device to 
be used
#tapetype HP Ultrium 3-SCSI
#changerfile /etc/amanda/Backup/changer
#changerfile /etc/amanda/Backup/changer-status
changerfile /etc/amanda/Backup/changer.conf
changerdev /dev/null

When I change tapedev to 'dev/nst0' I get the error message (amlabel: could not load slot 
current: Virtual-tape directory dev/nst0 does not exist.)


If you change tapedev to /dev/nst0, you should also comment out tpchanger 
and other changer-related stuff in amanda.conf.
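Roughly, the direct-tape variant of the quoted config would look like this (an untested sketch based on the paths in the original post):

```
# Direct access to a physical tape drive, no changer:
tapedev "/dev/nst0"

# Comment out the vtape changer settings:
#tpchanger "chg-disk"
#changerfile "/etc/amanda/Backup/changer.conf"
#changerdev "/dev/null"
```

With a single non-changer drive, Amanda simply expects the right tape to be loaded when amdump runs.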


--
Toomas


Re: [Amanda-users] Amanda Configuration

2009-06-16 Thread Toomas Aas

khalil_noura wrote:


is it ok if I send you a copy of my amanda.conf file? I am not sure if
I am missing something on the config file.


If you post it to the list, it is quite likely that someone can point out
whether there is anything wrong with it. Just to make sure: does /dev/nst0 
actually
exist on your system? Does the command 'mt -f /dev/nst0 status' output 
anything meaningful? Also, you mentioned that you labelled your vtapes 
during testing, but did you also label your LTO tapes?


--
Toomas


Re: [Amanda-users] Script Exec format error ?

2009-04-11 Thread Toomas Aas

Topper wrote:


head -1 /usr/libexec/amanda/application/abas-stop
#! /bin/sh


That space between ! and / looks terribly suspicious...

--
Toomas Aas

... Like I said before, I never repeat myself!


Re: Timeout waiting for ack after adding dle

2009-03-01 Thread Toomas Aas
On Sunday, 01 March 2009 04:59:54, you wrote:

 Toomas Aas wrote at 11:04 +0200 on Feb 28, 2009:

   I have a single-machine (client==server) setup which has been working
   well for quite a long time. It's running Amanda 2.5.1p3 on FreeBSD 6.4.
  
   Yesterday I added a new disk to the machine, mounted it under /db and
   added corresponding entry to the disklist. On tonights backup run,
   Amanda backed up  first two small DLEs but all the rest (including the
   newly added one) failed with:
  
   host.domain.ee  /usr lev 1  FAILED [cannot read header: got 0 instead
   of 32768]
   host.domain.ee  /usr lev 1  FAILED [cannot read header: got 0 instead
   of 32768]
   host.domain.ee  /usr lev 1  FAILED [too many dumper retry: [request
   failed: timeout waiting for ACK]]


 sendbackup is dying early - possible your timeouts are set too low
 in amanda.conf.

 Is this new DLE big?  Lots of files?

The new DLE is not that big. Its 'raw capacity' is 21 GB, about 25000 files, but 
most of that is MySQL and PostgreSQL database files, which are excluded from 
the DLE.

 It's also possible you're hitting a udp datagram size limit.  This can
 be improved with a sysctl tweak, or a source patch or using tcp
 (sorry - don't recall if amanda 2.5.1 supports the latter).

Thanks for the idea, I'll increase the net.inet.udp.maxdgram sysctl. 
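For the archives, on FreeBSD the persistent form of that tweak is a sysctl.conf entry (the value below is an arbitrary illustrative number, not a tuned recommendation):

```
# /etc/sysctl.conf (FreeBSD) -- raise the maximum UDP datagram size
# so large Amanda request packets are not truncated.
net.inet.udp.maxdgram=65536
```

The same setting can be applied immediately, without a reboot, via the sysctl command as root.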

I also looked at sendbackup debug files on the client, but the only error 
there is the same 'index tee cannot write [Broken pipe]':

sendbackup: debug 1 pid 11508 ruid 3951 euid 3951: start at Sat Feb 28 
04:06:23 2009
sendbackup: version 2.5.1p3
Could not open conf file /usr/local/etc/amanda/amanda-client.conf: No such 
file or directory
Reading conf file /usr/local/etc/amanda/BACKUP/amanda-client.conf.
sendbackup: debug 1 pid 11508 ruid 3951 euid 3951: rename at Sat Feb 28 
04:06:23 2009
  sendbackup req: GNUTAR /db  0 1970:1:1:0:0:0 
OPTIONS 
|;auth=BSD;compress-fast;index;exclude-file=./mysql;exclude-file=./pgsql/base;
  parsed request as: program `GNUTAR'
 disk `/db'
 device `/db'
 level 0
 since 1970:1:1:0:0:0
 options 
`|;auth=BSD;compress-fast;index;exclude-file=./mysql;exclude-file=./pgsql/base;'
sendbackup: start: host.domain.ee:/db lev 0
sendbackup: time 0.000: spawning /usr/bin/gzip in pipeline
sendbackup: argument list: /usr/bin/gzip --fast
sendbackup-gnutar: time 0.000: pid 11510: /usr/bin/gzip --fast
sendbackup-gnutar: time 0.001: doing level 0 dump as listed-incremental 
to '/var/amanda/gnutar-lists/host.domain.ee_db_0.new'
sendbackup-gnutar: time 0.011: doing level 0 dump from date: 1970-01-01  
0:00:00 GMT
sendbackup: time 0.012: spawning /usr/local/libexec/amanda/runtar in pipeline
sendbackup: argument list: runtar BACKUP 
GNUTAR --create --file - --directory /db --one-file-system --listed-incremental 
/var/amanda/gnutar-lists/host.domain.ee_db_0.new --sparse --ignore-failed-read 
--totals --exclude-from /tmp/amanda/sendbackup._db.20090228040623.exclude .
sendbackup-gnutar: time 0.012: /usr/local/libexec/amanda/runtar: pid 11512
sendbackup: time 0.012: started backup
sendbackup: time 0.014: started index creator: /usr/local/bin/gtar -tf - 
2>/dev/null | sed -e 's/^\.//'
sendbackup: time 469.114: index tee cannot write [Broken pipe]
sendbackup: time 469.114: pid 11511 finish time Sat Feb 28 04:14:12 2009


Timeout waiting for ack after adding dle

2009-02-28 Thread Toomas Aas

Hello!

I have a single-machine (client==server) setup which has been working  
well for quite a long time. It's running Amanda 2.5.1p3 on FreeBSD 6.4.


Yesterday I added a new disk to the machine, mounted it under /db and  
added a corresponding entry to the disklist. On tonight's backup run,  
Amanda backed up the first two small DLEs, but all the rest (including the  
newly added one) failed with:


host.domain.ee  /usr lev 1  FAILED [cannot read header: got 0 instead  
of 32768]
host.domain.ee  /usr lev 1  FAILED [cannot read header: got 0 instead  
of 32768]
host.domain.ee  /usr lev 1  FAILED [too many dumper retry: [request  
failed: timeout waiting for ACK]]


This shouldn't be a firewall problem, since the firewall on the  
machine is set to unconditionally pass all traffic on the loopback  
interface, and I couldn't find any relevant dropped packets in the  
firewall log. Also, amcheck -c passes with no errors.


I looked at the amdump.1 file, and the first indication of any problem  
is on the 3rd DLE (which is the newly added one - coincidence?):


driver: result time 2761.656 from chunker0: FAILED 00-5 [cannot  
read header: got 0 instead of 32768]


(2761 seconds is approximately 04:06 local time)

Couldn't see anything wrong before that. In the server's general error  
log there are just these messages tonight:


Feb 28 04:14:12 host sendbackup[11511]: index tee cannot write [Broken pipe]
Feb 28 04:15:02 host sendbackup[11632]: index tee cannot write [Broken pipe]
Feb 28 04:15:02 host sendbackup[11612]: index tee cannot write [Broken pipe]
Feb 28 04:15:18 host sendbackup[11652]: index tee cannot write [Broken pipe]
Feb 28 04:15:28 host sendbackup[11644]: index tee cannot write [Broken pipe]
Feb 28 04:17:47 host sendbackup[11664]: index tee cannot write [Broken pipe]
Feb 28 04:23:37 host sendbackup[11673]: index tee cannot write [Broken pipe]
Feb 28 04:26:02 host sendbackup[11659]: index tee cannot write [Broken pipe]
Feb 28 04:29:40 host sendbackup[11684]: index tee cannot write [Broken pipe]

What could be wrong?

Actually I did notice one possible problem. After adding the new DLE,  
the total number of DLEs is now 13. Bad luck?


--
Toomas Aas


Re: [Amanda-users] Scheduling backups

2008-12-14 Thread Toomas Aas

amita0204 wrote:


- If I specify dumpcycle as 0, then I would get a full backup every day.
Is that correct? 


Yes.


Is it equivalent to specifying dumpcycle as 1 day?


Interesting question. I'll leave that for someone else to answer.


- We can tell Amanda how many backups we want, but we can't specify
particular time for it. But then one can run amcheck and amdump using
crontab. So if one specifies dumpcycle and runspercycle and also uses
crontab for running amdump then will one setting override the other
one?


By "can't specify a particular time" we mean that we generally can't expect 
full dumps to be run during any particular Amanda run. What we DON'T mean 
is that we can't specify when exactly the backup process is run 
(hopefully this sentence has the correct number of negatives). That is 
specified exactly in the crontab, which is the typical way of scheduling 
backups. Amanda doesn't have a will of her own which would allow her to 
wake up at a random moment during the day and start making backups. It is 
the administrator's responsibility to schedule the amdump process so that its 
interference with normal daily system usage is minimal. Crontab is 
arguably the best way to achieve this.
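A typical example of such a crontab for the Amanda user might look like this (the times and the config name "DailySet1" are invented for illustration):

```
# Verify tapes and config in the afternoon; -m mails any problems:
0 16 * * 1-5    /usr/local/sbin/amcheck -m DailySet1
# Run the actual backups at night, when the systems are idle:
45 0 * * 2-6    /usr/local/sbin/amdump DailySet1
```

Running amcheck hours before amdump leaves time to fix a wrong tape or a full holding disk before the backup window opens.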



- Can I specify when not to run any backup process? I would like backups
to run only during system idle time. So is there any setting that can
be done in Amanda?


See above.

--
Toomas Aas

... Show respect to age. Drink good scotch.


Re: Device mapper doesn't seem to be guilty, all disk majors are still 8, but...

2008-11-09 Thread Toomas Aas

Gene Heskett wrote:

I can't find anything in the latest amanda, but tar-1.20's NEWS file says it 
now has a --no-check-device configuration option.


Unforch:
[EMAIL PROTECTED] tar-1.20]$ ./configure --no-check-device
configure: error: unrecognized option: --no-check-device
Try `./configure --help' for more information.

Same if root.  What the heck?



I haven't looked at tar 1.20 myself, but I guess --no-check-device is a 
run-time, not configure-time option.


--
Toomas
... Kindred: Fear that relatives are coming to stay.


Re: [Amanda-users] Few newbe questions

2008-11-08 Thread Toomas Aas
On Friday, 07 November 2008 09:38:55, peter7 wrote:

 1. I want to store the backup on-site (on NAS) and then copy it to tapes to
 take off-site. Does Amanda support it? How would the restoration from tapes
 work if NAS went down?

I've never done something like this, but one topic which is sometimes 
discussed on this list and seems to cover your need is RAIT. This way each 
backup is done simultaneously to two backup devices, in your case to NAS and 
to tape. You could restore from either. Maybe not exactly what you had in 
mind, but perhaps covers the same requirement?

 2. Let's say that during a dumpcycle user creates a file and then one of
 the following days the file is deleted. How easy/difficult is it to restore
 the file not knowing when exactly it was created and deleted?

If the filename is known, then it's not terribly difficult. Using amrecover, 
just go to the directory where the file was and do an 'ls'. If you don't see 
the file, use the 'setdate' command of amrecover to get a view of this 
directory from the day before. If you still don't see the file, move back 
another day, and so on. If you reach the beginning of your tape history and 
haven't found the file, you're out of luck :)
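An illustrative amrecover session for that day-by-day search (the date, path, and filename are invented):

```
amrecover> setdisk /home
amrecover> setdate 2008-11-05    # step the index view back one day at a time
amrecover> cd user
amrecover> ls                    # is the file visible on this date?
amrecover> add lostfile.txt
amrecover> extract
```

Each 'setdate' re-reads the index as of that date, so 'ls' shows what existed in the backup taken then.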

 3. I want to leave Amanda to do its magic, and have a weekly dumpcycle, but
 I'd like to always keep previous dumpcycle data on-site. How do I configure
 that? Do I need to have two separate configs?

There is no need for two separate configs. You should just have a sufficient 
number of tapes so that your 'tapecycle' covers two full dumpcycles. This 
way Amanda won't overwrite any tapes from the currently running dumpcycle 
or the previous dumpcycle, meaning that you always have two weeks' worth 
of backups.
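The arithmetic in amanda.conf terms, as a sketch (the numbers assume a 7-day cycle with one run per day; adjust to your schedule):

```
dumpcycle 7 days     # every DLE gets a full dump within a week
runspercycle 7       # amdump runs once per day
tapecycle 15 tapes   # more than 2 * runspercycle, so the previous
                     # full cycle is never overwritten
```

Amanda refuses to reuse a tape until 'tapecycle' tapes have been written since it was last used, which is what preserves the older cycle.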

 4. Each night Amanda performs a full backup of part of the data.
 What about files that have been deleted since the last backup and not yet
 backed up in the current dumpcycle?

If your Amanda is set up in a typical way, then besides the full backups of 
some DLEs, incremental backups of other DLEs are also done each night. If the 
file was created after the last full backup, then it will exist in some 
incremental backup. Except, of course, if the file was created and then 
deleted on the same day, in which case Amanda had no chance to catch it.

--
Toomas


Re: newbie help - client backup

2008-08-14 Thread Toomas Aas

Brian Bruns wrote:


Here is, I think, the relevant log file from the client machine:

[EMAIL PROTECTED] DailySet1]# tail sendbackup.20080808140957.debug
sendbackup-gnutar: time 0.001: pid 8298: /bin/gzip --fast
sendbackup-gnutar: time 0.002: doing level 0 dump as
listed-incremental to '/var/lib/amanda/gnutar-lists/web3_etc_0.new'
sendbackup-gnutar: time 0.003: doing level 0 dump from date:
1970-01-01  0:00:00 GMT
sendbackup: time 0.006: spawning /usr/lib/amanda/runtar in pipeline
sendbackup: time 0.006: argument list: runtar DailySet1 gtar --create
--file - --directory /etc --one-file-system --listed-incremental
/var/lib/amanda/gnutar-lists/web3_etc_0.new --sparse --xattrs
--ignore-failed-read --totals .
sendbackup: time 0.006: started index creator: /bin/tar -tf -
2>/dev/null | sed -e 's/^\.//'
sendbackup-gnutar: time 0.006: /usr/lib/amanda/runtar: pid 8301
sendbackup: time 0.006: started backup
sendbackup: time 90.005: index tee cannot write [Broken pipe]
sendbackup: time 90.005: pid 8299 finish time Fri Aug  8 14:11:27 2008


It's been a while since I had to debug an amanda problem and I have no 
experience with Fedora...


Corresponding to the sendbackup.debug file on the client, there should be a 
dumper.debug file on the server, describing the server-side view of 
backing up this particular DLE. Maybe more clues can be found there.


90 seconds looks suspiciously like some kind of network timeout. Maybe a 
firewall on the server (or between server and client) preventing packets 
sent from the client reaching the server?


--
Toomas

... If you leave me, can i come too?


Re: Out of space problem

2008-05-06 Thread Toomas Aas

On Tue, 06 May 2008, Nigel Allen [EMAIL PROTECTED] wrote:


I'm experiencing an odd problem with a USB DAT drive that keeps running
out of space. Apologies for the length of the post.

The drive is supposed to be 36 / 72 GB.

Here's the kind of thing I see when I run a level 0 dump.


These dumps were to tape DailySet1-18.
*** A TAPE ERROR OCCURRED: [No more writable valid tape found].


[snip]


STATISTICS:



Output Size (meg)   33927.133927.10.0


[snip]


[snip]


define tapetype HP-DAT72 {
   comment "HP DAT72 USB with hardware compression on"
   length 72 G
}


[snip]



define dumptype custom-compress {
  global
  program "GNUTAR"
  comment "Dump with custom client compression"
  exclude list "/etc/amanda/exclude.gtar"
  compress client custom
  client_custom_compress "/usr/bin/bzip2"
}


[snip]

mail.airsolutions.com.au mapper/VolGroup00-LogVol00 custom-compress

mail.airsolutions.com.au sda1 custom-compress


Any idea where I can start would be appreciated (apart from bigger
tape or less data).


Start by not believing that your 36/72 GB tapedrive can actually  
record 72 GB :)


Continue by turning off hardware compression on the tape drive. As  
your configuration indicates, you are using software compression. If  
this already-compressed data is then sent to the tape drive, it gets fed  
through the hardware compression algorithm, and since compressed input  
does not compress further, the resulting hardware-compressed data is  
actually larger than the input.


Additionally, you would very likely benefit from splitting your  
disklist into smaller entries, because your data size is pretty close  
to what your tape can hold. With more DLEs, Amanda has a better  
chance to spread the work throughout the dumpcycle and avoid  
overfilling the tape.
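As a sketch of both suggestions (the drive's native capacity is 36 GB; the tapetype name and the Linux mt device name below are assumptions, adjust for your OS and setup):

```conf
# amanda.conf: state the native 36 GB capacity, not the "2:1 compressed" figure
define tapetype HP-DAT72-native {
    comment "HP DAT72 USB, hardware compression off"
    length 36 G
}
# beforehand, on the OS side (Linux syntax; device name is an assumption):
#   mt -f /dev/nst0 compression 0
```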


--
Toomas Aas


Re: DLT8000 vs. VS80

2008-02-26 Thread Toomas Aas

Stefan G. Weichinger wrote:


The seller says the tapes were written to with a VS80-drive and that it
is possible that this format stays somehow permanently written to the
tape so that I would not be able to re-use it with my DLT8000.

I am unsure about this and would like to hear your thoughts and/or
experiences.


From my experience I would say the seller is correct. I haven't tried 
using DLTVS80 tapes with DLT8000, but I know for a fact that the opposite 
can't be done - tapes that were written with DLT8000 cannot be read with 
DLTVS. The only way to re-use them would be to degauss them first, but I 
don't have that kind of equipment so I haven't tried.


--
Toomas Aas


Re: Can't open tapedev

2008-01-28 Thread Toomas Aas

Steve Newcomb wrote:


How do I get off this list?


The instructions are below. They are for another list, but this list works 
exactly the same:


http://lists.herald.co.uk/pipermail/lois-bujold/2007-December/006849.html

--
Toomas

... It's not an optical illusion. It just looks like one.


Re: 2.4 vs 2.5

2007-12-20 Thread Toomas Aas

[EMAIL PROTECTED] wrote:

Can the clients be upgraded after the server or should it all be done 
simultaneously ?


You probably don't need to upgrade everything at once. My current setup, 
which works well here, is:


Server: 2.5.0p2
Client1: 2.4.4p4
Client2: 2.5.1p3
Client3: 2.4.5

--
Toomas Aas


Re: first time rotating tapes with Amanada

2007-11-26 Thread Toomas Aas

Gil Vidals wrote:

I'm new to Amanda and I'm preparing to rotate tapes for the first time. 
I have an autoloader that holds 10 tapes all under the amanda.conf as 
DailySet1. And I just ordered 10 more (new) tapes, but I'm not clear as 
to when do I pull out tapes and put in the new ones???


I currently have these dump cycle settings and my intention is for the 
fulls to occur every 5 days
  dumpcycle 5 days   
  runspercycle 5 
  tapecycle 10 tapes 


I understand that the plan is to stop using your current 10 tapes and 
start using the new 10 tapes instead? In that case, you should mark your 
existing tapes as not reusable:


amadmin DailySet1 no-reuse Tape01
amadmin DailySet1 no-reuse Tape02
...

then just take out the old tapes, put in the new tapes, and label them with 
new labels:

amlabel DailySet1 Tape11
amlabel DailySet1 Tape12
...
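With ten tapes on each side, the repetitive part can be scripted. A dry-run sketch that only prints the commands for review (the config name DailySet1 and the TapeNN label scheme are taken from the example above; nothing here is part of Amanda itself):

```shell
# Print the amadmin/amlabel commands instead of running them, so the
# list can be eyeballed before being piped to sh.
gen_rotation_cmds() {
  i=1
  while [ "$i" -le 10 ]; do
    # retire the old tape and label its replacement (new tapes 11-20 assumed)
    printf 'amadmin DailySet1 no-reuse Tape%02d\n' "$i"
    printf 'amlabel DailySet1 Tape%02d\n' "$((i + 10))"
    i=$((i + 1))
  done
}
gen_rotation_cmds
```

Reviewing the output first avoids retiring the wrong tape; once it looks right, run it through `sh`.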

Sorry if I misunderstood your intention.

--
Toomas

... 24 hours in a day, 24 beers in a case. Coincidence?


Sun 622 DLT8000 drive

2007-09-19 Thread Toomas Aas

Hello!

I'm thinking about buying a used DLT8000 drive for the purpose of 
occasionally reading some old DLT tapes (not with Amanda, so sorry for 
the OT). Sun Model Number 622, Part Number 599-2347-02 is available on 
eBay, but I can't find any online documentation about this specific 
drive. Maybe someone on this list has one, and can point me to some 
online docs?


I assume that it is basically a standard SCSI DLT drive and should work 
fine with a non-Sun server, right?


--
Toomas Aas
... Would a fly without wings be called a walk?


Re: Problems after splitting up a DLE

2007-06-28 Thread Toomas Aas

Toomas Aas wrote:

So, it seems very likely that all my problems were caused by this one 
corrupt library. I went on and installed the latest available version 
from FreeBSD ports, which is 2.5.1p3. We'll see how tonight's run goes.


Tonight's run was fine, except that I had failed to specify the correct 
gnutar-listdir to the port, and hence all my incremental dumps were 
actually fulls, since the previous gnutar-lists were not found.


To remedy this situation, I removed the amanda-client and amanda-server 
ports once again, in order to reinstall them with correct 
gnutar-listdir. And when removing the port, I again got the same error 
message about the same file:


pkg_delete: '/usr/local/lib/libamandad.a' fails original MD5 checksum - 
not deleted.


Having exactly the same library go corrupt during two compilations of 
Amanda seems too much to be just a coincidence. The box generally seems 
healthy; there are no random signal 11s or other common indications of 
faulty hardware. This is really a mystery.


--
Toomas Aas
--
... Boy, that lightning came a little clo-!!***NO CARRIER


Problems after splitting up a DLE

2007-06-27 Thread Toomas Aas

Hello!

My Amanda setup is 2.5.1p2 on a single FreeBSD 6.2 server backing up 
only itself to vtapes using GNU tar 1.15.1. It has been running well for 
over 6 months, but then I screwed things up by splitting up a large DLE 
into two smaller ones.


Initially I had in my disklist:

hostname.com /storage/www   comp-user-tar-big   1   local


Then I modified it like this:

mail.tarkvarastuudio.ee wwwAN /storage/www {
comp-user-tar-big
exclude "./[^a-nA-N]*"
}   1   local
mail.tarkvarastuudio.ee wwwOZ /storage/www {
comp-user-tar-big
exclude "./[a-nA-N]*"
}   1   local
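The two globs partition top-level entries by first letter, and it's worth checking that every name lands in exactly one DLE. A quick shell sanity check of the split (`which_dle` is a hypothetical helper, not part of Amanda, and the character ranges assume the C locale):

```shell
# Mirror the exclude globs from the disklist: wwwOZ excludes [a-nA-N]*,
# so those names belong to wwwAN; everything else belongs to wwwOZ.
which_dle() {
  case "$1" in
    [a-nA-N]*) echo wwwAN ;;
    *)         echo wwwOZ ;;
  esac
}
which_dle apache
which_dle zope
```

A name matched by both patterns (or neither) would be backed up twice or not at all, so it pays to test a few real directory names this way.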

The comp-user-tar-big dumptype is defined like this:

define dumptype comp-user-tar-big {
comp-user-tar
maxpromoteday 0
}

Initially, when I created these two new DLEs, I commented out wwwOZ, so 
as not to overwhelm my vtapes with two new disks requiring full dump on 
one run.


The first amdump run with new DLE wwwAN ended in amandad dumping core. 
In amreport output there was this message:


hostname.com  wwwAN lev 0  FAILED [err create 
/var/amanda/BACKUP/index/hostname.com/wwwAN/20070622052001_0.gz.tmp: 
Operation not permitted]


On the second amdump run, this problem no longer happened. In fact the 
DLE got dumped successfully and index was also created. However, there 
was a different error message in amreport:


driver: FATAL infofile update failed (hostname.com,'wwwAN')

'infofile' directive in my amanda.conf points to 
/var/amanda/BACKUP/curinfo, and Amanda is running as user 'backup'. The 
permissions on 'curinfo' directory are like this:


#ls -ld /var/amanda/BACKUP/curinfo
drwxr-xr-x  3 backup  wheel  512 Jan  6  2004 /var/amanda/BACKUP/curinfo

In fact, Amanda had even created the subdirectory for wwwAN, but then 
decided that she couldn't create files in it?


#ls -l /var/amanda/BACKUP/curinfo
drwx--  19 backup  wheel  512 Jun 27 06:08 hostname.com

#ls -l /var/amanda/BACKUP/curinfo/hostname.com | grep wwwAN
drwx--  2 backup  wheel  512 Jun 25 09:36 wwwAN

If I su to backup, I can successfully create files manually in 
/var/amanda/BACKUP/curinfo/hostname.com/wwwAN, so this doesn't seem to 
be a permissions problem?


Tonight was the third run with this new DLE in my disklist. This time, 
the amreport message had simply this:


hostname.com  wwwAN RESULTS MISSING

Another DLE, which has been in my disklist for a long time, also 
reported RESULTS MISSING.


I looked at amdump.1 and log.timestamp.0 files. I can post them, if 
anyone is interested. But the essence of the matter seems to be that 
sendsize successfully creates estimates for this DLE, but the actual 
backup is never done (there is no 'driver' section in amdump.1 logfile 
regarding this DLE and also there is no sendbackup.debug file 
corresponding to this DLE in /tmp/amanda). I haven't been able to dig up 
any particular error messages regarding *why* the backup was not done.


Can anyone provide any recommendations based on this vague description?

--
Toomas Aas





--
... If it wasn't for C, we'd be using BASI, PASAL and OBOL!


Re: Problems after splitting up a DLE

2007-06-27 Thread Toomas Aas

On Wed, 27 June 2007, Toomas Aas [EMAIL PROTECTED] wrote:


My Amanda setup is 2.5.1p2 on a single FreeBSD 6.2 server backing up
only itself to vtapes using GNU tar 1.15.1. It has been running well
for over 6 months, but then I screwed things up by splitting up a large
DLE into two smaller ones.


Having pondered this very weird problem for a while, I wasn't  
really able to come up with any better plan than to upgrade Amanda to a  
newer version. So I went to remove the existing version with the  
standard FreeBSD command:


pkg_delete amanda-server-2.5.1p2,1

and it responded:

pkg_delete: '/usr/local/lib/libamandad.a' fails original MD5 checksum  
- not deleted.


So, it seems very likely that all my problems were caused by this one  
corrupt library. I went on and installed the latest available version  
from FreeBSD ports, which is 2.5.1p3. We'll see how tonight's run goes.


--
Toomas Aas


infofile update failed

2007-06-25 Thread Toomas Aas

Hello!

I'm running a single-server configuration of Amanda 2.5.1p2 on FreeBSD 6.2. 
Recently I added a new DLE. On the next amdump run this DLE got dumped, but 
this message was in the report:


driver: FATAL infofile update failed (hostname.com,'newDLE')

The FAQ-o-Matic says that this is a permissions problem. If it is, I cannot 
figure out how to fix it. In my amanda.conf, infofile is set up like this:


infofile /var/amanda/BACKUP/curinfo

I can see that Amanda has actually created the directory 
/var/amanda/BACKUP/curinfo/hostname.com/newDLE, with permissions 0700 and 
owned by backup:wheel ('backup' is my Amanda user). If I su to backup, I 
can manually create files in this directory and walk down to it from / 
with chdir. The /var partition has 25 GB of free space. So where's the problem?
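When a "permissions problem" is reported despite correct modes on the leaf directory, one common culprit is a missing search (execute) bit on some ancestor directory. A rough sketch of that check (`check_search_path` is a hypothetical helper; run it as the Amanda user):

```shell
# Walk from the given directory up to /, reporting any component the
# current user cannot search; no output means the path is traversable.
check_search_path() {
  p=$1
  while :; do
    [ -x "$p" ] || printf 'no search permission: %s\n' "$p"
    [ "$p" = "/" ] && break
    p=$(dirname "$p")
  done
}
check_search_path /var/amanda/BACKUP/curinfo
```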


--
Toomas Aas


Re: autoflush and amstatus

2007-05-28 Thread Toomas Aas

On Mon, 28 May 2007, James Brown [EMAIL PROTECTED] wrote:


After enabling 'autoflush' in amanda.conf, and
starting new amdump, I saw two entries for a backup
job that was already on holding disk.  One was waiting
to be flushed, the other was getting estimates.  (I
had to kill the job
since I didn't want to run the 500GB backup again!).

With autoflush enabled, will Amanda attempt to run
another backup of the DLE that needs to be flushed?


In my experience, yes. 'Autoflush' just means that as part of the  
amdump run, Amanda also tries to flush the contents of the holding disk to  
tape. If an earlier dump of a particular DLE is on the holding disk  
and a new amdump is run, that doesn't mean this particular DLE  
doesn't get dumped. Why should it? Don't you want an up-to-date  
backup of this DLE, at least on the holding disk?



I am currently running 2.5.1p2.


Me too.

--
Toomas Aas


Re: Hardware suggestion

2007-05-27 Thread Toomas Aas

Olivier Nicole wrote:


What average tape write rate do you see in your reports (in the
statistics)?


My setup is one server, data on a software mirror on two SATA HDDs, 
a separate PATA HDD used as holdingdisk, and vtapes on external USB HDDs. 
Taping speed (from holdingdisk to external HDD) is about 20 MB/s. 
Dumping speed varies greatly, but for level 0s of larger DLEs I see up 
to 11 MB/s with software compression and up to 25 MB/s without.


--
Toomas Aas


Re: amanda 2.4.4, compression issue

2007-05-22 Thread Toomas Aas

Brian Cuttler wrote:


When I tried a restore yesterday I found that the files I brought
back had a long numeric number preceding the original path. I will
try a restore again when the current amdump run completes to see if
this is still the case. Is this normal, or perhaps an artifact of
not having a proper/correct version of gtar?


This is a well-known indication of a bad version of GNU tar.

--
Toomas Aas


External HDD recommendations

2007-05-05 Thread Toomas Aas

Hello!

The 160 GB external HDDs that we are currently using for backups are  
getting too small, so I'm thinking about getting new, bigger ones.  
I would be interested to hear whether anyone has had particularly bad  
(or good) experience with any of the popular brands. Thanks in advance.


--
Toomas Aas


Re: which report do I believe?

2007-04-30 Thread Toomas Aas

Steven Settlemyre wrote:

I am looking to restore a disk from one of my machines and am having a 
little trouble understanding which report to believe. Using amadmin 
find, I see there was a level 0 on 4/12 and again on 4/19.


When I look at the daily reports I see on 4/12:

 taper: no split_diskbuffer specified: using fallback split size of 
10240kb to buffer pop:/files1.0 in-memory

and

HOSTNAME DISK     L   ORIG-kB    OUT-kB  COMP%  MMM:SS   KB/s  MMM:SS   KB/s
----------------------------------------------------------------------------
pop      /files1  0  17024192   9961504   58.5  178:38  926.5  178:39  929.4



Then on 4/19 i see:

NOTES:
 planner: Incremental of pop:/files1 bumped to level 3.

and

HOSTNAME DISK     L   ORIG-kB    OUT-kB  COMP%  MMM:SS   KB/s  MMM:SS   KB/s
----------------------------------------------------------------------------
pop      /files1  0  17089120   9965664   58.3  148:35 1117.9   30:39 5418.9

So is 4/19 a level 3 or level 0?


It's a level 0. If Amanda had done an incremental, she would have 
done a level 3, but ultimately she decided to do a full instead.


Also, in the amadmin find, I see that 4/12 has 973 parts, whereas 4/19 
only has 2 parts.


Not sure about that, since I really don't know what the 'part' output of 
amadmin find means :)


--
Toomas


Re: Trouble with Quantum DLT-S4 tapedrive

2007-04-19 Thread Toomas Aas

Richard Stockton wrote:


I am unable to get amanda to write to my new Quantum DLT-S4 drive.
I am trying to write directly to tape with no holding space.

OS: FreeBSD 6.2-RELEASE #0
Amanda: 2.5.1p3

Tapes label okay, and amcheck is happy, but when I do an amdump bak15
I get errors.  Here are the log and amdump files, plus the lines that
repeat in my syslog, and the various .debug files and my amanda.conf
(sorry for the lengthy email, but I wanted you to have all the info).

log:
DISK planner bak-05 aacd3s1d
START planner date 20070412122332
START driver date 20070412122332
WARNING planner Last full dump of bak-05:aacd3s1d on tape  overwritten 
in 1 run.

STATS driver startup time 0.020
START taper datestamp 20070412122332 label VOL153 tape 0
FINISH planner date 20070412122332 time 131.949
INFO taper tape VOL153 kb 0 fm 1 writing file: Input/output error

 FAIL taper bak-05 aacd3s1d 20070412122332 0 [out of tape]

Well, gosh. Almost immediately after taping begins there is an I/O 
error. This doesn't look good at all.



/var/log/messages:
Apr 12 12:26:54 bak-05 kernel: (sa0:ahc2:0:5:0): Unexpected busfree in 
Data-out phase

Apr 12 12:26:54 bak-05 kernel: SEQADDR == 0x85
Apr 12 12:26:54 bak-05 kernel: (sa0:ahc2:0:5:0): WRITE FILEMARKS. CDB: 
10 0 0 0 1 0
Apr 12 12:26:54 bak-05 kernel: (sa0:ahc2:0:5:0): CAM Status: SCSI Status 
Error
Apr 12 12:26:54 bak-05 kernel: (sa0:ahc2:0:5:0): SCSI Status: Check 
Condition
Apr 12 12:26:54 bak-05 kernel: (sa0:ahc2:0:5:0): Deferred Error: ABORTED 
COMMAND csi:0,0,0,2 asc:44,82

Apr 12 12:26:54 bak-05 kernel: (sa0:ahc2:0:5:0): Vendor Specific ASCQ
Apr 12 12:26:54 bak-05 kernel: (sa0:ahc2:0:5:0): Retries Exhausted
Apr 12 12:26:54 bak-05 kernel: (sa0:ahc2:0:5:0): tape is now frozen- use 
an OFFLINE, REWIND or MTEOM command to clear this state.


This really looks like a hardware problem, or a compatibility problem 
between your tape drive and FreeBSD (which would be a surprise). Since 
labelling tapes succeeds, the tape drive must be at least somewhat 
functional.


Are you able to write any significant amount of data (more than Amanda's 
32 kB label) to tape with utilities such as dump or tar?


What is your tape blocksize set to? Amanda internally uses a 32 kB 
blocksize, and I have seen problems when the tape blocksize is set to 
something else, such as 1 kB. I've found it best to set the blocksize to 
variable (mt blocksize 0).
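For reference, a hedged example of that last step (the FreeBSD sa(4) device name is an assumption; on Linux the equivalent would be `mt -f /dev/nst0 setblk 0`):

```shell
# set the drive to variable block size so Amanda's 32 kB blocks pass through
mt -f /dev/nsa0 blocksize 0
```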


--
Toomas Aas


Re: Calling FreeBSD users

2007-04-10 Thread Toomas Aas

Charles Sprickman wrote:

I've had little luck getting 2.5.1p3 working properly with the new 
bsdtcp auth method.  Before I give up and revert to an old 2.4.x version 
is there anyone here using the newer releases on FreeBSD (client + 
server)?  And if so, which auth method are you using?


2.5.1p2 is running successfully here on FreeBSD 6.2-RELEASE (amd64). This 
is a one-machine setup with client and server. Authentication is bsd.


amanda-client.conf:
conf "BACKUP"
index_server "server.domain.tld"
tape_server "server.domain.tld"
tapedev "file:/backup"
auth "bsd"

inetd.conf:

Line 1:
amanda  dgram  udp  wait  backup  /usr/local/libexec/amanda/amandad  amandad -auth=bsd amdump amindexd amidxtaped

Line 2:
amandaidx  stream  tcp  nowait  backup  /usr/local/libexec/amanda/amindexd  amindexd -auth=bsd amdump amindexd amidxtaped

Line 3:
amidxtape  stream  tcp  nowait  backup  /usr/local/libexec/amanda/amidxtaped  amidxtaped -auth=bsd amdump amindexd amidxtaped



--
Toomas Aas


Re: size of level 1 equal to size of level 0

2007-02-11 Thread Toomas Aas

Gene Heskett wrote:


On Wednesday 31 January 2007 01:27, Toomas Aas wrote:

Hello!

First, the system details:
FreeBSD 6.2-RELEASE
Amanda 2.5.1p2
GNU tar 1.16
All DLEs use GNU tar for backup

I noticed a strange problem today with this rather new Amanda setup.
Yesterday, a level 0 backup was done of one particular DLE. This DLE is
ca 50 GB in size, which compresses to ca 40 GB. Today's amdump run
started doing level 1 of this same DLE and wrote ca 35 GB, at which
point the HDD holding vtapes ran out of space.

Level 1 backup of this particular DLE definitely shouldn't be so big. I
looked at sendsize.debug and noticed an interesting problem. The
estimates for incremental backups for all DLEs are the same size as the
estimates for level 0.




I wonder if that version of tar is biting us again.  Can you back up to 
1.15-1 and give that a try?




Looks like going back to tar 1.15.1 really helped. With tar 1.16 (as 
well as with tar 1.16.1) the behaviour was really strange: for the first 
two amdump runs the incrementals were smaller than fulls (as expected), 
but then suddenly all the incrementals became as big as fulls. I first 
tried going to 1.16.1 and added the patches discussed here:


http://www.nabble.com/FW:-tar-1.16-listed-incremental-failure-t2920103.html

That didn't help. Then I installed tar 1.15.1, and I've now had 3 
successful amdump runs. That's why it took me so long to report back. 
Hopefully my luck lasts.


--
Toomas


Re: size of level 1 equal to size of level 0

2007-01-31 Thread Toomas Aas

Gene Heskett wrote:

I wonder if that version of tar is biting us again.  Can you back up to 
1.15-1 and give that a try?


Some googling found this, which seems related. Looks like downgrading tar
is the way to go. How I miss the good old days of 1.13.25!

http://www.nabble.com/FW:-tar-1.16-listed-incremental-failure-t2920103.html



size of level 1 equal to size of level 0

2007-01-30 Thread Toomas Aas

Hello!

First, the system details:
FreeBSD 6.2-RELEASE
Amanda 2.5.1p2
GNU tar 1.16
All DLEs use GNU tar for backup

I noticed a strange problem today with this rather new Amanda setup. 
Yesterday, a level 0 backup was done of one particular DLE. This DLE is 
ca 50 GB in size, which compresses to ca 40 GB. Today's amdump run 
started doing a level 1 of this same DLE and wrote ca 35 GB, at which 
point the HDD holding vtapes ran out of space.

The level 1 backup of this particular DLE definitely shouldn't be so big. 
I looked at sendsize.debug and noticed an interesting problem. The 
estimates for incremental backups for all DLEs are the same size as the 
estimates for level 0.

On the previous night's amdump run, this problem did not happen: estimates 
for incremental backups were noticeably smaller than level 0.

I noticed an interesting warning message about gnutar_calc_estimates in 
sendsize.debug, but this was also present in sendsize.debug for the amdump 
runs that didn't have the 'incremental equal to full' problem. So I'm not 
sure if this is the problem at all.

Below is a snippet of sendsize.debug for one DLE, which shows level 1 
estimated as big as level 0. I'd appreciate any ideas regarding what might 
be the reason for this.

sendsize[11170]: time 0.636: calculating for amname /storage, dirname /storage, 
spindle 1
sendsize[11170]: time 0.636: getting size via gnutar for /storage level 0
sendsize[11170]: time 0.658: spawning /usr/local/libexec/amanda/runtar in 
pipeline
sendsize[11170]: argument list: runtar BACKUP /usr/local/bin/gtar --create --file /dev/null --directory /storage --one-file-system 
--listed-incremental /var/amanda/gnutar-lists/server.domain.ee_storage_0.new --sparse --ignore-failed-read --totals --exclude-from 
/tmp/amanda/sendsize._storage.20070131052001.exclude .

sendsize[11170]: time 139.437: Total bytes written: 51728097280 (49GiB, 
357MiB/s)
sendsize[11170]: time 139.440: .
sendsize[11170]: estimate time for /storage level 0: 138.782
sendsize[11170]: estimate size for /storage level 0: 50515720 KB
sendsize[11170]: time 139.440: waiting for runtar /storage child
sendsize[11170]: time 139.440: after runtar /storage wait
gnutar_calc_estimates: warning - seek failed: Illegal seek
sendsize[11170]: time 139.441: getting size via gnutar for /storage level 1
sendsize[11170]: time 139.905: spawning /usr/local/libexec/amanda/runtar in 
pipeline
sendsize[11170]: argument list: runtar BACKUP /usr/local/bin/gtar --create --file /dev/null --directory /storage --one-file-system 
--listed-incremental /var/amanda/gnutar-lists/server.domain.ee_storage_1.new --sparse --ignore-failed-read --totals --exclude-from 
/tmp/amanda/sendsize._storage.20070131052220.exclude .

sendsize[11170]: time 206.462: Total bytes written: 51728097280 (49GiB, 
745MiB/s)
sendsize[11170]: time 206.468: .
sendsize[11170]: estimate time for /storage level 1: 66.563
sendsize[11170]: estimate size for /storage level 1: 50515720 KB
sendsize[11170]: time 206.468: waiting for runtar /storage child
sendsize[11170]: time 206.468: after runtar /storage wait
gnutar_calc_estimates: warning - seek failed: Illegal seek
sendsize[11170]: time 206.468: done with amname /storage dirname /storage 
spindle 1



Taper exited with signal 1

2007-01-25 Thread Toomas Aas

Hello!

I'm moving my Amanda setup, which uses vtapes on external HDDs, to a new 
server. The old server was FreeBSD 4.11, Amanda 2.4.4, GNU tar 1.13.25. The 
new server is FreeBSD 6.2, Amanda 2.5.1p2, GNU tar 1.16. All my DLEs use 
GNU tar.


Amcheck ran almost successfully on the new server (one DLE reported 
'Permission denied'), but amdump failed for all DLEs with the message 
'dump to tape failed', and the Notes section of the Amanda report contains 
the suspicious message 'taper exited with signal 1'. I wonder if this is 
the known problem with GNU tar 1.16 returning 1 if any file changes during 
backup, or is there something else fishy here? I suspect it's something 
else, because in case of the GNU tar problem, the errors should come from 
dumper, not taper?



 Original Message 
FAILURE AND STRANGE DUMP SUMMARY:
server  /storage/home   lev 0  FAILED [dump to tape failed]
server  /storage/mail   lev 0  FAILED [dump to tape failed]
server  /usrlev 0  FAILED [dump to tape failed]
server  /storage/dumps  lev 0  FAILED [dump to tape failed]
server  /varlev 0  FAILED [dump to tape failed]
server  filesOZ lev 0  FAILED [dump to tape failed]
server  filesAN lev 0  FAILED [dump to tape failed]
server  /   lev 0  FAILED [dump to tape failed]
server  Archive lev 1  FAILED [dump to tape failed]


NOTES:
  planner: Last full dump of server:filesAN on tape LINT10 overwritten 
on this run.
  planner: Last full dump of server:filesOZ on tape LINT10 overwritten 
on this run.
  planner: Last full dump of server:Archive on tape LINT06 overwritten 
in 1 run.

  taper: Received signal 1
  taper: Received signal 1
  planner: server Archive 20070125224719 0 [dumps too big, 39024330 KB, 
full dump delayed]

  driver: taper pid 39526 exited with code 1



taper.debug file contains the following:

=== beginning of file ===
taper: debug 1 pid 39526 ruid 3951 euid 3951: start at Thu Jan 25 
22:47:19 2007

taper: pid 39526 executable taper version 2.5.1p2
taper: debug 1 pid 39526 ruid 3951 euid 3951: rename at Thu Jan 25 
22:47:19 2007

taper: page size = 4096
taper: buffer size is 32768
taper: buffer[00] at 0x800542000
taper: buffer[01] at 0x80054a000
taper: buffer[02] at 0x800552000
taper: buffer[03] at 0x80055a000
taper: buffer[04] at 0x800562000
taper: buffer[05] at 0x80056a000
taper: buffer[06] at 0x800572000
taper: buffer[07] at 0x80057a000
taper: buffer[08] at 0x800582000
taper: buffer[09] at 0x80058a000
taper: buffer[10] at 0x800592000
taper: buffer[11] at 0x80059a000
taper: buffer[12] at 0x8005a2000
taper: buffer[13] at 0x8005aa000
taper: buffer[14] at 0x8005b2000
taper: buffer[15] at 0x8005ba000
taper: buffer[16] at 0x8005c2000
taper: buffer[17] at 0x8005ca000
taper: buffer[18] at 0x8005d2000
taper: buffer[19] at 0x8005da000
taper: buffer structures at 0x8005e2000 for 480 bytes
changer_query: changer return was 10 1
changer_query: searchable = 0
changer_find: looking for LINT10 changer is searchable = 0
taper: Building type 2 (TAPESTART) header of size 32768 using:
taper: Contents of *(dumpfile_t *)0x7fffcb80:
taper: type = 2 (TAPESTART)
taper: datestamp= '20070125224719'
taper: dumplevel= 0
taper: compressed   = 0
taper: encrypted= 0
taper: comp_suffix  = ''
taper: encrypt_suffix   = ''
taper: name = 'LINT10'
taper: disk = ''
taper: program  = ''
taper: srvcompprog  = ''
taper: clntcompprog = ''
taper: srv_encrypt  = ''
taper: clnt_encrypt = ''
taper: recover_cmd  = ''
taper: uncompress_cmd   = ''
taper: encrypt_cmd  = ''
taper: decrypt_cmd  = ''
taper: srv_decrypt_opt  = ''
taper: clnt_decrypt_opt = ''
taper: cont_filename= ''
taper: is_partial   = 0
taper: partnum  = 0
taper: totalparts   = 0
taper: blocksize= 32768

=== end of file ===

Thanks for any ideas,
--
Toomas Aas


DOH: taper exited with signal 1

2007-01-25 Thread Toomas Aas
Please ignore my previous message. I shouldn't have killed the terminal 
from which I started amdump! Amanda is still in testing on the new 
server so amdump doesn't run from cron yet. I started another amdump and 
it's happily taping now (until I manage to kill it again).


--
Toomas Aas


Has amanda's handling of vtape changed ?

2007-01-25 Thread Toomas Aas

Hello!

It's me again, taking my vtapes-based config from the old Amanda 2.4.4 
server to the new Amanda 2.5.1p2 server. I have my tapetype definition set 
so that one 160 GB partition is divided into 5 x 50 GB vtapes. This is done 
considering that on most nights Amanda doesn't really fill the vtape, so it 
is actually safe to write more than 160 GB / 5 to a vtape on some other 
nights. The 50 GB limit is there just so Amanda doesn't plan too many full 
dumps for any single run.


define tapetype HARDDISK {
comment "Backup to external HDD"
length 51200 mbytes
filemark 0 kbytes
speed 4 kbytes
}
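The deliberate overcommit in that layout is easy to make explicit. A tiny arithmetic sketch (the 160 GB figure comes from the post; this is illustration, not anything Amanda runs):

```shell
# 5 vtapes, each allowed 51200 MB, on a ~160000 MB partition: Amanda is
# promised more space than physically exists, relying on most runs not
# filling their vtape.
vtapes=5
length_mb=51200
partition_mb=160000
promised_mb=$((vtapes * length_mb))
echo "promised ${promised_mb} MB on a ${partition_mb} MB partition"
```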

On the old server this assumption seemed to hold true: on some 
nights less than 40 GB got dumped, on some nights even more than the 51200 
MB specified in the tapetype (when Amanda miscalculated slightly). The 
only time there was an 'out of tape' error was when the external HDD 
actually filled up.


Yesterday, I made the first amdump run on new server. Since amdump 
hadn't been run for several days, Amanda tried to schedule as many full 
backups as possible. Most of the DLEs were backed up successfully, but 
the last one failed with


server  Archive  lev 1  FAILED [out of tape]
server  Archive  lev 1  FAILED [data write: Broken pipe]
server  Archive  lev 1  FAILED [dump to tape failed]

And the external HDD didn't fill up; it still has 40 GB free.

However, the size of the vtape from this amdump run is suspiciously close 
to my length parameter:


# pwd
/backup/slot6
# du -m
51213   .

Hence my question: does Amanda now enforce the length parameter when 
writing to a vtape, in such a way that if length is reached in the middle 
of taping a DLE, taping is stopped and 'out of tape' announced?


Here's the end of amandad.debug from this run:

amandad: time 4353.619: stream_accept: connection from 12.34.56.78.1026
amandad: try_socksize: send buffer size is 65536
amandad: try_socksize: receive buffer size is 65536
amandad: time 4353.619: stream_accept: connection from 12.34.56.78.1026
amandad: try_socksize: send buffer size is 65536
amandad: try_socksize: receive buffer size is 65536
amandad: time 4353.619: stream_accept: connection from 12.34.56.78.1026
amandad: try_socksize: send buffer size is 65536
amandad: try_socksize: receive buffer size is 65536
security_close(handle=0x50d100, driver=0x80085d040 (BSD))
security_stream_seterr(0x527000, write error on stream 63480: Connection 
reset by peer)

amandad: time 6183.197: sending NAK pkt:

ERROR write error on stream 63480: write error on stream 63480: 
Connection reset by peer



And interestingly Amanda actually dumped core at the time which matches 
the timestamp of this amandad.debug file:


Jan 26 01:26:20 server kernel: pid 40527 (amandad), uid 3951: exited on 
signal 11 (core dumped)
Jan 26 01:26:20 server sendbackup[41450]: index tee cannot write [Broken 
pipe]
Jan 26 01:26:20 server sendbackup[41447]: error [/usr/local/bin/gtar 
returned 2, compress returned 1]
Jan 26 01:26:20 server inetd[16676]: 
/usr/local/libexec/amanda/amandad[40527]: exited, signal 11



Here is also the relevant sendbackup.debug from the client (which is 
actually the same as the server):


Could not open conf file /usr/local/etc/amanda/amanda-client.conf: No 
such file or directory

Reading conf file /usr/local/etc/amanda/BACKUP/amanda-client.conf.
sendbackup: debug 1 pid 41447 ruid 3951 euid 3951: rename at Fri Jan 26 
00:55:50 2007
  sendbackup req: GNUTAR Archive /storage/files/Archive 1 2007:1:11:3:34:13 OPTIONS |;auth=BSD;compress-fast;index;
  parsed request as: program `GNUTAR'
 disk `Archive'
 device `/storage/files/Archive'
 level 1
 since 2007:1:11:3:34:13
 options `|;auth=BSD;compress-fast;index;'
sendbackup: start: server.domain.ee:Archive lev 1
sendbackup: time 0.000: spawning /usr/bin/gzip in pipeline
sendbackup: argument list: /usr/bin/gzip --fast
sendbackup-gnutar: time 0.001: pid 41449: /usr/bin/gzip --fast
sendbackup-gnutar: time 0.001: error opening 
'/var/amanda/gnutar-lists/serverArchive_0': No such file or directory
sendbackup-gnutar: time 0.001: doing level 1 dump as listed-incremental 
to '/var/amanda/gnutar-lists/serverArchive_1.new'
sendbackup-gnutar: time 0.005: doing level 1 dump from date: 1970-01-01 
 0:00:00 GMT
sendbackup: time 0.006: spawning /usr/local/libexec/amanda/runtar in 
pipeline
sendbackup: argument list: runtar BACKUP GNUTAR --create --file - 
--directory /storage/files/Archive --one-file-system 
--listed-incremental /var/amanda/gnutar-lists/serverArchive_1.new 
--sparse --ignore-failed-read --totals .

sendbackup-gnutar: time 0.007: /usr/local/libexec/amanda/runtar: pid 41451
sendbackup: time 0.007: started backup
sendbackup: time 0.007: started index creator: /usr/local/bin/gtar -tf 
- 2>/dev/null | sed -e 's/^\.//'

sendbackup: time 1829.721: 118: strange(?):
sendbackup: time 1829.721: index tee cannot write 

Re: using removable virtual tapes [changed subj line]

2007-01-22 Thread Toomas Aas

On Sun, Jan 21, 2007 at 12:47:34PM +0100, Fritz Wittwer wrote:

I have an amanda installation (version 2.4.5) where I make all the backups 
onto disks. These disks are in a removable container so I can swap the 
disks. I plan to swap the disks each week or so, so in case of a site 
disaster I have one disk somewhere off site.
So I have the situation that I have, for example, virtual tapes daily00 .. 
daily99, but at any given time only 20 of them are online, e.g. 
daily20..daily39.

How can I handle this with amanda? Do I have to edit the
..etc/amanda/config/tapelist file each time I swap the disks?


While the canonical changer for using vtapes is nowadays chg-disk, I just 
retired a vtape-based config that was using chg-multi and operated exactly 
as described above. I had two external HDDs, each divided into 5 
subdirectories named SLOT1...SLOT5. On the first disk I had tapes labelled 
TAPE01...TAPE05 in these slots, on the second disk tapes labelled 
TAPE06...TAPE10. The disks were swapped every Monday.


There was no need to manually modify tapelist or any other amanda config 
file when swapping disks. Everything worked automatically.
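The on-disk layout described above can be sketched as follows (the mount
point /tmp/vtape-disk1 is a made-up example; on a real setup it would be
wherever the removable disk mounts, with chg-multi pointing one slot entry
at each directory):

```shell
# Create the SLOT1..SLOT5 layout for one removable vtape disk.
DISK=/tmp/vtape-disk1          # example path -- use the disk's real mount point
for n in 1 2 3 4 5; do
    mkdir -p "$DISK/SLOT$n/data"
done
ls "$DISK"
```

Each slot directory then gets amlabel'ed once, and swapping the physical
disk swaps all five "tapes" at once.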


I'm just beginning to explore if/how the same kind of automation can be 
achieved with chg-disk.


--
Toomas Aas 
|arvutivõrgu peaspetsialist | head specialist on computer networks|
|Tartu Linnakantselei   | Tartu City Office   |
skype: toomas_aas --- +372 736 1274




Re: Hardware compression and dump size problems

2007-01-04 Thread Toomas Aas

Chris Cameron wrote:

I have a DLT-8000. 40/80 Gig tapes. I want to backup a 50 gig partition 
and am going to use hardware compression. Amanda says the dump won't fit.


This configuration was working with server compression turned on and a 
different tape drive.


Error:

NOTES:
 planner: cgydc002 /dev/vg01/lv_data 20070103 0 [dump larger than tape, 
53128750 KB, full dump delayed]

 taper: tape Wk2Tue kb 2088288 fm 4 [OK]


My tape type:
define tapetype DLT8000-40 {
   comment "just produced by tapetype program"
   length 37482 mbytes
   filemark 2362 kbytes
   speed 5482 kps
}


Amanda doesn't really know anything about hardware compression. If you are 
using hardware compression, you need to guesstimate yourself how much of 
your data will fit on the tape. In the tapetype above you have specified a 
length of 37482 MB, so Amanda doesn't see any sense in trying to write a 
53128750 kB dumpfile to tape. You need to increase the length by the factor 
by which you think your hardware compressor can compress your data.
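As a trivial sketch of that adjustment (the 1.3:1 ratio below is a made-up
assumption; measure your own data before trusting any number):

```shell
# Scale the native DLT8000 capacity by an assumed compression ratio.
# POSIX sh has no floating point, so the ratio is expressed in tenths.
native_mb=37482        # native 'length' from the tapetype above
ratio_tenths=13        # assumed 1.3:1 hardware compression
adjusted_mb=$(( native_mb * ratio_tenths / 10 ))
echo "length $adjusted_mb mbytes"
```

If the guess is too optimistic Amanda will hit EOT mid-run, so err on the
low side.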


BTW, I'm using a DLT1 drive (40 GB native) with hardware compression. My 
typical backup run is ca 40 GB, but after the new year, when Amanda 
autoflushed all the Christmas-time dumps from the holding disk, it hit EOT 
at approximately 52 GB. Your data is different from mine, of course, but I 
do think that being able to fit this 53128750 kB dump on a 40 GB tape 
drive with hardware compression cannot be taken for granted.


--
Toomas Aas 
|arvutivõrgu peaspetsialist | head specialist on computer networks|
|Tartu Linnakantselei   | Tartu City Office   |
skype: toomas_aas --- +372 736 1274




Re: RAIT vtapes: all slots are suddenly empty

2006-11-30 Thread Toomas Aas

Gene Heskett wrote:


On Thursday 30 November 2006 08:19, Uwe M. Kaufmann wrote:



(brought to you by Amanda 2.4.5)
SuSE Linux 10.0


Humm, and why is SuSE serving up such an old version of amanda?
IIRC that version does not support vtapes (but somebody will correct me 
I'm sure), so I'm confused by the names below.  And by your apparent use 
of the chg-multi utility with vtapes.  Another example:


Actually, Amanda 2.4.5 does support vtapes. I have an Amanda config 
currently running version 2.4.4 and using chg-multi, because chg-disk 
didn't exist back when this setup went live, and chg-multi was a 
commonly used workaround for working with vtapes. We're not all as up to 
speed with the latest Amanda versions as you are :)


--
Toomas


Re: using disk instead of tape

2006-09-02 Thread Toomas Aas

Phil Howard wrote:


What I would like to know is how Amanda handles backup to disk.  I did find
a "file" driver.  I'm not sure if that is meant to be the "to disk" method
or not.  It certainly would have some complications, depending on how one
considered disks as equivalent to tapes.


The "file" driver is indeed what is used to perform backups to disk.


The complication would be the steps involved in handling a disk.  I would
consider plugging the disk in (USB, Firewire, or eSATA) to be the rough
equivalent of inserting a tape into a manual tape drive.  The question is
what will AMANDA do with a disk that has merely been plugged in.  Can it
be configured to, or does it just understand that it needs to, mount the
disk?  What if the disk is new and not yet formatted?


AFAIK, Amanda has no built-in functionality to handle removable disks. I 
just wrote a little script for that purpose and it has worked quite 
reliably for the past two years.


Another note about comparing disk to tape from Amanda's POV - disks 
(removable or not) are often handled not as individual tapes but as a sort 
of virtual tape changer. The disk partition is divided into 
subdirectories which Amanda handles as slots in a tape changer. I myself 
use two removable disks, each holding 5 slots, and only change the disk 
once a week.
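For reference, a subdirectories-as-slots setup looks roughly like this in
amanda.conf (a sketch from memory -- the paths are examples and the exact
chg-disk directives vary by Amanda version, so check your changer's docs):

```
tpchanger "chg-disk"
tapedev "file:/backup/vtapes"     # directory holding slot1/, slot2/, ...
changerfile "/usr/local/etc/amanda/daily/changer"
tapetype HARDDISK                 # a tapetype whose 'length' is the slot size
```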



What it seems this file driver probably does not do, which a disk driver
(if such a thing exists) could do, is handle the disk as a raw device.  It
could create a partition to be the equivalent of a tape file, and write the
dump/tar image directory to the partition sectors.  When done (or when it
knows exactly how many sectors there will be), it could update the partition
table to represent the exact size.  The next tape file could be written
after it and a partition table entry added for that partition/file.


That functionality (if it is ever created) should IMHO be optional, 
considering people who aren't using removable disks but, for example, just 
one partition on their RAID. If I were one of those people, I wouldn't 
feel too comfortable about Amanda re-writing my server's partition table 
every day. It also seems to me that such functionality would need to be 
programmed separately for each OS - quite a bit of work.


--
Toomas Aas


Re: False alarm in e-mail report

2006-08-28 Thread Toomas Aas

Jean-Louis Martineau wrote:

 It looks like the first attempt to dump tsensor.raad.tartu.ee:/usr failed
 but the second attempt succeeded.

Yes, I had a look at the log and amdump files, and it looks like this is 
what happened.

 Could you send me the amdump.?? and log.datestamp file so that I can
 look at them and be sure of what I said.

OK, I sent them off-list.

 You sent the sendbackup.*.debug file for the successful dump, you should
 have one for the failed dump?

Yes, I have. See below.

 Did you have a problem with tsensor.raad.tartu.ee during that night,
 look at system log.

Yes, I had. Again, the firewall mistakenly blocked some packets at the
time when the failure occurred. Looks like I really have to upgrade the
OS on the client to FreeBSD 6.1 which has a newer version of IPFilter, where
this issue is supposed to be fixed.

Here's the sendbackup debug file from failed dump:

- cut 
sendbackup: debug 1 pid 91056 ruid 1002 euid 1002: start at Thu Aug 24 22:16:47 
2006
/usr/local/libexec/amanda/sendbackup: version 2.4.5
  parsed request as: program `DUMP'
 disk `/usr'
 device `/usr'
 level 0
 since 1970:1:1:0:0:0
 options `|;auth=BSD;index;'
sendbackup: try_socksize: send buffer size is 65536
sendbackup: time 0.000: stream_server: waiting for connection: 0.0.0.0.11085
sendbackup: time 0.001: stream_server: waiting for connection: 0.0.0.0.11086
sendbackup: time 0.001: stream_server: waiting for connection: 0.0.0.0.11087
sendbackup: time 0.001: waiting for connect on 11085, then 11086, then 11087
sendbackup: time 30.003: stream_accept: timeout after 30 seconds
sendbackup: time 30.003: timeout on data port 11085
sendbackup: time 58.601: stream_accept: connection from 194.126.106.100.11091
sendbackup: time 58.645: stream_accept: connection from 194.126.106.100.11080
sendbackup: time 58.645: pid 91056 finish time Thu Aug 24 22:17:46 2006
- cut 

And from my firewall logs:

- cut 
Aug 24 22:17:01 tsensor ipmon[298]: 22:17:00.258059 fxp0 @0:28 b 
194.126.106.100,11087 -> 213.35.176.146,11085 PR tcp len 20 48 -S IN OOW
Aug 24 22:17:04 tsensor ipmon[298]: 22:17:03.460014 fxp0 @0:28 b 
194.126.106.100,11087 -> 213.35.176.146,11085 PR tcp len 20 48 -S IN OOW
Aug 24 22:17:10 tsensor ipmon[298]: 22:17:09.659603 fxp0 @0:28 b 
194.126.106.100,11087 -> 213.35.176.146,11085 PR tcp len 20 48 -S IN OOW
Aug 24 22:17:22 tsensor ipmon[298]: 22:17:21.862321 fxp0 @0:28 b 
194.126.106.100,11087 -> 213.35.176.146,11085 PR tcp len 20 48 -S IN OOW
- cut 

This indicates that some packets were dropped because they were considered 
out-of-window by the firewall. Note that the port numbers and timestamps 
in the two logs match.

--
Toomas Aas


Broken pipe on a big DLE

2006-08-24 Thread Toomas Aas
, finished in 
0:54 at Thu Aug 24 03:18:09 2006
sendbackup: time 2402.293:  93:  normal(|):   DUMP: 44.47% done, finished in 
0:49 at Thu Aug 24 03:18:28 2006
sendbackup: time 2702.282:  93:  normal(|):   DUMP: 49.93% done, finished in 
0:45 at Thu Aug 24 03:18:39 2006
sendbackup: time 3002.282:  93:  normal(|):   DUMP: 55.30% done, finished in 
0:40 at Thu Aug 24 03:18:57 2006
sendbackup: time 3302.282:  93:  normal(|):   DUMP: 59.90% done, finished in 
0:36 at Thu Aug 24 03:20:21 2006
sendbackup: time 3602.284:  93:  normal(|):   DUMP: 63.92% done, finished in 
0:33 at Thu Aug 24 03:22:23 2006
sendbackup: time 3902.281:  93:  normal(|):   DUMP: 68.38% done, finished in 
0:30 at Thu Aug 24 03:23:35 2006
sendbackup: time 4202.287:  93:  normal(|):   DUMP: 72.81% done, finished in 
0:26 at Thu Aug 24 03:24:40 2006
sendbackup: time 4502.281:  93:  normal(|):   DUMP: 76.80% done, finished in 
0:22 at Thu Aug 24 03:26:11 2006
sendbackup: time 4802.378:  93:  normal(|):   DUMP: 80.26% done, finished in 
0:19 at Thu Aug 24 03:28:12 2006
sendbackup: time 5102.311:  93:  normal(|):   DUMP: 84.16% done, finished in 
0:15 at Thu Aug 24 03:29:31 2006
sendbackup: time 5402.289:  93:  normal(|):   DUMP: 87.76% done, finished in 
0:12 at Thu Aug 24 03:31:05 2006
sendbackup: time 5702.281:  93:  normal(|):   DUMP: 91.87% done, finished in 
0:08 at Thu Aug 24 03:31:56 2006
sendbackup: time 6002.280:  93:  normal(|):   DUMP: 96.28% done, finished in 
0:03 at Thu Aug 24 03:32:23 2006
sendbackup: time 6268.139: index tee cannot write [Broken pipe]
sendbackup: time 6268.139: pid 36026 finish time Thu Aug 24 03:32:57 2006
sendbackup: time 6268.139: 117: strange(?): sendbackup: index tee cannot write 
[Broken pipe]
sendbackup: time 6268.140:  93:  normal(|):   DUMP: Broken pipe
 sendbackup debug 

--
Toomas Aas


Is GNU tar 1.13.25 still good with 2.5.0p2?

2006-08-18 Thread Toomas Aas

Hello!

I'm planning to upgrade my Amanda server (currently 2.4.5) to 2.5.0p2. 
I'm wondering whether GNU tar 1.13.25 is still officially considered a 
good version, or is it absolutely required to upgrade to 1.15.1?


I noticed that when installing Amanda from FreeBSD ports, the 
installation pulls in gtar from ports (archivers/gtar), which is 
currently version 1.15.1. However, my FreeBSD 5.4 seems to include GNU 
tar 1.13.25 installed with base FreeBSD system as /usr/bin/gtar and I 
was thinking, maybe there is no need to have two gtars on my system?


BTW, I currently use dump for backups, so gtar is only used for indexes.


Re: Is GNU tar 1.13.25 still good with 2.5.0p2?

2006-08-18 Thread Toomas Aas

Ian Turner wrote:


On Friday 18 August 2006 11:35, Toomas Aas wrote:

BTW, I currently use dump for backups, so gtar is only used for indexes.


No, Amanda will even use dump to generate indices. So actually you don't need 
tar at all.


Of course, what was I thinking.


Re: Monthly archival tapes

2006-08-08 Thread Toomas Aas

Joe Donner (sent by Nabble.com) wrote:


dumpcycle 0 (to force a full backup on every run, because all our data fit
comfortably onto a single tape every night, and amdump only runs for 4.5
hours)


Lucky you :)


runspercycle 5 days (to do an amdump each day Monday to Friday)
tapecycle 21 tapes (to have 4 weeks' worth of historical backups + one
extra tape for good measure)

How do I now handle taking out one tape a month for long-term archiving?  


If my tapes are labelled daily-1, daily-2, etc., then how do I take out one
tape a month but make sure that this doesn't confuse amanda, and that I will
be able to restore from that tape in the future?  


Just pick the tape you want to take out of the rotation and mark it as 
no-reuse with amadmin.
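For example (the config name "daily" and the label "daily-7" are
placeholders for your own):

```
$ amadmin daily no-reuse daily-7    # keep daily-7 out of the rotation
$ amadmin daily tape                # check which tape Amanda expects next
```

Running "amadmin daily reuse daily-7" later would put the tape back into
rotation.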


Do I add a new tape each time in my numbering sequence?  


IMHO this is the best solution. Just add a new tape and amlabel it 
with the next number.



Can I reuse tape labels but somehow cause amanda to not overwrite the
information needed to do restores from the archived tapes?


I guess you could maybe do this if you really want to. Copy the 
necessary log and index files to somewhere outside the Amanda 
directories, then amrmtape the archived tape and amlabel the new tape 
with the same label. Then you could use amrestore to restore from these 
tapes without indexes, or copy the index and log directories you 
previously moved out of the way back in and use amrecover. But I think 
this way you would just make restoring more difficult for yourself. And 
restoring is the part of backup that really should be easy, because often 
you need to do it quickly with The Boss breathing down your neck :)


In your place I would just keep on labelling new tapes with 
ever-increasing numbers. Actually, this is exactly what I do in my setup.


--
Toomas


Re: FreeBSD client

2006-04-29 Thread Toomas Aas

John Clement wrote:

I can't find any documentation on getting the client working on BSD so I 
started going by all the information I've gleaned troubleshooting the 
Linux machines here.


Getting the client working on FreeBSD is nothing special :) Just install 
the client (I understand that's already done) and add the amandad entry 
to /etc/inetd.conf.


 I can't find a .amandahosts file, do I need to create this and if so
 where?  Or should this information go somewhere else?

This file needs to be in the home directory of the amanda user (in your 
case, operator). You can find out what this directory is by running

chsh operator.

I believe the default home directory for operator on FreeBSD is /

Generally when installing Amanda on FreeBSD I prefer to not use operator 
but still create a special 'amanda' user and add this user to group 
'operator'. Operator's standard home directory being / is one of the 
reasons - I don't like the idea of amanda files living directly under /. 
The alternative is to change the home directory of operator to somewhere 
else, which I also don't like. I *have* done it on some machines, 
though, and they have been running for several years without problems.


I assume /tmp/amanda should exist on the machine and be writable and 
owned by operator:operator (operator being the default username the 
client seems to install as, and operator being BSD's equivalent of the 
'disk' group), is this so?


Yes. But this directory should be created automatically by Amanda.


Re: Firewall problems with Amanda

2006-04-27 Thread Toomas Aas

Mary Kennedy wrote:


My server is configured with the following config statement:
./configure --prefix /opt/backups --with-user=amandaUser 
--with-group=amandaGroup --with-smbclient=/usr/bin/smbclient 
--with-gnutar=/bin/tar 
--with-gnutar-listdir=/opt/backups/var/amanda/gnutar-lists/ 
--with-index-server=amandaServerName 
--with-tape-server=amandaServerName --with-tape-device=/dev/nst0 
--with-portrange=850,859 --with-udpportrange=850,859


I believe the TCP portrange should be unprivileged (ports above 1024).
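So the configure invocation would keep the UDP range privileged but move
the TCP range above 1024, something like this (the exact range 11000,11040
is just an example; it must be wide enough for your parallelism, and the
"..." stands for the rest of the original flags):

```
$ ./configure ... --with-udpportrange=850,859 --with-portrange=11000,11040
```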

--
Toomas Aas 
|arvutivõrgu peaspetsialist | head specialist on computer networks|
|Tartu Linnakantselei   | Tartu City Office   |
- +372 736 1274




Re: dump larger than available tape space

2006-04-17 Thread Toomas Aas

David Leangen wrote:


Apparently, my backup is larger than my tape space. But, shouldn't it
span multiple tapes? If I'm not mistaken, this is what happened in
earlier versions.


It should be vice versa, actually. The possibility of a single DLE spanning 
multiple tapes is new in version 2.5, so this should not have happened 
in earlier versions - unless you were using the tape spanning patch 
which existed outside the official Amanda source tree for the 2.4 versions.


The entire backup could span multiple tapes even with version 2.4 if you 
were using a changer (as I see you were) and runtapes was set to more 
than 1 in your config file. But every single DLE had to fit on one tape.


--
Toomas Aas 
|arvutivõrgu peaspetsialist | head specialist on computer networks|
|Tartu Linnakantselei   | Tartu City Office   |
- +372 736 1274




Re: maxpromoteday

2006-04-10 Thread Toomas Aas

04/10/06 kirjutas Paul Bijnens [EMAIL PROTECTED]:


The Amanda planner schedules a level 0 of a DLE on the due date.
To see which one that is, do "amadmin config due".
Then Amanda looks at the total amount of level 0's, and compares
it to the balanced amount (total amount / runspercycle).
If the amount is less than balanced, Amanda will try to promote
some level 0's that are due in the next few days, to run this day.

The maxpromoteday means that Amanda will not promote this DLE
more than this number of days in advance.


Thanks, now I understand. I think I'll leave my smaller DLEs at their  
defaults and set maxpromoteday for some larger DLEs to 2.
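Concretely, that would be a per-dumptype override, roughly like this (the
dumptype name is an example; maxpromoteday is the directive discussed
above):

```
define dumptype big-dle {
    global
    comment "large DLE: promote its level 0 at most 2 days early"
    maxpromoteday 2
}
```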


--
Toomas Aas


maxpromoteday

2006-04-09 Thread Toomas Aas

Hello!

I'm trying to avoid the situation where some DLEs get multiple level 0 
backups during the dumpcycle. This is because I'm running with vtapes 
that somewhat overcommit the actual capacity of the 
backup hard disk.


It seems from the documentation that I should set 'maxpromoteday' to 2, 
1 or even 0. That's fine, I'll do that. But I must admit that I don't 
quite understand the meaning of the maxpromoteday parameter, except that 
setting it to a smaller number causes Amanda to promote dumps less 
aggressively. Can anyone explain it? If I set maxpromoteday to, say, 3, 
then how does this '3' actually influence the planner?


--
Toomas Aas


Re: number of drives in a RAIT system?

2006-04-09 Thread Toomas Aas

stan wrote:


I'm wondering if I can do something with, say, 5
old DDS2 drives to give me a decent size?


AFAIK, and if I may loosely compare RAIT to RAID, there is no RAIT0, so 
to speak. Meaning that RAIT doesn't give you a larger 'virtual' backup 
medium; it merely allows you to 'mirror' your backup to two (or more?) 
devices.


Amanda can do what you want, but this doesn't involve RAIT. If you want 
a single backup run to use 5 DDS2 tapes, you should set 'runtapes' to 5 
in amanda.conf and configure a changer. If you don't have actual tape 
changer hardware, you can configure a 'human' changer using chg-manual.
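As a sketch, the relevant amanda.conf bits would be roughly:

```
runtapes 5                  # use up to 5 tapes per amdump run
tpchanger "chg-manual"      # prompts the operator to insert the next tape
```

Note that with the 2.4 series each individual DLE must still fit on a
single tape; runtapes only lets the whole run spread across several.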


--
Toomas Aas


Experience with LTO2 media

2006-04-05 Thread Toomas Aas

Hello!

Just curious - what are people's experiences with different brands of LTO2
media? I've lost two HP LTO2 tapes within last two months due to the drive
reporting 'media write errors'. Should I switch to some other manufacturer?

--
Toomas Aas


Re: amcheck test

2006-04-05 Thread Toomas Aas

04/05/06 kirjutas Salvatore Enrico Indiogine:


Amanda Tape Server Host Check
-
Holding disk /space/amandahold/test: 356151020 kB disk space
available, using 356151020 kB
amcheck-server: slot 3: date 20060401 label TEST-3 (active tape)
amcheck-server: slot 4: date 20060404 label TEST-4 (active tape)
amcheck-server: slot 5: date 20060405 label TEST-5 (active tape)
amcheck-server: slot 1: not an amanda tape
amcheck-server: slot 2: date 20060331 label TEST-2 (active tape)
ERROR: new tape not found in rack
   (expecting a new tape)


...


I guess that amanda is not able to reuse tape 1. Correct guess?


Yes.


 If yes, why not?


It seems that the label on vtape 1 has somehow been destroyed. Because  
of this, Amanda doesn't think that this tape belongs to her. What  
are the contents of directory /space/vtapes/test/slots/slot1 ?



The tapecycle=5, thus amanda should know that it
needs to go back to 1 after 5 and I have labelled all of them at the
beginning using amlabel.


If the label on vtape 1 were correct, that's exactly what would happen.

--
Toomas Aas



Re: Question about vtape size

2006-02-03 Thread Toomas Aas

Jon LaBadie wrote:


How are current users sizing their vtapes and what has
been their experience in disk usage?


I've divided my ~160GB removable disks into 5 vtapes each. I have a total of 
8 DLEs, two of which are 40 GB (after compression) and the rest are no 
larger than 2 GB. On the days when no large DLEs get a level 0, vtape 
usage is only 4 GB or so, whereas on other days it is, of course 40 GB.


So I've set the vtape size to 51 GB. 5x51 > 160, but since most of the slots 
get filled to less than 10% of the vtape size, this is no problem. It has 
generally worked well for the past couple of years. Interestingly enough, 
I'm having my first problem with this approach just this week. For the past 
couple of weeks, the changing of removable disks has been repeatedly 
delayed/forgotten, hence Amanda is desperately trying to do level 0 of 
everything. Once the disk got replaced, it started writing new level 0s of 
the large DLEs to the first slots on the disk, while the previous level 0s 
were still in the last slots. So the disk filled up. We've forgotten to 
replace the disks on time before, but this is the first time it has caused 
such a problem. Probably because this time we forgot it really badly, for 
several weeks in a row.
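The over-commit described above works out like this (a trivial sketch of
the arithmetic):

```shell
# 5 vtapes of 51 GB each nominally promise more space
# than the 160 GB disk actually has.
slots=5; vtape_gb=51; disk_gb=160
committed=$(( slots * vtape_gb ))
echo "committed ${committed} GB on a ${disk_gb} GB disk"
```

This is safe only as long as typical runs fill each slot well below its
nominal size, which is exactly the assumption that broke when the disk
swap was forgotten for several weeks.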


I guess if I split up these large DLEs into smaller pieces using 
excludes, the problem wouldn't be as bad.


When the disks are changed properly, they get filled to ~67%.


--
... Press any key to continue or any other key to quit.


Re: restore amanda dumps from dead linux machine to solaris 9

2005-11-30 Thread Toomas Aas

Joshua Kuperman wrote:

While I have no problem dumping the files on 
the Solaris machine, I can't seem to figure out what is in them. Except 
for a few text characters on the tape label I'm stumped. The dd command 
works as expected. I don't remember the order, etc. and I don't have the 
output from any of the jobs, which would have told me what partitions 
from what machines were on which files on the tape.


The first 32 KB block of each tape file should contain a text message 
with the host and DLE name, the date when the backup was made, and the 
command needed to extract the backup. Are you saying your tape files 
don't have this?
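You can inspect that header with dd, e.g. (the device name below is an
example Solaris no-rewind device; the header is plain text, so strings
and head are enough to read it):

```
$ mt -f /dev/rmt/0n rewind
$ mt -f /dev/rmt/0n fsf 1           # skip the tape label, position at first dump
$ dd if=/dev/rmt/0n bs=32k count=1 | strings | head
```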




--
... Jesus has changed your life. Save changes (Y/N)?


Re: Dell PowerVault 122T LTO2 tapetype sanity check.

2005-10-28 Thread Toomas Aas

Jon LaBadie wrote:


On Wed, Oct 26, 2005 at 04:33:19PM -0700, Alan Jedlow wrote:


Hi,

I know the docs say to turn hardware compression
off before running amtapetype, unfortunately I've
found no info on how to disable hw compression on a
Dell PowerVault 122T LTO2 library.

How bad is this result ?

---
-bash-3.00$ /usr/sbin/amtapetype -e 200g -f /dev/nst0 -t Ultrium2
Writing 4096 Mbyte   compresseable data:  40 sec
Writing 4096 Mbyte uncompresseable data:  350 sec
WARNING: Tape drive has hardware compression enabled
Estimated time to write 2 * 204800 Mbyte: 35000 sec = 9 h 43 min
wrote 6422528 32Kb blocks in 98 files in 6420 seconds (short write)
wrote 6422528 32Kb blocks in 196 files in 6719 seconds (short write)
define tapetype Ultrium2 {
  comment just produced by tapetype prog (hardware compression on)
  length 200704 mbytes
  filemark 0 kbytes
  speed 31300 kps
}
---




Most drives use a compression algorithm that expands,
rather than compresses, data that is random or nearly so.

LTO drives have the nearly unique feature of dealing with
this problem by sensing the expansion potential and
turning off compression themselves.


This is confirmed by the length parameter in the produced tapetype 
above. 200 GB is the native length of an LTO2 tape, so it seems that the 
drive did not try to compress the incoming data. Looks like there's no 
reason to worry.


--
Toomas Aas 
|arvutivõrgu peaspetsialist | head specialist on computer networks|
|Tartu Linnakantselei   | Tartu City Office   |
- +372 736 1274




