Re: amstatus question

2007-11-09 Thread Paul Lussier
Krahn, Anderson [EMAIL PROTECTED] writes:

 While a DLE is dumping to tape, does the amstatus page dynamically
 update the amount dumped to tape during the dump, or does it wait
 until it's done?

 It's been sitting pretty at 15071m for some time.

I'm fairly certain it updates dynamically, though slowly.
You can verify this by using the watch command on amstatus:

 watch -n 2 'amstatus config | grep dumping'

Also, I wrote the following script to keep an eye on amstatus.  It's
just a wrapper around amstatus, but pulls out only the most
interesting information from it.  You can see the status of the DLEs
changing in something approximating real time.

-- 
Thanks,
Paul

#!/bin/sh

DEFAULT='weekly'
CONF=${1:-$DEFAULT}
AMSTAT_CMD="amstatus $CONF"
AMSTAT_FLAGS='--dumping --dumpingtape --waitdumping --waittaper --writingtape'
TMPFILE=/tmp/stat.$$
SLEEPTIME=60

cleanup () {
    rm -f $TMPFILE
    exit 1
}

trap cleanup HUP INT TERM

clear

while true
do
    estimate=`$AMSTAT_CMD --gestimate | grep -v Using`
    if [ -n "$estimate" ]; then
        $AMSTAT_CMD --gestimate | grep -v Using
    else
        $AMSTAT_CMD $AMSTAT_FLAGS > $TMPFILE
        dumping=`egrep '(k|m|g) (dump|flush)ing' $TMPFILE`
        writing=`egrep '(k|m|g) writing to' $TMPFILE`
        action=`echo "$dumping" | perl -pe 's/.* (\w+ing).*/\u$1/'`
        # count=`awk '!/^Using/ && /wait for (dump|flush)|(writing to|dumping)/ {print $1}' $TMPFILE | wc -l`
        count=`awk '!/^Using/ && /wait|dump|flush|writing to/ {print $1}' $TMPFILE | wc -l`
        date
        echo "Waiting on: $count file systems"
        echo ""
        if [ -n "$dumping" ]; then
            echo "$action:"
            echo "$dumping" | perl -pe 's/\) (\w)/\)\n$1/g;s/dumping//g' |
                awk '{printf "%-31s %s %3s %3s %s %s\n",$1,$2,$3,$4,$5,$6}'
            echo ""
        fi

        if [ -n "$writing" ]; then
            printf "Writing:"
            echo "$writing" | perl -pe 's/\) (\w)/\)\n$1/g;s/writing to tape//g' |
                awk '{print $1 $2,$3,$4,$5,$6}'
            echo ""
        fi

        TAPES_CMD=$($AMSTAT_CMD --summary | awk '/^ +tape/ {print}')
        if [ -n "$TAPES_CMD" ]; then
            echo "Tapes written to so far:"
            echo "$TAPES_CMD"
            echo ""
        fi

        if [ "$count" -eq 0 ]; then
            cleanup
        fi

        # Print the file systems still waiting to be dealt with
        awk '!/^Using|(k|m|g) (writ|dump|flush)ing|^ *$/' $TMPFILE | colrm 30 40
    fi
    sleep $SLEEPTIME
    clear
done


multiple lbl-templ lines?

2007-10-22 Thread Paul Lussier

Hi all,

Is there a way to use multiple lbl-templ lines simultaneously?

I'd really like to print out the DLT labels to send offsite with my
tapes inside the cases, but it would be really nice to also use the
8.5x11 template to print out a page to keep onsite for quick reference.

We have some people who are allowed to recall tapes from our offsite
facility, but who don't actually manage/manipulate amanda.  It would
be nice to have them be able to reference a binder containing the
latest reports and be able to tell them to recall all tapes necessary
for a restore.

I can also see this being useful in a scenario where you need to
rebuild your amanda server, which used to back itself up :)

-- 
Thanks,
Paul


Re: multiple lbl-templ lines?

2007-10-22 Thread Paul Lussier
Jean-Louis Martineau [EMAIL PROTECTED] writes:

 You can only define one lbl-templ, but you can easily print another one:

 amreport config -o TAPETYPE:XYZ:lbl-templ=/path/to/label-8.5x11.ps
 -l /path/to/logdir/log.x
 Replace XYZ by the tapetype you use.

Ahhh, right.  Thank you for pointing that out!

-- 
Thanks,
Paul


driver failed

2007-10-17 Thread Paul Lussier

Hi all,

I'm seeing an error I've never seen before, and google didn't turn up
anything useful.  amstatus is reporting this for several file systems
on different machines:

  ra:/var 0 driver: (aborted:[request failed: error sending REQ: send
  REQ to resource-assembly.permabit.com failed: Transport endpoint is
  already connected])(too many dumper retry)

Could this be a result of too many dumpers running?  I've got maxdumps
at 25 and inparallel set to 32.  The udp port range is only 840-860,
which means I have at least 5 too few ports to bind to, right?  Could
the cause for these errors be that I'm simply trying to do too many
dumps with too few ports?

If so, then either increasing the number of ports or decreasing the
number of simultaneous dumps ought to solve the problem, right?
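For what it's worth, the port arithmetic is easy to sanity-check in the
shell. The port range and dumper counts below are the ones quoted above;
the shortfall calculation is my own back-of-the-envelope figure, not
anything amanda reports:

```shell
# Back-of-the-envelope check: configured UDP port range vs. the number
# of simultaneous dumpers (values taken from this message).
first=840
last=860
inparallel=32

ports=$((last - first + 1))        # 21 usable ports
shortfall=$((inparallel - ports))  # dumpers with no port to bind to

echo "ports=$ports shortfall=$shortfall"
```

If the shortfall is positive, either widening --with-udpportrange at
configure time or lowering inparallel/maxdumps should relieve the
pressure.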

-- 
Thanks,
Paul


Re: sendsize finishes, planner doesn't notice...

2007-10-12 Thread Paul Lussier
Jean-Louis Martineau [EMAIL PROTECTED] writes:

 If sendsize is not running, it's because it crashed.

Hmm, it's definitely not running, but I don't see any trace of a crash.
Is there more verbose logging that can be turned on somewhere?

 I don't understand why amandad finish before sendsize, can you post
 complete amandad and sendsize debug files.

Of course, attached.  amdump is still running, btw, so I can send that
log, or any other that's useful.

Thanks again!
-- 
Thanks,
Paul



sendsize.20071009224835.debug.bz2
Description: sendsize debug log


amandad.20071009224834.debug.bz2
Description: amandad debug log


Re: sendsize finishes, planner doesn't notice...

2007-10-12 Thread Paul Lussier
Jean-Louis Martineau [EMAIL PROTECTED] writes:

 Why you never posted the error in the amandad debug file?

I thought I had.  I've got etimeout set to 72000, so seeing it timeout
near 21000 set off alarms for me.

 ---
 amandad: time 21603.544: /usr/local/libexec/sendsize timed out waiting
 for REP data
 amandad: time 21603.781: sending NAK pkt:
 
 ERROR timeout on reply pipe

 ---

 amanda have a timeout of 6 hours (21600 seconds).

Can you point me to where in the docs this is mentioned?  I've never
seen it mentioned before (though I wasn't really looking for it) and
I can't seem to find it anywhere right now (running on no sleep and no
caffeine!)

 You can change it in amanda-src/amandad.c
 Change the value of REP_TIMEOUT.

 Since the estimate is really slow, you could try calcsize or server.

I had intentionally avoided using either of those because:

 a) I'm trying to set up a new configuration which has no history, and
the 'server' option indicates it needs historical data to estimate with.

 b) I wanted to use 'client' to be as accurate as possible in order to
create the historical data 'server' requires so I could eventually
switch to that.

I notice that in 'man amanda.conf', for the estimate and
(c,d,e)timeout parameters there is no mention of what the maximum
timeout is (it must be in there somewhere, I'm just not finding it...)

I set my (e,d)timeout to 72000, or 20 hours. Could there be mention in
the documentation of what the max timeout is (21600) closer to the
various timeout parameters, *or* some kind of warning if amanda.conf
has timeout parameters which are set in excess of compiled in limits?

Also, is there some means of checking the amanda.conf file for these
types of parameter violations?  If not, I could probably come up with
a config-file parser/checker like this (with a little guidance) if
people were interested. My complete ignorance of the code base informs
me: It's just a simple perl script. No, really! :)
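To make the idea concrete, here is a rough sketch of the kind of check
I have in mind, hard-coding the 21600-second client limit discussed in
this thread. The config fragment it checks is written inline for
illustration, using the values from my amanda.conf:

```shell
# Rough sketch: flag amanda.conf timeout settings that exceed the
# compiled-in 6-hour (21600 s) client reply limit.
LIMIT=21600
SAMPLE=/tmp/amanda-conf-sample.$$

# Hypothetical config fragment with the values from this thread.
cat > $SAMPLE <<'EOF'
etimeout  72000  # 20 hours
dtimeout  72000
ctimeout  30
EOF

# Print a warning for each timeout parameter above the limit.
warnings=$(awk -v limit=$LIMIT '
    $1 ~ /^(etimeout|dtimeout|ctimeout)$/ && $2 + 0 > limit {
        printf "%s = %s exceeds client limit of %s\n", $1, $2, limit
    }' $SAMPLE)
echo "$warnings"
rm -f $SAMPLE
```

A real checker would read the configuration path as an argument instead
of a temp file, but the awk pattern is the whole idea.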

Thanks for hitting me with a clue.  I'll go recompile now :)
-- 
Thanks,
Paul



Re: sendsize finishes, planner doesn't notice...

2007-10-12 Thread Paul Lussier
Jean-Louis Martineau [EMAIL PROTECTED] writes:

 Paul Lussier wrote:
 You should add a spindle for dle on the same physical disk, it can be
 a lot faster.
 
 I don't understand this statement.  Could you clarify please?
   
 man amanda
 search for spindle
 All DLE of a physical disk should have the same spindle (0).
 It's generally faster to run them sequentially instead of in parallel,
 just think about head movement.

Ahh, right, I've been down that route before.  This system is an NFS
appliance like a NetApp containing ~5TB striped across a single RAID5
array.  In this case, head movement (i.e. thrashing) isn't an issue.

Consider for a moment an NFS server with 20 exports, all on the same
spindle, being accessed simultaneously by several hundred clients.
Since the specs on this file server are supposed to handle this
scenario, having one of those clients doing simultaneous recursions of
all its exports should hardly put any stress on the system.

In fact, when I had spindles set on the individual DLEs such that the
backups occurred sequentially, the estimate was taking far longer than
it is now.  Currently the estimates for these DLEs in parallel are at
about 9 hours.  Sequentially we were looking at somewhere close to 35.

Several of those DLEs are up around 500-600GB each, and therefore
*each one* takes close to 9 hours.  The aggregate time when done
sequentially is 9n, where n=# of DLEs of that similar size.

I think running them in parallel is fine; we just need to get amandad
and sendsize to cooperate.
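The back-of-the-envelope math above, spelled out as shell arithmetic.
The 9 hours per DLE is from this thread; the DLE count of 4 is
illustrative, since the thread only says "several":

```shell
# Sequential vs. parallel estimate time for the large DLEs
# (9 hours each, per the numbers above; n=4 is illustrative).
hours_per_dle=9
n=4

sequential=$((hours_per_dle * n))  # 9n: one DLE after another
parallel=$hours_per_dle            # bounded by the slowest DLE

echo "sequential=${sequential}h parallel=${parallel}h"
```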

 Are you suggesting this is currently possible, or that it might be a
 good solution for the future? 

 for future.

That's what I suspected.  

Thank you for all your help.  I've recompiled with a higher
REP_TIMEOUT (15) and am re-running the test to be sure that's it.

Provided I don't run into any more problems after that, I'll likely
set my estimates to 'calcsize' for the next test and see what happens.

-- 
Thanks,
Paul


Re: sendsize finishes, planner doesn't notice...

2007-10-12 Thread Paul Lussier
Jean-Louis Martineau [EMAIL PROTECTED] writes:

 Paul Lussier wrote:

 Can you point me to where in the docs this is mentioned?

 It's not documented; it's not a server limit, it's a client limit we
 added to be sure amandad will eventually terminate.

Ahh, that's why I never knew about it :) Perhaps some mention of it
could be made in the docs for the next release.  With storage sizes
only ever increasing, it's probably only a matter of time before
someone else runs into this (if they're lucky, they'll search these
archives :)
   
 historical data are built from successful backups; the first estimate
 will be way off, but it will learn.

Oh, okay.  I didn't realize it could learn that way.

 You should add a spindle for dle on the same physical disk, it can be
 a lot faster.

I don't understand this statement.  Could you clarify please?

 A solution could be to add an 'etimeout' in amanda-client.conf,
 amandad could use it instead of REP_TIMEOUT.
 Maybe the server could send its own timeout to amandad.

Are you suggesting this is currently possible, or that it might be a
good solution for the future?  I saw in amandad.c there are comments
mentioning that REP_TIMEOUT and ACK_TIMEOUT should be configurable.
I think that's a good future direction :)
-- 
Thanks,
Paul


Re: [amanda-2.5.2p1] Bug in chg-zd-mtx: prefix Missing

2007-10-12 Thread Paul Lussier
Svend Sorensen [EMAIL PROTECTED] writes:

 chg-zd-mtx is missing a line for the configured prefix.  It has a line
 for @exec_prefix@, but if that wasn't specified during configure, it
 defaults to @[EMAIL PROTECTED]  This results in the following block after
 `make`.

Heh, I just ran across that this afternoon as well and was going to
submit the same patch :)

-- 
Thanks,
Paul


Re: sendsize finishes, planner doesn't notice...

2007-10-11 Thread Paul Lussier
Jean-Louis Martineau [EMAIL PROTECTED] writes:

 It's weird.

 Do you have an amdump log file or just amdump.1?
 The only way to get this is if you killed amanda process on the
 server, maybe a server crash.
 Do you still have amanda process running on the server?

I do now. I started amanda off Tuesday night at Tue Oct  9 22:48:34 2007.

According the /var/log/amanda/amandad/amandad.20071009224834.debug file:

  amandad: time 21604.147: pid 26218 finish time Wed Oct 10 04:48:39 2007

According to sendsize.20071009224835.debug:

amanda2:/var/log/amanda/client/offsite# tail sendsize.20071009224835.debug 
errmsg is /usr/local/libexec/runtar exited with status 1: see 
/var/log/amanda/client/offsite/sendsize.20071009224835.debug
sendsize[26687]: time 37138.237: done with amname /permabit/user/uz dirname 
/permabit/user spindle -1
sendsize[26379]: time 37823.330: Total bytes written: 541649408000 (505GiB, 
14MiB/s)
sendsize[26379]: time 37823.453: .
sendsize[26379]: time 37823.453: estimate time for /permabit/user/eh level 0: 
37823.251
sendsize[26379]: time 37823.453: estimate size for /permabit/user/eh level 0: 
528954500 KB
sendsize[26379]: time 37823.453: waiting for runtar /permabit/user/eh child
sendsize[26379]: time 37823.453: after runtar /permabit/user/eh wait
errmsg is /usr/local/libexec/runtar exited with status 1: see 
/var/log/amanda/client/offsite/sendsize.20071009224835.debug
sendsize[26379]: time 37823.537: done with amname /permabit/user/eh dirname 
/permabit/user spindle -1

So, sendsize claims to be done, yet planner doesn't think so:

  planner: time 16531.383: got partial result for host amanda2 disk \
 /permabit/user/uz: 0 - -2K, -1 - -2K, -1 - -2K
  [...]
  planner: time 16531.384: got partial result for host amanda2 disk \
 /permabit/user/eh: 0 - -2K, -1 - -2K, -1 - -2K

amdump is currently still running, amandad has finished, but we're
still waiting for estimates which will never arrive.

I also find it disturbing that the debug log I'm looking at,
sendsize.20071009224835.debug, tells me to look at the log I'm looking
at for further information:
 
errmsg is /usr/local/libexec/runtar exited with status 1: see \
/var/log/amanda/client/offsite/sendsize.20071009224835.debug

Any idea why amandad is dying before sending the estimate data back to
the planner?  My etimeout is currently set to:

  # grep timeout /etc/amanda/offsite/amanda.conf
  etimeout  72000  # number of seconds per filesystem for estimates.
  dtimeout  72000 # number of idle seconds before a dump is aborted.
  ctimeout30  # maximum number of seconds that amcheck waits
  amanda2:/var/log/amanda/server/offsite# su - backup -c 'amadmin offsite 
config' | grep -i timeout
  ETIMEOUT  72000
  DTIMEOUT  72000
  CTIMEOUT  30

  amanda2:/var/log/amanda/server/offsite# /usr/local/sbin/amgetconf offsite 
etimeout
72000

su - backup -c 'amadmin offsite version'
build: VERSION=Amanda-2.5.2p1
   BUILT_DATE=Tue Sep 4 15:45:27 EDT 2007
   BUILT_MACH=Linux amanda2 2.6.18-4-686 #1 SMP Mon Mar 26 17:17:36 UTC 
2007 i686 GNU/Linux
   CC=gcc-4.2
   CONFIGURE_COMMAND='./configure' '--prefix=/usr/local' '--enable-shared' 
'--sysconfdir=/etc' '--localstatedir=/var/lib' 
'--with-gnutar-listdir=/var/lib/amanda/gnutar-lists' 
'--with-index-server=localhost' '--with-user=backup' '--with-group=backup' 
'--with-bsd-security' '--with-amandahosts' 
'--with-smbclient=/usr/bin/smbclient' '--with-debugging=/var/log/amanda' 
'--with-dumperdir=/usr/lib/amanda/dumper.d' '--with-tcpportrange=5,50100' 
'--with-udpportrange=840,860' '--with-maxtapeblocksize=256' 
'--with-ssh-security'
paths: bindir=/usr/local/bin sbindir=/usr/local/sbin
   libexecdir=/usr/local/libexec mandir=/usr/local/man
   AMANDA_TMPDIR=/tmp/amanda
   AMANDA_DBGDIR=/var/log/amanda CONFIG_DIR=/etc/amanda
   DEV_PREFIX=/dev/ RDEV_PREFIX=/dev/ DUMP=UNDEF
   RESTORE=UNDEF VDUMP=UNDEF VRESTORE=UNDEF XFSDUMP=UNDEF
   XFSRESTORE=UNDEF VXDUMP=UNDEF VXRESTORE=UNDEF
   SAMBA_CLIENT=UNDEF GNUTAR=/bin/tar
   COMPRESS_PATH=/bin/gzip UNCOMPRESS_PATH=/bin/gzip
   LPRCMD=/usr/bin/lpr MAILER=/usr/bin/Mail
   listed_incr_dir=/var/lib/amanda/gnutar-lists
defs:  DEFAULT_SERVER=localhost DEFAULT_CONFIG=DailySet1
   DEFAULT_TAPE_SERVER=localhost HAVE_MMAP NEED_STRSTR
   HAVE_SYSVSHM LOCKING=POSIX_FCNTL SETPGRP_VOID DEBUG_CODE
   AMANDA_DEBUG_DAYS=4 BSD_SECURITY RSH_SECURITY USE_AMANDAHOSTS
   CLIENT_LOGIN=backup FORCE_USERID HAVE_GZIP
   COMPRESS_SUFFIX=.gz COMPRESS_FAST_OPT=--fast
   COMPRESS_BEST_OPT=--best UNCOMPRESS_OPT=-dc


Am I missing something extremely obvious?  I've been using amanda for
over a decade, and I can't figure out why she's behaving like this.

If there's any more information you need in order to help me figure
this out, please let me know, the suspense here is killing me :)

-- 
Thanks,
Paul


Excluding browser cache directories

2007-10-05 Thread Paul Lussier

Hi all,

Has anyone come up with a decent way to exclude browser cache directories?
I see lots of errors in my logs like:

 ./foo/.mozilla/firefox/1914opvg.Remote/Cache/0D5C1D68d01: Warning: Cannot 
stat: No such file or directory

The problem seems to be the complete lack of standard locations for
browser caches combined with the ridiculous number of possible
browsers.

I suppose the easiest thing to do is run find on my user partitions to
locate them all, then enumerate them, but I was hoping someone here
might have a better solution.

-- 
Thanks,
Paul


sendsize finishes, planner doesn't notice...

2007-10-04 Thread Paul Lussier

Hi all,

I'm using amanda 2.5.1p1-2.1 from Debian/stable.

I have several file systems which take hours to estimate and dump.
My amanda.conf contains:

  etimeout  10800 # 3 hours
  dtimeout   7200 # 2 hours
  ctimeout 30

My sendsize log reports the following:

  $ egrep 'estimate (time|size) for' sendsize.20071003113105.debug \
    | grep '/permabit/user' | sort
  ...
  sendsize[8132]: estimate size for /permabit/user/eh level 0: -1 KB
  sendsize[8132]: estimate time for /permabit/user/eh level 0: 18804.285
  sendsize[8136]: estimate size for /permabit/user/il level 0: 470515080 KB
  sendsize[8136]: estimate time for /permabit/user/il level 0: 33523.568
  sendsize[8137]: estimate size for /permabit/user/mp level 0: 388366900 KB
  sendsize[8137]: estimate time for /permabit/user/mp level 0: 31830.040
  sendsize[8144]: estimate size for /permabit/user/qt level 0: 438384190 KB
  sendsize[8144]: estimate time for /permabit/user/qt level 0: 33232.123
  sendsize[8151]: estimate size for /permabit/user/uz level 0: 502958220 KB
  sendsize[8151]: estimate time for /permabit/user/uz level 0: 33453.437
  sendsize[8301]: estimate size for /permabit/user/assar level 0: 169842670 KB
  sendsize[8301]: estimate time for /permabit/user/assar level 0: 15124.977

I'm assuming that the number which is not in KB is in seconds.  Which
means that the lowest one of these took over 5 hours to complete, and
I need to increase both (e,d)timeout to at least 9 hours to accommodate
the highest of these.
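Assuming the last field of those "estimate time" lines is elapsed
seconds (which the reasoning above also assumes), a quick awk
conversion to hours looks like this; the sample lines are copied from
the log excerpt above:

```shell
# Convert sendsize "estimate time" lines from seconds to hours,
# assuming the last field is elapsed seconds and the 5th field is
# the DLE path, per the log format shown above.
hours=$(awk '/estimate time for/ {
    printf "%s %.1f h\n", $5, $NF / 3600
}' <<'EOF'
sendsize[8136]: estimate time for /permabit/user/il level 0: 33523.568
sendsize[8151]: estimate time for /permabit/user/uz level 0: 33453.437
EOF
)
echo "$hours"
```

In real use, point the awk at the sendsize debug file instead of the
here-document.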

The strange thing is that all these estimates *did* complete from what
I can tell in the sendsize log.  Yet the planner doesn't seem to think
they have:

  $ amstatus offsite | grep getting
  amanda2:/permabit/release  getting estimate
  amanda2:/permabit/user/eh  getting estimate
  amanda2:/permabit/user/il  getting estimate
  amanda2:/permabit/user/mp  getting estimate
  amanda2:/permabit/user/qt  getting estimate
  amanda2:/permabit/user/uz  getting estimate

I *assume* it's because of the timeout bug in amanda 2.5.1:

  $ amadmin offsite config | grep -i timeout
  ETIMEOUT  22
  DTIMEOUT  21
  CTIMEOUT  190030

Which seems to indicate that planner is going to sit around for 61+
hours waiting for estimates to show up?  What I'm not quite certain
of, though, is why planner doesn't notice that these DLEs have
completed.  It noticed all the other DLEs have completed their
estimate phase, so why not these?

Is there something in the logs I can look for to determine how planner
notices that sendsize has completed for a given DLE?
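One place to look, as a hedged guess: the planner logs a "got
result"/"got partial result" line per DLE, so grepping the amdump log
for those should show which estimates the planner has actually
received. The sample lines below are adapted from planner output
quoted elsewhere in this thread; the real log path (e.g.
/var/log/amanda/offsite/amdump.1) is an assumption:

```shell
# Count the per-DLE result lines the planner has logged so far.
# In real use, grep the running amdump log instead of this sample.
results=$(grep -E 'planner:.*got (partial )?result for host' <<'EOF'
planner: time 16531.383: got partial result for host amanda2 disk /permabit/user/uz: 0 - -2K
planner: time 16531.384: got partial result for host amanda2 disk /permabit/user/eh: 0 - -2K
EOF
)
echo "$results" | wc -l
```

Comparing that count against the number of DLEs in the disklist would
show which estimates are still outstanding.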

-- 
Thanks,
Paul


Re: sendsize finishes, planner doesn't notice...

2007-10-04 Thread Paul Lussier
Jean-Louis Martineau [EMAIL PROTECTED] writes:

 It's weird.

 Do you have an amdump log file or just amdump.1?
 The only way to get this is if you killed amanda process on the
 server, maybe a server crash.
 Do you still have amanda process running on the server?

No, the reason it's a .1 is because I killed the process on the server
after 12 hours of inactivity.  I'm currently running another dump
attempt with locally compiled 2.5.2 vs. the Debian package.  My theory
is that 2.5.2 doesn't have this problem.

I could have let it run to completion, but it would have taken 3 days or so...
-- 
Thanks,
Paul


Re: amanda not dumping in parallel?

2007-10-03 Thread Paul Lussier
Jean-Louis Martineau [EMAIL PROTECTED] writes:

 What's the maxdumps setting?

DOH!  I don't actually have that one set, so it's defaulting to 1 :(
-- 
Thanks,
Paul


Re: amanda not dumping in parallel?

2007-10-03 Thread Paul Lussier
Chris Hoogendyk [EMAIL PROTECTED] writes:

 Could you post your config file?

Sure, no problem.  See below.

 There are a couple of things that could cause this. One example would
 be if you don't have a holding disk.

Nope, I've got a 2TB holding disk.

 If you are going direct to tape, then it won't dump in parallel

Right, actual dumps seem to happen in parallel, just not the
estimates.  Which I think might be the maxdumps setting Jean-Louis
pointed out.  I for some reason had overlooked that setting and it was
using the default.

 If that is your configuration, it could also contribute to your
 speed issues in other ways, for example causing shoe-shining on
 your LTOs, which would slow things down more. I don't think I've
 seen an answer on that question yet.

I think there's more going on with that than just amanda performance.
I think this server is completely mis-configured, I think our network
is probably suffering from the same misconfiguration, as is the NAS
we're trying to back.  All three were put together by the same person
who has since left.  I'm inheriting multiple messes which impact each
other significantly and it's impossible to tell the root cause of each
of the various problems.

 There could also be issues with what partitions or spindles things are
 mounted on, and contention from that perspective (referring to the
 speed issue).

Everything there is on a NAS, so technically everything is striped out
over N drives in a RAID5 array.  I was thinking that you'd have
everything virtually on the same spindle in this case, but then it
was pointed out that we have 300 other systems NFS mounting from this
NAS.  So, if one host can't read from all file systems on the NAS
simultaneously, it's more likely a problem with that host than it is
with the NAS.

I'm getting very frustrated because I just want to rip it all apart
and do it right, but we don't have the time for that.  Grrr.

-- 
Thanks,
Paul

Here's my config:

org# Subject line prefix for reports.
mailto [EMAIL PROTECTED] # space separated list of recipients.

dumpuser backup   # user to run dumps under

maxdumps   16   # The maximum number of backups from a single host\
# that Amanda will attempt to run in parallel.

inparallel 32   # maximum dumpers that will run in parallel (max 63)
# within the constraints of network bandwidth
# and holding disk space available

displayunit g # Possible values: k|m|g|t
# Default: k. 
# The unit used to print many numbers.
# k=kilo, m=mega, g=giga, t=tera

netusage  1024 mbps # maximum net bandwidth for Amanda, in KB per sec


dumpcycle7  # the number of days in the normal dump cycle
runtapes 4  # number of tapes to be used in a single run of amdump

tapecycle   10 tapes# the number of tapes in rotation
# dumpcycle * runtapes * 6

bumpsize20 Gb   # minimum savings (threshold) to bump level 1 - 2
bumppercent  0  # minimum savings (threshold) to bump level 1 - 2
bumpdays 1  # minimum days at each level
bumpmult 1.5# threshold = bumpsize * bumpmult^(level-1)

etimeout  10800  # number of seconds per filesystem for estimates.
dtimeout  7200  # number of idle seconds before a dump is aborted.
ctimeout30  # maximum number of seconds that amcheck waits
# for each client host
usetimestamps true 
labelstr ^S[0-9][0-9]-T[0-9][0-9]$

tapebufs 40 # A positive integer telling taper how many
# 32k buffers to allocate.  WARNING! If this
# is set too high, taper will not be able to
# allocate the memory and will die.  The
# default is 20 (640k).

tpchanger chg-zd-mtx  # the tape-changer glue script
tapedev /dev/nst1 # the no-rewind tape device to be used
changerfile /etc/amanda/offsite/overland-mtx
changerdev /dev/sg1

maxdumpsize -1   # Maximum number of bytes the planner will
 # schedule for a run 
 # (default: runtapes * tape_length).

amrecover_do_fsf yes # amrecover will call amrestore with the
 # -f flag for faster positioning of the tape.
amrecover_check_label yes# amrecover will call amrestore with the
 # -l flag to check the label.
amrecover_changer changer  # amrecover will use the changer if you restore
 # from this device: amrecover -d changer

holdingdisk hd1 {
comment main holding disk
directory /backups/amanda/offsite # where the holding disk is
use 1700 Gb # how much space can we use on it
chunksize 1 Gb
}

reserve 0 # Don't 

amanda not dumping in parallel?

2007-10-02 Thread Paul Lussier

Hi all,

I recently changed my disklist such that all DLEs which pertain to my
NFS appliance have a -1 spindle entry.  My understanding of the man
page was that a -1 spindle setting for a set of DLEs on the same host
means they would be backed up in parallel.

I've checked with amadmin that the configuration change for these DLEs
reflect a -1 spindle, and I have 'inparallel' set to 16.  Yet when I
look at the process table for the host in question, amstatus tells me
that all the DLEs are 'getting estimate', yet there's only single tar
process running for the estimate phase.

Did I do something wrong, or are estimates run sequentially and only
once the estimates are in are the actual dumps performed in parallel?
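For reference, the disklist entries in question look roughly like this
(the dumptype name 'nfs-tar' is hypothetical; the fourth column is the
spindle):

```
amanda2  /nfs/user/eh  nfs-tar  -1
amanda2  /nfs/user/il  nfs-tar  -1
amanda2  /nfs/user/mp  nfs-tar  -1
```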

Thanks.

-- 
Thanks,
Paul


Backing up a NAS in a timely fashion

2007-09-20 Thread Paul Lussier

Hi folks,

I have an ONStor NAS (Bobcat) with about 5.5TB of usable space
(currently we're at 4.6 used).  I'm looking for suggestions on how to
back this up efficiently with AMANDA.

We've been using AMANDA for forever, but the time to back up this data
is growing and we're trying to figure out how to keep the times down.

We have the data currently split up into 19 different DLEs and each
are backed up using GNU tar.  We've been backing them up sequentially
(i.e. all on the same spindle) under the assumption that, since it's a
RAID 5 set, trying to do them in parallel is just going to thrash all
the disks.  Is this reasonable?  Or should we be able to back these up
in parallel.

Another thought was to have multiple backup clients for the NAS where
each client would be responsible for backing up some subset of the 19
DLEs.  The thought here was to distribute the compute power around
rather than having just one client responsible for everything.

There is talk about going away from AMANDA for this one server to
something which supports NDMP and can dump directly to our LTO3 drive
as well, but that will cost in time, money, and training.  If there's
a way to continue doing this with AMANDA, we'd like to pursue that
route.

Any help or guidance in this area would be appreciated.

-- 
Thanks,
Paul


amandad keeps dying on me...

2007-09-05 Thread Paul Lussier

Hi all,

I'm using Debian/stable, amanda 2.5.1p1 (note 2.5.1, NOT 2.5.2).

For some reason amandad keeps dying on me.  I can't find any reason in
any of my logs for this.  Currently, I still have the following
processes running:

  amanda2:/var/log/amanda/amandad# ps -ef |grep backup
  backup   25763  9089  0 05:31 pts/100:00:00 /bin/sh /usr/sbin/amdump 
offsite amanda2
  backup   25772 25763  0 05:31 pts/100:00:00 /usr/lib/amanda/planner 
offsite amanda2
  backup   25773 25763  0 05:31 pts/100:00:00 /usr/lib/amanda/driver 
offsite amanda2
  backup   25774 25773  0 05:31 pts/100:00:00 taper offsite
  backup   25775 25773  0 05:31 pts/100:00:00 dumper0 offsite
  backup   25776 25773  0 05:31 pts/100:00:00 dumper1 offsite
  backup   25777 25773  0 05:31 pts/100:00:00 dumper2 offsite
  backup   25778 25773  0 05:31 pts/100:00:00 dumper3 offsite
  backup   25779 25773  0 05:31 pts/100:00:00 dumper4 offsite
  backup   25780 25773  0 05:31 pts/100:00:00 dumper5 offsite
  backup   25781 25773  0 05:31 pts/100:00:00 dumper6 offsite
  backup   25782 25773  0 05:31 pts/100:00:00 dumper7 offsite
  backup   25783 25773  0 05:31 pts/100:00:00 dumper8 offsite
  backup   25784 25773  0 05:31 pts/100:00:00 dumper9 offsite
  backup   25785 25773  0 05:31 pts/100:00:00 dumper10 offsite
  backup   25786 25773  0 05:31 pts/100:00:00 dumper11 offsite
  backup   25787 25773  0 05:31 pts/100:00:00 dumper12 offsite
  backup   25788 25773  0 05:31 pts/100:00:00 dumper13 offsite
  backup   25789 25773  0 05:31 pts/100:00:00 dumper14 offsite
  backup   25790 25773  0 05:31 pts/100:00:00 dumper15 offsite
  backup   25791 25774  0 05:31 pts/100:00:00 taper offsite
  backup   29382 29381  0 12:47 pts/300:00:00 -sh

The client in question is 'amanda2', which is NFS mounting several
file systems from an OnStor NFS server.

amstatus reports:

   Using /var/log/amanda/offsite/amdump.1 from Wed Sep  5 05:31:03 EDT 2007

   amanda2:/00g waiting to flush
   amanda2:/00g estimate done
   amanda2:/home02g waiting to flush
   amanda2:/home02g estimate done
   amanda2:/nfs00g waiting to flush
   amanda2:/nfs00g estimate done
   amanda2:/nfs/RT 1   39g waiting to flush
   amanda2:/nfs/RT 0  582g estimate done
   amanda2:/nfs/archive0   11g waiting to flush
   amanda2:/nfs/archive0   11g estimate done
   amanda2:/nfs/backups00g waiting to flush
   amanda2:/nfs/backups00g estimate done
   amanda2:/nfs/builds 15g waiting to flush
   amanda2:/nfs/builds 0  117g estimate done
   amanda2:/nfs/debian 0   22g waiting to flush
   amanda2:/nfs/debian 0   22g estimate done
   amanda2:/nfs/patent 01g waiting to flush
   amanda2:/nfs/patent 01g estimate done
   amanda2:/nfs/release0  236g estimate done
   amanda2:/nfs/software   0   24g estimate done
   amanda2:/nfs/system 0   10g estimate done
   amanda2:/nfs/user   00g estimate done
   amanda2:/nfs/user/ad0   74g partial estimate done
   amanda2:/nfs/user/assar getting estimate
   amanda2:/nfs/user/ehgetting estimate
   amanda2:/nfs/user/ilgetting estimate
   amanda2:/nfs/user/mpgetting estimate
   amanda2:/nfs/user/qtgetting estimate
   amanda2:/nfs/user/uzgetting estimate
   amanda2:/usr 00g waiting to flush
   amanda2:/usr 00g estimate done
   amanda2:/var 01g waiting to flush
   amanda2:/var 01g estimate done

   SUMMARY  part  real  estimated
  size   size
   partition   :  33
   estimated   :  16 1085g
   flush   :  1185g
   failed  :   00g   (  0.00%)
   wait for dumping:   00g   (  0.00%)
   dumping to tape :   00g   (  0.00%)
   dumping :   0 0g 0g (  0.00%) (  0.00%)
   dumped  :   0 0g 0g (  0.00%) (  0.00%)
   wait for writing:   0 0g 0g (  0.00%) (  0.00%)
   wait to flush   :  1185g85g (100.00%) (  0.00%)
   writing to tape :   0 0g 0g (  0.00%) (  0.00%)
   failed to tape  :   0 0g 0g (  0.00%) (  0.00%)
   taped   :   0 0g 0g (  0.00%) (  0.00%)
   16 dumpers idle : not-idle
   taper idle
   network free kps:   1048576
   holding space   :  1700g (100.00%)
0 dumpers busy :  0:00:00  (  0.00%)

But there is no estimate being done.  There is no tar process running,
amandad is not running.  The last thing 

Re: etimeout ignored?

2007-09-04 Thread Paul Lussier
Ralf Auer [EMAIL PROTECTED] writes:

 Hi everybody,

 I have a little problem with the 'etimeout' setting in my
 amanda.conf.  I have set 'etimeout' to 900. To my understanding this
 makes Amanda wait for 15 minutes per DLE and client, then a timeout
 should occur.

 For some reason this value seems to be ignored here, because for my
 busy clients Amanda still waits for the estimates after several
 hours! The clients have only two DLEs, so I would expect her to wait
 at most 30 minutes, not more.

 I'm using 2.5.2p1 version, everything else is doing fine, nothing
 special to be found in the log-files.

 Any ideas what I could have done wrong?

I'm seeing something similar.  I have a host which routinely gets no
estimates for several of the (NFS mounted) file systems.  I've got my
etimeout set way high (10800 sec, or 3 hours) because the file systems
are really big.

I'm using 2.5.1p1-2.1 from the debian packages.  The server being
backed up is (unfortunately) my backup server itself.  There are 22
file systems on this host, 18 of which are NFS mounts from our NFS
appliance (an OnStor). Of those 18, 6 never complete the estimate.

I've moved etimeout from as low as 3600 to as high as 10800, thinking
it might be timing out too soon (these are, in some cases, 100+ GB
file systems).  However, this latest run was kicked off Sat Sep 1
21:49:31 EDT 2007, which at this point is over 84 hours ago.  Even if
the timeout for the host were set to (etimeout * Num_hosts_DLEs),
that's still only 66 hours.  My backups haven't even begun dumping
yet, because amanda is still waiting for estimates from those 6 file
systems.  That estimate is *never* going to happen, given that amandad
isn't currently running, nor is there an estimate process (tar to
/dev/null) running on the host in question.

Can someone point me at which logs I should be looking at to find out
why amandad died?

And can someone tell me why these file systems are not timing out when
they should?

--
Thanks,
Paul


Re: etimeout ignored?

2007-09-04 Thread Paul Lussier

 Paul Bijnens schrieb:

 How did you come to the conclusion that the server still waits for the
 estimates?  amstatus?  which output?  or just the fact that amanda is
 still running? or...

Ralf Auer [EMAIL PROTECTED] writes:

 An 'amgetconf NewSetup etimeout' returns '900'. So, it seems to be
 configured correctly.

Interesting, I'm running 2.5.1p1-2.1 from Debian packages, and I get:

  $ amgetconf offsite etimeout
  amgetconf: getconf_str: np is not a CONFTYPE_STRING|CONFTYPE_IDENT: 26

I'm not entirely sure what this means.  However, I'm certainly taking
*this* as something bad:

   $ amadmin offsite config| grep -i etimeout
   ETIMEOUT  220000
   $ grep etimeout offsite/amanda.conf
   etimeout 10800  # number of seconds per filesystem for estimates.

So, for some reason, amanda is running with an expectation of 61.11
hours of timeout *per* DLE?  No wonder this is taking forever!

 I came to the conclusion that my server still waits for the estimate
 by issuing the 'amstatus' command. It told me, that the server is
 still 'waiting for estimate' for one host, all other hosts were in
 'estimate done' state.

Which is exactly what I did.  I also looked at the running processes
(like planner) and straced them to verify they were in a holding
pattern.

 And last night I watched my backup running

I'm still watching mine, it's much like watching the grass grow.
Except, the lawn makes progress and requires me to do something
occasionally :)

 Another frequent monday-morning-no-coffee-yet problem encountered is
 that you're looking at the wrong config file, or etimeout appears twice
 in the config file.  Verify with:

   amgetconf daily etimeout

 You can also set etimeout to a negative value, to avoid the
 multiplication of the number of DLE's by the etimeout value.
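For reference, that negative-value suggestion would look something like this in amanda.conf (a sketch only; the value is illustrative):

```
# etimeout < 0: the absolute value is used as the total estimate
# timeout for the whole host, instead of being multiplied per DLE.
etimeout -10800    # at most 3 hours of estimate time per host, total
```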

I'll try the negative value, but I'm *very* curious to know where
amanda got an etimeout value of 220000?  I've checked my other
timeouts as well, and they're also completely wrong:

  $ grep -i timeout offsite/amanda.conf
  etimeout 10800  # number of seconds per filesystem for estimates.
  dtimeout  1800  # number of idle seconds before a dump is aborted.
  ctimeout30  # maximum number of seconds that amcheck waits
  $ amadmin offsite config| grep -i timeout
  ETIMEOUT  220000
  DTIMEOUT  193000
  CTIMEOUT  190030

I just did a quick test and set my etimeout in the amanda.conf file to
30.  I then did:

  $ amadmin offsite config| grep -i timeout
  ETIMEOUT  190030
  DTIMEOUT  193000
  CTIMEOUT  190030

Why is amanda adding to the timeout?  And by what algorithm?  It adds
190000 to the e/ctimeouts, but 191200 to the dtimeout?  Yet when
etimeout is set to 10800, it adds 209200?  These numbers don't seem
like unix time values, either...
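A quick check (the 220000 etimeout figure is inferred from the "61.11 hours" remark above) shows the inflation is a constant offset per parameter rather than a multiplication:

```python
# Subtract each configured value from what amadmin reports; the
# leftovers look like fixed (garbage?) offsets, not a multiplier.
configured = {"etimeout": 10800, "dtimeout": 1800, "ctimeout": 30}
reported   = {"etimeout": 220000, "dtimeout": 193000, "ctimeout": 190030}

offsets = {k: reported[k] - configured[k] for k in configured}
print(offsets)  # {'etimeout': 209200, 'dtimeout': 191200, 'ctimeout': 190000}
```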

Something seems very broken...

I'll be glad to add my entire amanda.conf setup if anyone thinks it's useful.

--
Thanks,
Paul


Re: etimeout ignored?

2007-09-04 Thread Paul Lussier
Dustin J. Mitchell [EMAIL PROTECTED] writes:

 Does anyone else see this problem?  Paul, I'm not a Debian user -- is
 there a quick way to see what patches the distro has applied?

Yeah, I can install the source package, which will separate out the
upstream source from the patches.

I'll take a look at that and see if I can find something.

Thanks for the confirmation that something is officially horked :)

-- 
Thanks,
Paul


Re: multiple drives, one config?

2007-08-23 Thread Paul Lussier
Marc Muehlfeld [EMAIL PROTECTED] writes:

 hi,

 Paul Lussier schrieb:
 Is it possible to have amanda use both of these drives simultaneously
 so my writing-to-tape phase goes faster?

 Currently you can't write to both drives simultaneously from a single
 configuration.

That's a shame.

 Chg-multi is something that may help you. But it doesn't work with
 different sized tapedrives, because there is only one tapetype
 option you can specify.
 http://wiki.zmanda.com/index.php/Changers#chg-multi_.28formerly_chg-generic.29

Different sized drives isn't an issue, both are LTO3.  I'll take a
look at this, but I'm not sure it's a good fit.

 Alternatively, can I write the same data to both drives such that I
 create a duplicate tape set (one to keep on site, one to send off ?).

 You can setup RAIT:
 http://www.amanda.org/docs/rait.html
 http://wiki.zmanda.com/index.php/How_To:Set_Up_RAIT_(Redundant_Array_of_Independent_Tapes)

 But currently only 3 or 5 drive sets are supported.

I'm not looking to do RAIT, unless there's a RAIT-mirror option.

I could, I suppose, have a second set of tapes in the library, and
when amdump finishes, run dd from one tape to another, but it would
seem to be preferable to write the data once to both drives
simultaneously (i.e. mirrored tapes).  That doesn't sound possible
though.

--
Thanks,
Paul


multiple drives, one config?

2007-08-22 Thread Paul Lussier

Hi all,

I've just installed a second drive into my Overland ARCVault tape
library.  I'm using chg-zd-mtx, and can manipulate tapes in to/out of
both drives.

Is it possible to have amanda use both of these drives simultaneously
so my writing-to-tape phase goes faster?

Alternatively, can I write the same data to both drives such that I
create a duplicate tape set (one to keep on site, one to send off ?).

--
Thanks,
Paul


amverify reports problems

2007-08-21 Thread Paul Lussier

Hi folks,

I just ran amverify on my most recent amanda run from this past
weekend and saw this in the amverify report:
  ...
  ** Error detected ()
  amrestore: missing file header block
  amrestore: WARNING: not at start of tape, file numbers will be offset
  amrestore: error reading file header: Input/output error
  amrestore: could not fsf /dev/nst0: Input/output error
  ** No header
  0+0 in
  0+0 out
  [ 5 more of these elided ]
  Too many errors.
  Loading next slot...


This happened on at least 2 tapes.  Further, when it got to checking
that all the chunks were there, I see this:

  Split dump amanda2._RT.20070817.0 should have 54 total pieces
  Spanning chunk part 41 is missing!
  Spanning chunk part 42 is missing!
  Spanning chunk part 43 is missing!
  Spanning chunk part 44 is missing!
  Spanning chunk part 45 is missing!

So, I'm fairly confident that this file system was not properly backed
up.  More disturbing to me, though, are the 2 or more tapes which
reported "Too many errors."  Is this a problem with the tape, the
drive, or amanda?  And how do I prove which one it is?  I'm leaning
towards the tapes, but haven't ruled out the drive.

I'm currently using Fuji LTO3 tapes with an Overland ARCVault tape
library changer containing an HP LTO3 drive.  The tapes I'm writing to
are brand new, just out of the box, never before been used.

Any help or ideas would greatly be appreciated.
--
Thanks,
Paul


Re: OT: LTO Barcodes

2007-06-15 Thread Paul Lussier
Harald Schioeberg [EMAIL PROTECTED] writes:

 Hi,

 definately off-topic, but i have written a script to generate sheets
 with LTO barcode labels.

Hi, thanks a lot for sharing this with us, I for one am quite grateful.

One question I have, though: what barcode symbology is supported?
I have 2 changers, one of which uses Code 39, and the other Code 128.

Thanks again!
--
Thanks,
Paul


mtx reports I/E slot incorrectly

2007-06-14 Thread Paul Lussier

Hi all,

I have an Overland ARCVault 24 LTO3 tape changer library which can
reserve either 1 or 12 of its slots as Import/Export slots.

When I reserve 12 slots, the number of slots available, as seen by mtx
is 12, which makes sense.  When I reserve only a single slot for I/E,
the changer uses slot 1, but mtx reports slot 24:

$ mtx -f /dev/sg1 status
 Storage Changer /dev/sg1:1 Drives, 24 Slots ( 1 Import/Export )
Data Transfer Element 0:Empty
 Storage Element 1:Full :VolumeTag=004038L2
   [...]
 Storage Element 23:Empty
 Storage Element 24 IMPORT/EXPORT:Empty

Is this a matter of mtx assuming that cleaning and I/E slots are
always the last slot by convention?  Or is there a way to configure
MTX to report this correctly?

Or, is this a bug with mtx, and should I report it directly to the mtx list?

Thanks.
--
Seeya,
Paul


Parallel dumps of a single file system?

2006-05-23 Thread Paul Lussier
Hi all,

I have a 1 TB RAID5 array which I inherited.  The previous admin
configured it to be a single file system as well.  The disklist I have
set up currently splits this file system up into multiple DLEs for
backup purposes and dumps them using gtar.

In the past, on systems with multiple partitions, I would configure
all file systems on different physical drives to be backed up in
parallel.  Since this system has but 1 file system, I've been backing
them up one at a time.

But since this is a RAID array, it really wouldn't matter whether this
were many file systems or 1, since everything is RAIDed out across all
disks, would it?

So, my question is this: Am I doing the right thing by dumping these
DLEs serially, or can I dump them in parallel?

For example, I have my user directories split out like this in the
disklist file:

  space-monster:/u1/user
  space-monster:/u1/user/ad
  space-monster:/u1/user/eh
  space-monster:/u1/user/il
  space-monster:/u1/user/mp
  space-monster:/u1/user/qt
  space-monster:/u1/user/uz

So, does it matter whether I have a RAID array with 1 or 23 file
systems on it?  Am I going about this the correct way, or can I use
some parallelism?

Thanks,
--
Paul


Re: Parallel dumps of a single file system?

2006-05-23 Thread Paul Lussier
On 5/23/06, Andreas Hallmann [EMAIL PROTECTED] wrote:

 Since in the raid, blocks are spread sequentially (w.r.t. the file)
 among most (raid5) of the available platters, it will behave more
 like a single spindle with more layers.

  So, my question is this: Am I doing the right thing by dumping
  these DLEs serially, or can I dump them in parallel?

 Dumping the DLEs sequentially is your only option to keep spindle
 movements low.  So you're doing it the way I would do it.  Anything
 else should reduce your throughput.

Does that imply that if this RAID set were split into multiple file
systems, I'd still be better off dumping them one at a time?

I'm looking for ways to speed up my daily incremental backups.  We may
well be purchasing a new RAID array in the near future, which may
allow me to migrate the existing data to it and split it up into
multiple file systems, then go back and re-format the old one.

Thanks,
--
Paul


Re: tuning the estimate phase?

2006-05-05 Thread Paul Lussier
On 5/2/06, Paul Bijnens [EMAIL PROTECTED] wrote:

 The client runs a gtar --sparse --totals -f /dev/null --otheropts
 No piping through gzip, no transfer over the network.  Gnutar itself
 has special code for handling output to /dev/null, and doesn't even
 read the files in that case (unless the stat() indicates it is a
 sparse file, for which it depends on the version of gtar how it
 handles that -- some versions read sparse files).  Doing a stat() for
 each file/directory of the filesystem can be stressing the server,
 yes indeed.

 Sidemark: because the output is not piped through gzip, Amanda can
 only guess how much it will compress.  Therefore it builds up a
 history of compression rates for each DLE.  The default assumed
 compression rate for a new DLE (without history) can be tuned by the
 amanda.conf parameter comprate.

Since I'm using no-compress (I'm using hw compression on the drive),
does amanda ignore even the default compression rate in this case?

  Since the entire array is a single file system, even the backup of
  individual hierarchies seems to result in this blocking.

  Does this sound like a reasonable theory? If so, is there a way I
  can tune the estimation to be nicer ?

 Avoid running multiple gtar processes at the same time by specifying
 the spindle in the disklist.

I am specifying spindle in the disklist, and have my user partition
specified thusly:

  space-monster /u1/user/ad /u1/user {
  user-high-tar
  include ./[a-d]*
  include append ./[A-D]*
  } 1

Other partitions follow for e-h, i-l, etc., all with a spindle of 1.

 Are you sure it happens during estimate?

Yes.  We've been keeping logs of when our NFS clients experience the
timeouts and correlated them to the times when the estimates are
running on our NFS server.  All the NFS timeouts occur during the
estimates, and as far as I know, we never see timeouts during the
actual dump across the network back to the amanda host.

 Another possibility is to revert to faster/less accurate estimate
 strategies: calcsize is faster (but if stat() is indeed the problem,
 this will not help much).  There is also an only statistically based
 estimate, see:
 http://wiki.zmanda.com/index.php/Amdump:_results_missing#Timeout_during_estimate.3F

Thanks, I'll take a look.
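For anyone who wants to reproduce that estimate-phase load by hand, here's a rough sketch; the flags are illustrative, not the exact options amdump passes:

```shell
# Build a small scratch tree, then run a gtar "estimate" against it.
# Writing to /dev/null means only stat()/read traffic hits the disk --
# the same load the NFS clients end up competing with.
scratch=$(mktemp -d)
dd if=/dev/zero of="$scratch/data" bs=1k count=64 2>/dev/null
tar --totals -cf /dev/null -C "$scratch" . 2>&1
rm -rf "$scratch"
```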


tuning the estimate phase?

2006-05-02 Thread Paul Lussier
Hi all,

Is it possible to tune the estimate phase of a backup run?  We appear
to be getting NFS timeouts, experienced by our NFS clients, during the
estimate phase when the NFS server is getting backed up.

The going theory is this: during the estimation phase, amanda is
doing a gtar | gzip to /dev/null.  And, as we all know, the bandwidth
of /dev/null is damn near impossible to beat :)

During the actual dumping of data, the gtar|gzip output is getting
sent back across the wire, and therefore gtar gets constrained by the
bandwidth of the network, which even at GigE is significantly lower
than that of /dev/null.  As a result, during the estimation phase,
amanda is taking over the disk IO to the RAID array and the NFS
daemons are competing for r/w access.

Since the entire array is a single file system, even the backup of
individual hierarchies seems to result in this blocking.

Does this sound like a reasonable theory?  If so, is there a way I can
tune the estimation to be nicer ?

Any pointers, comments, suggestions, etc. welcome.
--
Seeya,
Paul


Re: using gnutar and file permissions

2003-11-13 Thread Paul Lussier

In a message dated: Thu, 13 Nov 2003 14:13:04 EST
Joshua Baker-LePain said:

On Thu, 13 Nov 2003 at 2:07pm, [EMAIL PROTECTED] wrote

 I'd prefer to use tar to perform backups, primarily for portability 
 of restores.  However, amanda is running as user 'backup' which 
 doesn't seem to have read permissions in many places.

Amanda calls tar via the setuid root 'runtar' wrapper.  So when it runs 
the backups, it can get everything.  If amcheck is complaining, it's 
because selfcheck doesn't run setuid root.

Nope, amcheck runs just fine, but the dump report states that the
backup of that 'drive' FAILED.

I assumed that it was a permissions problem, but since runtar is suid
(which I checked after you mentioned it), that's not the issue.

I'll have to dig into the *.debug files more. Any ideas what could be 
the issue?

Thanks,

-- 
Seeya,
Paul

GPG Key fingerprint = 1660 FECC 5D21 D286 F853  E808 BB07 9239 53F1 28EE

 If you're not having fun, you're not doing it right!




Re: Who uses Amanda?

2002-06-23 Thread Paul Lussier


On Fri, 25 Jan 2002, KEVIN ZEMBOWER wrote:

- (This might seem like a stupid question to this group, but) I'm being
- challenged by the folks who can't get my firewall setup to work with
- Amanda that I should adopt a more industry-standard backup product.
- Hogwash. But, I would like to at least offer an answer.
-
- Anyone have any guesses how many institutions and individuals are using
- amanda?
-
- Anyone know, or want to self-disclose, some noteworthy institutions
- using amanda? If you think this would clog up the list too badly, email
- me privately at [EMAIL PROTECTED], and after a week or so, I'll
- compile a list and post it to the email list.

I've been using Amanda for years.  Currently, I'm the Senior SysAdmin 
for Mission Critical Linux (http://www.missioncriticallinux.com), and 
I've been using Amanda here with an HP SureStore 818 DLT changer 
backing up Linux systems on both sides of our firewall without a 
problem.

Before coming to MCLX, I was at a small division of Bay Networks 
(bought by Nortel, then sold to Arris Interactive) which also used 
Amanda.  I also used Amanda at 3Com and Raytheon.

So, there you have it, there's a list of companies that did use 
Amanda for backups at least while I was there.  I have no knowledge 
of what any of them use now, however. 

Hope that helps :)

Seeya,
Paul
-- 
Paul Lussier(877) 625-4689/(978) 606-0256
Senior Systems and Network Engineer 100 Foot of John Street
Mission Critical Linux  Lowell, Ma, 01852




Re: How to use xfsdump with Amanda?

2002-02-18 Thread Paul Lussier


In a message dated: Thu, 03 Jan 2002 11:08:35 +1100
Ben Wong said:

Hi,

I have several partitions using XFS and would like to ask how I can use
xfsdump with Amanda?

The backup server runs on Debian Linux and the debian packages amanda-server
amanda-common and amanda-client are installed.

You should be able to just configure, compile, and install the amanda 
client sw on that system and it should work.  The ./configure script 
should look for xfsdump and note its location for use with xfs 
partitions.

I didn't have to do anything special for my system running xfs under 
Linux, other than create a symlink to where amanda was looking for 
xfsdump (IRIX places the binary in a different location than it ends 
up under Linux, and configure didn't detect it).  But that's since 
been fixed IIRC.
-- 

Seeya,
Paul


  God Bless America!

...we don't need to be perfect to be the best around,
and we never stop trying to be better. 
   Tom Clancy, The Bear and The Dragon






Problems with dumps

2002-02-18 Thread Paul Lussier


Hi all,

I'm having trouble getting one of my clients backed up.  There are 13
file systems on the client which need to be dumped, totalling about 
12GB of data. 

I'm getting error messages like the following for several of the file 
systems:

hacluster1 /dev/sda12 lev 0 FAILED [dumps too big, but cannot\
incremental dump skip-incr disk]

I know that amanda seems to think that the dumps are too big, and 
failing these file systems because I've dis-allowed incremental 
backups.  However, I've also specified the use of 2 tapes for the 
backups, and amanda doesn't seem to be filling both:

Estimate Time (hrs:min)    0:10
Run Time (hrs:min)         9:26
Dump Time (hrs:min)        7:51       7:49       0:02
Output Size (meg)       54711.4    54709.1        2.2
Original Size (meg)     89777.2    89755.3       22.0
Avg Compressed Size (%)    60.9       61.0       10.2   (level:#disks ...)
Filesystems Dumped           33         28          5   (1:5)
Avg Dump Rate (k/s)      1981.3     1991.6       15.7
Tape Time (hrs:min)        6:50       6:50       0:00
Tape Size (meg)         54712.4    54710.0        2.4
Tape Used (%)             156.3      156.3        0.0   (level:#disks ...)
Filesystems Taped            33         28          5   (1:5)

From the 'Tape Used' it appears that amanda is only filling 50% of 
the second tape.  The 'Tape Size' seems to indicate I'm only filling 
about 55GB worth of tape.  I'm using a DLT7000 drive with DLT4 tapes.
I should be able to get 70GBs worth of data across 2 tapes, no? So, I 
should be able to get another 15GBs onto the second tape, by my 
calculations, which is fine, since there's less than 12GB currently 
failing.

Any ideas?

Thanks,
-- 

Seeya,
Paul


  God Bless America!

...we don't need to be perfect to be the best around,
and we never stop trying to be better. 
   Tom Clancy, The Bear and The Dragon






Re: Tape eject

2002-01-16 Thread Paul Lussier


In a message dated: Wed, 16 Jan 2002 13:10:28 EST
Steve said:

Any pointers would be appreciated.


man amtape.





-- 

Seeya,
Paul


  God Bless America!

 If you're not having fun, you're not doing it right!

...we don't need to be perfect to be the best around,
and we never stop trying to be better. 
   Tom Clancy, The Bear and The Dragon





Re: XFS, Linux, and Amanda

2002-01-15 Thread Paul Lussier


In a message dated: Mon, 14 Jan 2002 18:41:04 EST
Brandon D. Valentine said:

On Mon, 14 Jan 2002, Joshua Baker-LePain wrote:

Hmmm, you did 'rpm -e' the RPM version, right?  Pre-build amanda=bad.

Word.  Especially the moronic way in which RedHat has decided to build it.

I'm glad I'm not the only one who thought this!  I've had arguments 
with others over this issue, where they feel that you should always 
use the pre-build package if there's one available!

-- 

Seeya,
Paul


  God Bless America!

 If you're not having fun, you're not doing it right!

...we don't need to be perfect to be the best around,
and we never stop trying to be better. 
   Tom Clancy, The Bear and The Dragon





Finding tapes

2002-01-11 Thread Paul Lussier


Hi all,

I've been asked to locate a set of tapes based on dates.  Any idea 
how I do this?  Basically I just need to search the date stamps for 
each tape and tell my boss which tapes contain backups for the 
specified period of time.

I looked at the amadmin command, but the sub-commands all seem to 
want a hostname.  I need the information for *all* systems.

Basically, if the tape was written to during the specified range, I 
need to know.

Any ideas?

Thanks,


-- 

Seeya,
Paul


  God Bless America!

 If you're not having fun, you're not doing it right!

...we don't need to be perfect to be the best around,
and we never stop trying to be better. 
   Tom Clancy, The Bear and The Dragon





Re: S.O.S.

2002-01-11 Thread Paul Lussier


In a message dated: Fri, 11 Jan 2002 13:05:58 -0200
Túlio Machado de Faria said:

Ok,

I modified my amanda.conf with:

runtapes  2

but amcheck does not check my second tape unit.
Where is the problem?

Are you saying you want to backup to 2 tapes or to 2 different tape 
drives?
-- 

Seeya,
Paul


  God Bless America!

 If you're not having fun, you're not doing it right!

...we don't need to be perfect to be the best around,
and we never stop trying to be better. 
   Tom Clancy, The Bear and The Dragon





Re: S.O.S.

2002-01-11 Thread Paul Lussier


In a message dated: Fri, 11 Jan 2002 14:24:01 -0200
Túlio Machado de Faria said:

2 different tape drives.

I don't think you can do this easily.  You'll need to set up 
different amanda configurations each pointing to a different drive 
and split the total number of file systems to back up between the 2 
different 'disklist' files for the 2 different configurations.
-- 

Seeya,
Paul


  God Bless America!

 If you're not having fun, you're not doing it right!

...we don't need to be perfect to be the best around,
and we never stop trying to be better. 
   Tom Clancy, The Bear and The Dragon





Re: amandahostauth failed

2002-01-08 Thread Paul Lussier


In a message dated: Tue, 08 Jan 2002 09:43:10 EST
Charles Farinella said:

As suggested we have fixed our .amandahosts file to reflect the fully
qualified hostname.  Now I get the following:

Amanda Backup Client Hosts Check

ERROR: localhost.localdomain: [could not access /dev/rd/c0d0p7
(/dev/rd/c0d0p7): Permission denied]

This is my first attempt at installing Amanda, so I am unsure of what this
means.  Whose permissions are denied?  The amanda user has access to the
devices in question, the program was built with proper (I think)
configuration, the amcheck script has the following permissions:

Charlie,

Can you provide the 'ls -l' output for those devices?

/dev/rd/c0d0p7
/dev/rd/c0d0p2
/dev/rd/c0d0p6
/dev/rd/c0d0p1
/dev/rd/c0d0p5

Also, can you provide the 'ls -l' output for the /etc/dumpdates file 
on the client in question.
-- 

Seeya,
Paul


  God Bless America!

...we don't need to be perfect to be the best around,
and we never stop trying to be better. 
   Tom Clancy, The Bear and The Dragon





estimates too big, backup times out

2001-12-21 Thread Paul Lussier


Hi all,

I have a system which I'm trying to back-up, but it's being stubborn.

It appears that the amount of data being sent for the estimate is too 
big.  I have 10 file systems on this system which need to be backed 
up.  I only back this system up once a week, and therefore only 
require level 0 dumps to be done.  However, when I watch the .debug
files, it appears that it's attempting to send estimates for both a 
level 0 and 1 for each file systems.

I have 'dumpcycle 0' set for these file systems, is there something 
else I'm missing which would suppress the estimates for a level 1?

Thanks,


-- 

Seeya,
Paul


  God Bless America!

...we don't need to be perfect to be the best around,
and we never stop trying to be better. 
   Tom Clancy, The Bear and The Dragon





Re: estimates too big, backup times out

2001-12-21 Thread Paul Lussier


In a message dated: Fri, 21 Dec 2001 10:16:17 EST
Jean-Louis Martineau said:

It's doing a level 1 estimate in case it will go into degraded mode.

Look for 'strategy' in `man amanda`

A, okay.  It looks like I need to use both:

strategy noinc
no-incr yes

in addition to:

dumpcycle 0

Thanks for the pointer, it *appears* to be working.  Of course, now 
that I said that, something else will go wrong :)
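For the archives, a level-0-only dumptype sketch combining those settings (the dumptype name is invented for illustration):

```
define dumptype weekly-full {
    comment "full dumps only, no incremental estimates"
    strategy noinc      # never plan incrementals
    dumpcycle 0         # a full backup every run
}
```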

Thanks again!
-- 

Seeya,
Paul


  God Bless America!

...we don't need to be perfect to be the best around,
and we never stop trying to be better. 
   Tom Clancy, The Bear and The Dragon





timeout waiting for ack

2001-12-18 Thread Paul Lussier


I'm having problems backing up one particular host on my network.  
amandad.debug reports:

amandad: waiting for ack: timeout, retrying
amandad: waiting for ack: timeout, retrying
amandad: waiting for ack: timeout, retrying
amandad: waiting for ack: timeout, retrying
amandad: waiting for ack: timeout, giving up!
amandad: pid 32317 finish time Tue Dec 18 10:57:51 2001

There is another host (theoretically) identically configured which backs up fine.
The 2 hosts in question are on my DMZ, protected by a firewall.  The 
rule sets for the 2 hosts are identical (I've been over them with the 
networking guy 3 times).  Yet one host backs up fine, the other does 
not.

Can someone point me in the right direction of what to look at?

Thanks,

-- 

Seeya,
Paul


  God Bless America!

...we don't need to be perfect to be the best around,
and we never stop trying to be better. 
   Tom Clancy, The Bear and The Dragon





Re: timeout waiting for ack

2001-12-18 Thread Paul Lussier


In a message dated: Tue, 18 Dec 2001 11:17:17 EST
Paul Lussier said:

I'm having problems backing up one particular host on my network.  
amandad.debug reports:

   amandad: waiting for ack: timeout, retrying
   amandad: waiting for ack: timeout, retrying
   amandad: waiting for ack: timeout, retrying
   amandad: waiting for ack: timeout, retrying
   amandad: waiting for ack: timeout, giving up!
   amandad: pid 32317 finish time Tue Dec 18 10:57:51 2001

It appears that problem is that there are simply too many file 
systems on this particular client.  I'm guessing the problem is that 
there's too much information to cram into a UDP packet, and that's 
why it's failing.

Anyone have any ideas as to how to work around this?  I kind of
have to back up the entire system all together, but I'd be interested 
in some creative ways to get all my data on the same tape set.
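For anyone hitting the same wall, a quick way to see how many DLEs each host contributes (and thus roughly how large the UDP request will be) is to count disklist entries per host. A sketch; the disklist path in the usage comment is an assumption for your config:

```shell
# Count disklist entries (DLEs) per host; a host with very many
# entries can overflow the single UDP request packet amandad receives.
count_dles() {
  awk '/^[[:space:]]*#/ { next }          # skip comment lines
       NF >= 2          { count[$1]++ }   # first field is the host
       END { for (h in count) print h, count[h] }' "$1"
}
# Example (path is an assumption -- point it at your config):
#   count_dles /usr/local/etc/amanda/weekly/disklist
```

A host that dominates the count is the one to split across configurations.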

Thanks,
-- 

Seeya,
Paul







Re: timeout waiting for ack

2001-12-18 Thread Paul Lussier


In a message dated: Tue, 18 Dec 2001 13:38:58 EST
Yura Pismerov said:

http://amanda.sourceforge.net/cgi-bin/fom?_highlightWords=firewallfile=16

Thanks, except it's not a firewall problem, nor was amcheck reporting 
errors.  

As I pointed out in a subsequent post/followup, the problem appears 
to be simply too many file systems on that one client.

Thanks anyway.
-- 

Seeya,
Paul







Re: dump fails with bread and lseek trouble

2001-12-14 Thread Paul Lussier


In a message dated: Fri, 14 Dec 2001 09:29:54 GMT
Thomas Robinson said:

Hi,

Has anyone seen or hear of this problem. 

Ayup, I get it all the time.  I have no idea what the problem is 
though.  It seems to come and go sporadically.

I've run e2fsck -c /dev/sda5 to no avail. I also upgraded the dump utility that
came with the standard Red Hat 7.1 install from dump-04b21 to dump-04b22. Any 
ideas what can cause this and how I might fix it?

No, but if anyone has any ideas, please post them to the list, I'll 
try anything :)

Thanks,
-- 

Seeya,
Paul







Re: Strange dump error

2001-11-27 Thread Paul Lussier


In a message dated: Mon, 26 Nov 2001 22:58:35 +0100
Bernhard R. Erdmann said:

Doublecheck /etc/fstab for this entry. Dump gets the corresponding block
device to the filesystem in the disklist by looking at /etc/fstab.


Hmmm, does that mean that if there's not an fstab entry that there 
will be problems?

Some of these file systems may or may not be physically present, 
since this is a cluster and they may have migrated to the other node.
Therefore the cluster software manages which filesystems are mounted 
at boot time, and as such, they are not entered into /etc/fstab.

Maybe I should re-write the disklist to contain the block devices 
instead of the filesystem names?
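To visualize that, a disklist keyed on block devices rather than mount points would look something like this (the host and dumptype names are made up for illustration):

```
# disklist -- block devices instead of mount-point names (sketch)
clusternode  /dev/sda5  comp-user
clusternode  /dev/sda6  comp-user
```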

-- 

Seeya,
Paul







Re: AMANDA WEBSITE ???

2001-10-25 Thread Paul Lussier


In a message dated: Wed, 24 Oct 2001 17:13:09 EDT
Mitch Collinsworth said:

On Wed, 24 Oct 2001, Rivera, Edwin wrote:

 we may have to reboot the internet for this.

Which of course means we first have to send a notice to all users...

We might want to think about doing this during off-hours as well.

Ahm, should we take a survey to figure out which timezone most users 
are in first?

Oh, we have a recent backup of the internet in case the reboot fails, 
right?
-- 

Seeya,
Paul







Re: FAQ o matic broken?

2001-10-17 Thread Paul Lussier


In a message dated: Tue, 16 Oct 2001 18:33:30 CDT
Marc W. Mengel said:

I'm asking around on amanda-hackers.  It looks like the cgi-bin
directory for the amanda pages on sourceforge has been deleted...

D'oh!  Were they using Amanda to back that system up ? :)
-- 

Seeya,
Paul







amandad and sendsize hanging on client

2001-09-28 Thread Paul Lussier


Hi all,

I recently added a new client to my backup rotation.  The first night 
all went well.  Last night however the client did not get backed up. 
According to the report, the client was off-line.

This morning, I got the amcheck error report stating:

ERROR: ebeneezer NAK: amandad busy

When I logged into ebeneezer, ps revealed that amandad and selfcheck 
were still running:

  # ps -auxw  |grep amanda
  amanda 2378 0.0 0.3 2176 860 ? S 11:49 0:00 amandad
  amanda 2379 0.0 0.3 1832 772 ? S 11:49 0:00 /usr/local/amanda/libexec/selfcheck

an strace -p of the amandad process reveals:

  # strace -p 2378
  select(6, [0 5], NULL, NULL, NULL unfinished ...

an strace of the selfcheck process hangs (i.e., it doesn't return anything)

The debug logs don't seem to reveal anything (I've attached them in 
case anyone else sees something I missed).

The client is a RH7.1 system running xinetd with user=amanda, 
group=disk.  Amanda is a member of both groups amanda and disk.

The server is a debian 2.2 system.

Both systems are running amanda 2.4.2p1

Thanks for any insight anyone can provide!

Seeya,
Paul





amandad: debug 1 pid 2604 ruid 200 euid 200 start time Fri Sep 28 13:55:49 2001
amandad: version 2.4.2p2
amandad: build: VERSION=Amanda-2.4.2p2
amandad:BUILT_DATE=Wed Sep 26 14:52:19 EDT 2001
amandad:BUILT_MACH=Linux ebeneezer.lowell.mclinux.com 2.4.3-12 #1 Fri Jun 8 
15:05:56 EDT 2001 i686 unknown
amandad:CC=gcc
amandad: paths: bindir=/usr/local/amanda/bin
amandad:sbindir=/usr/local/amanda/sbin
amandad:libexecdir=/usr/local/amanda/libexec
amandad:mandir=/usr/local/amanda/man AMANDA_TMPDIR=/tmp/amanda
amandad:AMANDA_DBGDIR=/tmp/amanda
amandad:CONFIG_DIR=/usr/local/amanda/etc/amanda
amandad:DEV_PREFIX=/dev/ RDEV_PREFIX=/dev/ DUMP=/sbin/dump
amandad:RESTORE=/sbin/restore SAMBA_CLIENT=/usr/bin/smbclient
amandad:GNUTAR=/bin/gtar COMPRESS_PATH=/bin/gzip
amandad:UNCOMPRESS_PATH=/bin/gzip MAILER=/usr/bin/Mail
amandad:listed_incr_dir=/usr/local/amanda/var/amanda/gnutar-lists
amandad: defs:  DEFAULT_SERVER=ebeneezer.lowell.mclinux.com
amandad:DEFAULT_CONFIG=DailySet1
amandad:DEFAULT_TAPE_SERVER=ebeneezer.lowell.mclinux.com
amandad:DEFAULT_TAPE_DEVICE=/dev/null HAVE_MMAP HAVE_SYSVSHM
amandad:LOCKING=POSIX_FCNTL SETPGRP_VOID DEBUG_CODE
amandad:AMANDA_DEBUG_DAYS=4 BSD_SECURITY USE_AMANDAHOSTS
amandad:CLIENT_LOGIN=amanda FORCE_USERID HAVE_GZIP
amandad:COMPRESS_SUFFIX=.gz COMPRESS_FAST_OPT=--fast
amandad:COMPRESS_BEST_OPT=--best UNCOMPRESS_OPT=-dc
got packet:

Amanda 2.4 REQ HANDLE 00A-D8D70708 SEQ 1001699761
SECURITY USER amanda
SERVICE selfcheck
OPTIONS ;
GNUTAR /usr/local 0 OPTIONS 
|;bsd-auth;compress-fast;index;exclude-list=/usr/local/amanda/lib/amanda/exclude.gtar;
GNUTAR /prod 0 OPTIONS 
|;bsd-auth;compress-fast;index;exclude-list=/usr/local/amanda/lib/amanda/exclude.gtar;
GNUTAR / 0 OPTIONS 
|;bsd-auth;compress-fast;index;exclude-list=/usr/local/amanda/lib/amanda/exclude.gtar;


sending ack:

Amanda 2.4 ACK HANDLE 00A-D8D70708 SEQ 1001699761


bsd security: remote host head.lowell.mclinux.com user amanda local user amanda
amandahosts security check passed
amandad: running service /usr/local/amanda/libexec/selfcheck
amandad: got packet:

Amanda 2.4 REQ HANDLE 00A-D8D70708 SEQ 1001699761
SECURITY USER amanda
SERVICE selfcheck
OPTIONS ;
GNUTAR /usr/local 0 OPTIONS 
|;bsd-auth;compress-fast;index;exclude-list=/usr/local/amanda/lib/amanda/exclude.gtar;
GNUTAR /prod 0 OPTIONS 
|;bsd-auth;compress-fast;index;exclude-list=/usr/local/amanda/lib/amanda/exclude.gtar;
GNUTAR / 0 OPTIONS 
|;bsd-auth;compress-fast;index;exclude-list=/usr/local/amanda/lib/amanda/exclude.gtar;


amandad: received dup P_REQ packet, ACKing it
sending ack:

Amanda 2.4 ACK HANDLE 00A-D8D70708 SEQ 1001699761


amandad: got packet:

Amanda 2.4 REQ HANDLE 00A-D8D70708 SEQ 1001699761
SECURITY USER amanda
SERVICE selfcheck
OPTIONS ;
GNUTAR /usr/local 0 OPTIONS 
|;bsd-auth;compress-fast;index;exclude-list=/usr/local/amanda/lib/amanda/exclude.gtar;
GNUTAR /prod 0 OPTIONS 
|;bsd-auth;compress-fast;index;exclude-list=/usr/local/amanda/lib/amanda/exclude.gtar;
GNUTAR / 0 OPTIONS 
|;bsd-auth;compress-fast;index;exclude-list=/usr/local/amanda/lib/amanda/exclude.gtar;


amandad: received dup P_REQ packet, ACKing it
sending ack:

Amanda 2.4 ACK HANDLE 00A-D8D70708 SEQ 1001699761




selfcheck: debug 1 pid 2605 ruid 200 euid 200 start time Fri Sep 28 13:55:49 2001
/usr/local/amanda/libexec/selfcheck: version 2.4.2p2
selfcheck: checking disk /usr/local



Re: 7 tapes/1 month

2001-09-20 Thread Paul Lussier


In a message dated: Thu, 20 Sep 2001 16:12:02 +0200
Nicolae Mihalache said:

Hello!

I have a tape changer with 7 tapes and I want to run a dumpcycle of one
month. So only 6 tapes can be used for a full backup resulting an
average of 1 tape/5 days. In the meantime the backup should be holded on
the holding disk so a tape write should happen only when the resulting
data is big  enough to fill a reasonable percent of the tape. How can I
configure amanda to behave in such a way without manual intervention? By
default amanda tries to write a tape each day.

Your best bet is to write a shell script run from cron to automate 
the amflush command.

I would set up amanda so that it essentially failed each night, 
resulting in all the data being kept on the holding disk.  Run a 
script from cron that figured out the amount of data on the holding 
disk and, once enough data had accumulated, ran amflush.  Of course, 
because amflush is an interactive process, you'll need to use 
something like 'expect' or Perl's Expect.pm module to handle this type 
of thing.
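As a rough illustration of the cron script's decision step, here's a sketch. The holding-disk path and threshold are assumptions, and the amflush invocation is left as a comment since it normally prompts for input (which is where expect comes in):

```shell
# Sketch: flush the holding disk only once enough data has piled up.
# The path and threshold below are assumptions -- adapt to your site.
holding_used_kb() {
  du -sk "$1" 2>/dev/null | awk '{print $1 + 0}'
}
should_flush() {   # $1 = holding dir, $2 = threshold in KB
  [ "$(holding_used_kb "$1")" -ge "$2" ]
}
# From cron, something like (untested; amflush is interactive,
# so you'd drive it with expect/Expect.pm):
#   should_flush /amanda/holding 20000000 && run_expect_wrapped_amflush
```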

Hope that helps.

-- 

Seeya,
Paul







runtapes question

2001-09-20 Thread Paul Lussier


Hi all,

If you set the 'runtapes' option to something > 1, is there anything 
else you need to do?

Do you need to specify the beginning/ending slots or anything,
or does amanda assume that you have X number of tapes in sequential
order in your changer and will automagically issue a 'slot next'
command X-1 times until the last tape is used?
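For context, enabling multiple tapes per run appears to be a one-line amanda.conf change, with the slot range coming from the changer configuration rather than amanda.conf itself (values below are illustrative, not from any particular setup):

```
# amanda.conf fragment (illustrative values)
runtapes 3              # amanda may use up to 3 tapes per run
tpchanger "chg-multi"   # some changer script must be configured
```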

Thanks,

-- 

Seeya,
Paul







Re: Backing up a system twice, simultaneously ?

2001-09-01 Thread Paul Lussier


In a message dated: Wed, 22 Aug 2001 17:39:54 CDT
Marc W. Mengel said:

I have a script that will rsh over to some list of nodes and make a
disklist, and one to check your disklist and tell you if new filesystems
have appeared, and suggest a new disklist.   I can post them if
anyone's interested.

Hmmm, that might be quite useful.  I'd like to take a look at it if 
you wouldn't mind posting it.

Thanks,
-- 

Seeya,
Paul

...we don't need to be perfect to be the best around,
and we never stop trying to be better. 
   Tom Clancy, The Bear and The Dragon

 If you're not having fun, you're not doing it right!





Getting a TOC from an existing tape?

2001-09-01 Thread Paul Lussier


Hi all,

I just checked the FAQ-O-Matic and docs/, but couldn't find this answer.
(btw, looks like that fom needs some clean-up, there are a bunch of
 empty New Item items in there).

I have a tape, and want to find out what's on it directly from
the tape.  I know I've seen this answered some where before, but
can't seem to remember where.

Thanks,


-- 

Seeya,
Paul






Amanda Faq-O-Matic

2001-08-30 Thread Paul Lussier


Hi all,

As I was browsing the Amanda FOM, I noticed that not only are there a 
bunch of empty New Item listings, but also that there is a lot of 
information missing from the FAQ that I would expect to be in there.

If whomever is in charge of the FOM is willing, I'll gladly volunteer 
to clean it up and start adding the missing FAQs to it.

Feel free to contact me on or off list about this :)


-- 

Seeya,
Paul






Re: rewinding problem

2001-08-30 Thread Paul Lussier


In a message dated: Thu, 30 Aug 2001 18:21:13 +0200
Sandor Feher said:

HI,

I use 2.4.2p2 and I have a weird problem with my tapes.
Sometimes they work sometimes not. Rather not in these time.
I back up my data to a hp surestore t4000 and I got the following
message:

bash-2.04$ /usr/sbin/amcheck DailySet1
Amanda Tape Server Host Check
-
Holding disk /images/tmp: 312064 KB disk space available, that's plenty
ERROR: /dev/nst0: rewinding tape: Input/output error
   (expecting tape Napi02 or a new tape)
NOTE: skipping tape-writable test
Server check took 30.173 seconds


What happens if you run amcheck again immediately after this?

I see similar problems with 'amtape conf current' or 'amlabel' 
quite often, but if I run the same command again (sometimes I need to 
do it twice) it eventually comes back correctly.
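A crude workaround while the real cause is unknown is to wrap the flaky command in a small retry loop. A sketch (the sleep interval is a guess at how long the drive needs to settle):

```shell
# Retry a command up to N times with a pause between attempts,
# e.g. for tape operations that fail once and then succeed.
retry() {
  n=$1; shift
  i=0
  while [ "$i" -lt "$n" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1   # settle time between attempts (interval is a guess)
  done
  return 1
}
# Example: retry 3 amcheck DailySet1
```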

I'm not sure what causes it, but I'd guess there's some time-out 
somewhere that needs to be increased.  It almost seems that amanda 
isn't waiting long enough for the tape drive to get back to her.
-- 

Seeya,
Paul






Re: rewinding problem

2001-08-30 Thread Paul Lussier


In a message dated: Thu, 30 Aug 2001 14:06:33 EDT
Mitch Collinsworth said:

On Thu, 30 Aug 2001, Paul Lussier wrote:

 I'm not sure what causes it, but I'd guess there's some time-out 
 somewhere that needs to be increased.  It almost seems that amanda 
 isn't waiting long enough for the tape drive to get back to her.

Are you using a tape changer or library?  This is a known issue with
some changers.  I haven't seen reports of it with standalone drives,
but it's certainly a possibility.

I'm using a tape changer, the HP SureStore 818 DLT Autochanger to be 
exact.
-- 

Seeya,
Paul






--with-testing?

2001-08-27 Thread Paul Lussier


Hi all,

What is the purpose of this switch?  I thought it was so one could 
install 2 parallel versions of amanda, one of which would have 
different names than those of the previously installed instance.

However, things didn't seem to get installed with what I chose as a 
suffix.

Am I missing something?

Thanks,


-- 

Seeya,
Paul






Re: support for backup images larger than the tape drive

2001-08-24 Thread Paul Lussier


In a message dated: Fri, 24 Aug 2001 10:06:25 CDT
[EMAIL PROTECTED] said:

 Can anybody tell me if amanda supports backup images larger than the tape 
 drive.

No, it does not.  The feature is being actively worked on, but no working
code exists as of yet.

Of course, the fact that it has been so long and it is still unimplemented
gives a clue as to how difficult a job it is. 

By backup image, I'm interpreting that as a single file system.  Is 
that correct?  What about being able to have multiple tapes per backup 
run?  I thought I saw something about that mentioned recently.  I 
think it's mentioned in the docs as well, that you can have a 
multi-tape run.  

Thanks,


-- 

Seeya,
Paul






Re: Backing up a system twice, simultaneously ?

2001-08-21 Thread Paul Lussier


Hi all,

Is there a way to tell amanda to back up everything on a system?

This would solve my problem if amanda were able to essentially do a df
of the system, find out what file systems were mounted locally, and 
then back them all up.

That way, whichever node in my cluster is the master is essentially 
irrelevant.

Sorry for thinking out loud here :)
-- 

Seeya,
Paul






Documentation Discrepancies

2001-07-27 Thread Paul Lussier


Hi all,

I was just reading through the docs for 2.4.2p2 and noticed a few 
discrepancies and/or places for improvement:

amanda-2.4.2p2/README   - still refers in several places to 2.4.1
- does not mention what's new in 2.4.2 at all

amanda-2.4.2p2/docs/FAQ - refers to Oren Teich's FAQ at
  http://ugrad-www.cs.colorado.edu/~teich/amanda
  which no longer exists.

amanda-2.4.2p2/docs/WHATS.NEW
- states the document is for 2.3; this is misleading,
  IMO, and it should rather only include what's new
  in 2.4.2 since 2.4.1.  Maybe there should be
  a COMING.ATTRACTIONS doc?

amanda-2.4.2p2/docs/UPGRADE
- still refers to upgrading from a pre-2.4.0 config
- does not mention anything about upgrading within
  2.4.x 


www.amanda.org  - should probably have links to all these docs
  to make it easier to access.  

- might want to have a Features page
  listing what the current/stable release does.

- not a lot of useful info there for someone
  looking to see if Amanda could be useful
  to them.  Granted, they could grab the source
  and read docs/*, but most want a quick summary.
  


Since I'm not sure  who's in charge of documentation, I sent this to
both -users and -hackers.  Hope this helps some :)
-- 

Seeya,
Paul






Re: one tape is not enough

2001-07-13 Thread Paul Lussier


In a message dated: Fri, 13 Jul 2001 09:41:24 +0200
Kris Boulez said:

Isn't there an option since amanda 2.2 to use multiple tapes for one
dump (see docs/MULTITAPE ). Is this just a 'would be nice to have' or
are people using this in production.

Good question, I don't know.  It's been some time since I've looked 
at 2.4.2, I'm still running on a pre-release of 2.4.2 and 2.4.1 
merged together (don't fix what ain't broken, right? :)

If this support is in there, then great; it'll definitely be used by a 
lot of people (including myself in the not too distant future :)
-- 

Seeya,
Paul






Re: one tape is not enough

2001-07-12 Thread Paul Lussier


In a message dated: Wed, 11 Jul 2001 13:40:13 +0200
Vicente Vives said:

My problem is very 'simple'...
I have to dump 13 GB using 2 GB tapes.

Can i do this?
How ?

You can, you just need to be creative in how you do it.  There are 
several approaches you could take:

- split the 13G into smaller (2GB) chunks and use a 
  different config for each one.  Using a tape stacker/library
  you can easily write a shell script which would run out of
  cron and manipulate the tapes for you, placing the next tape
  in after the previous has finished.

- Manually run the level 0 dump once (week/month/year, 
  whatever) and run with only incrementals on a daily
  (or otherwise frequent) basis. Set your bumpsize
  sufficiently such that you get a huge savings in space
  when bumping to the next level dump.


- stagger the introduction of the data to be backed up
  across several days so the level 0 for each data set
  isn't all on one day.  For example, given the following
  file systems to back up:

 Filesystem  1k-blocks      Used  Available  Use%  Mounted on
 /dev/hda1      202220     69687     122093   36%  /
 /dev/hda7      101089     12285      83585   13%  /home
 /dev/hda5      497829      2519     469608    1%  /tmp
 /dev/hda2     1517952    915420     525420   64%  /usr
 /dev/hda3      608756     78756     499076   14%  /var
 /dev/sda1    26209780  25024392     919112   96%  /home1
 /dev/sda2    26209812  25433196     510340   98%  /home2
 /dev/sda5    21172948   3390312   17567528   16%  /nfs
 /dev/hda8     6783660   1775812    4663256   28%  /usr/local


  you could stagger the introduction of each to the backup
  cycle like this:

Mon /, /home, /usr
Tue /var, /usr/local
Wed /home1
Thu /home2
Fri /nfs

  (granted, the sizes of my file systems are such that even 
   doing this wouldn't help if I were limited to 2G tapes;
   it's an example only, but you get the point :)
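The second approach above hinges on the bump parameters. A hedged amanda.conf fragment, with values that are purely illustrative:

```
# amanda.conf fragment (illustrative values)
bumpsize 20 Mb   # min estimated savings before bumping to the next level
bumpdays 1       # min days at a level before bumping
bumpmult 2       # bumpsize multiplier for each additional level
```

Setting bumpsize high enough makes incrementals stay small between the manual level 0 runs.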


Hope that helps some.

-- 

Seeya,
Paul






Re: Investigating Amanda

2001-06-05 Thread Paul Lussier


In a message dated: Tue, 05 Jun 2001 13:14:06 EDT
Peter Matulis said:

Hi,
 
First post.  Here are my initial questions with using Amanda:

1. Is it possible to use the same tape for multiple (daily) runs?  How?

I'm not quite sure what you're asking here.  Do you mean

- append multiple runs to the same tape?
- run backups more frequently than once a day, overwriting the 
  same tape each time?
- something else entirely?  

2. Is it possible to simply do full backups (full dumps) manually?
   Without any schedule?  How?

Sure, man amdump.  Just run the command on the configuration of your 
choice by hand whenever you want.  

3. Do I need to run any other software/daemons on the server and/or
   client for network-wide backups to occur?

Install the amanda server sw on the server, the client sw on the 
clients, and follow the INSTALL directions.  That should be it.
You'll obviously need either dump/restore, GNU tar, or some 
equivalent installed on the client to actually perform the backups, 
of course.

4. Does using Samba (smbmount) to back up Windows clients really work?

Don't know the answer to that one, I don't back up any Windows 
systems.  Anyone else?
-- 

Seeya,
Paul

If we spent half as much time coding as we do bashing
 others for not supporting Free/Open Source Software,
  just imagine how much cooler we would be!

 If you're not having fun, you're not doing it right!





Re: New tape...

2001-06-05 Thread Paul Lussier


In a message dated: Tue, 05 Jun 2001 18:37:48 BST
David Galveias said:

I'm running amanda for the first time.
When i run amdump it asks for a new tape. I have all my tapes already
labeled. What can i do?

What do mean by labeled?  You stuck a label on the cartridge or you 
ran amlabel on them?
-- 

Seeya,
Paul






Re: Amanda RedHat w kernel 2.2.19

2001-04-27 Thread Paul Lussier


In a message dated: Fri, 27 Apr 2001 11:09:12 EDT
Joshua Baker-LePain said:

One of the reasons for the move to 2.2.19 is NFSv3 support (finally).

Well, IMO, the single best reason for moving to 2.2.19 is to get 
rid of the ptrace root exploit which exists in all kernels <= 2.2.18, 
and, AFAIK, all 2.4.x kernels.

NFS v3 support is *there*, however, it's not incredibly stable.  
We've been testing it here, and noticed some flakiness, especially if 
not all the clients have been upgraded to use NFSv3 support.  The 
server doesn't easily back down to providing NFSv2 from v3 when the 
client only has v2 support.

I don't know all the particulars, but I can find out what our NFS 
guys found if anyone wants more specific details.

You want locking.

Yes, you really do want locking :)  Though aren't locking and NFS 
kind of an oxymoron?
-- 

Seeya,
Paul

It may look like I'm just sitting here doing nothing,
   but I'm really actively waiting for all my problems to go away.

 If you're not having fun, you're not doing it right!





Re: NFS mounted holding disk

2001-04-17 Thread Paul Lussier


In a message dated: Tue, 17 Apr 2001 10:42:13 BST
Anthony Worrall said:

One reason would be the NFS server has a 1Gb interface and the clients and
tape server have only 100Mb.

Okay, so now you're saturating the 100Mb interface on the tape server 
twice?  I still don't see the advantage in this.
-- 

Seeya,
Paul






Re: Linux (Debian) and large files?

2001-02-06 Thread Paul Lussier

On Mon, Feb 05, 2001 at 11:30:36PM +0100, Christoph Scheeder wrote:
 Hi,
 sorry to interupt,but debian does not use rpm packageformat.
 it has it own format called dpkg, so www.rpmfind.com is a bad idea...

No, it's not necessarily a bad idea.  You can still use RPMs with Debian,
either by installing the rpm .deb and manually using rpm to install it, or
by using alien to convert the rpm to a deb and then using dpkg to install the
resultant .deb package.

Seeya,
Paul




Re: Getting us started

2001-01-18 Thread Paul Lussier



Hi Michael,

Where to shop for anything seems to be a pretty loaded question.  I would 
start with whoever your hardware vendors are, then shop around to see who has 
what for what price.  Compare things like hw support, RMA policies, 
re-stocking fees etc.

Obviously, for a tape drive, you first need to decide what kind you want 
before you shop around.  What you need is going to be quite dependent upon 
how much you want to back up, how often, how reliable you want the backups to 
be and for how long, how much you've budgeted for the hardware and for the 
recurring costs of things like tapes and cleaning cartridges.

I prefer DLT because it has a pretty high capacity, it's got a good 
reputation, a lot of people are using it, so drivers and knowledge are easily 
obtainable, and I've got experience with it.  I back up a lot of data, so
an autochanger was a must for me (especially since I'm lazy :)

Other people don't see the need for a changer, others are happy with DDS 
formats, and some can't afford DLT, so must use something less. 

I guess I'd recommend deciding what you need for capacity, and then 
researching what types of devices fit that need for you.  Once you have that,
it might be a little easier for us to recommend a particular brand or 
something based on your criteria.

I hope that helps.  Oh, btw, I buy most of my gear through a VAR called 
TechData, for what that's worth :)
-- 

Seeya,
Paul






Re: DLT Changer Recomendations

2000-12-19 Thread Paul Lussier

In a message dated: Mon, 18 Dec 2000 18:54:20 EST
Mitch Collinsworth said:

Well don't let that stop you.  Get on amanda-users and ask.  Someone
will know how.  I haven't done it on Linux myself yet, but plenty of
people have.

-Mitch


On Mon, 18 Dec 2000, George Kelbley wrote:

 Yeah, several people have suggested that.  Problem is we can't figure
 out how to create the /dev/chg or whatever device.  Our makedev won't do
 it . . .

You need to have the generic scsi driver compiled into your kernel.  In my 
amanda.conf file, I have:

tpchanger "chg-scsi"# the tape-changer glue script
changerfile "/usr/local/etc/amanda/chg-scsi.conf"
tapedev "0" # the no-rewind tape device to be used

Then changerfile has:

number_configs  1
eject   1   # optional: Tapedrives need an eject command
sleep   180 # Seconds to wait until the tape gets ready
cleanmax20  # optional: How many times could a cleaning tape
# get used
changerdev  /dev/sgb# this is the device used to move tapes around

config  0   # this is what matches the `tapedev "0"'
# line from amanda.conf
drivenum0
dev /dev/nst0   # this is the actual tape drive itself
startuse0   # The slots associated with the drive 0
enduse  6
statfile/usr/local/etc/amanda/tape-slot
# The file where the actual slot is stored  

cleancart   7   # the slot where the cleaning cartridge
# for drive 0 is located
cleanfile   /usr/local/etc/amanda/tape-clean
# The file where the cleanings are recorded
usagecount  /usr/local/etc/amanda/totaltime 


I hope this helps somewhat.
-- 
Seeya,
Paul

   I'm in shape, my shape just happens to be pear!

 If you're not having fun, you're not doing it right!





Re: DLT Changer Recomendations

2000-12-19 Thread Paul Lussier


In a message dated: Tue, 19 Dec 2000 15:50:50 MST
George Kelbley said:

Actually the clouds are beginning to clear, we figured this out earlier
today,

D'oh!  I had started to compose this e-mail early this morning but got 
distracted and forgot to send it :)

however we still can't seem to get everything to talk to the
right stuff.  The changer comes up as sg3  when we boot at least. So,
still figuring and tinkering but we may be making progress.

It took me a while, and once it worked, I didn't touch it.  Also, be aware, I 
had a lot of trouble with the chg-scsi code from the 2.4.2 CVS snapshot from
about May.  I actually had to use the chg-scsi code from a May or June CVS 
snapshot of 2.5.0 and merge that in with the 2.4.2 CVS code.

I haven't tried the latest 2.4.2 release to determine whether or not the 
chg-scsi code works (something about mucking with a working system :)

Thanks for the help,

No problem :)

 ain't open source great?

Absolutely!  I've never been able to pay for support as good as I get on the 
amanda mailing list.  The answers may not always be correct or applicable
to my exact problem, but at least I get a bunch of things to try, which is 
usually better than I get from any vendor :)
-- 
Seeya,
Paul






Re: DLT Changer Recomendations

2000-12-18 Thread Paul Lussier

In a message dated: Fri, 15 Dec 2000 15:54:52 MST
Greg Skafte said:

I'm looking at getting a couple of DLT Changers, and seeking advice
on peoples preferneces and opions.

I'm using the HP SureStore 818 w/ DLT7000 drive.  I've had it for about 8 
months now, and it's rock solid.  I've also used the Quantum PowerStor L200,
which I believe the HP to be an OEM of with some minor microcode changes.

The 818 only comes with DLT8000 drives now, the price is somewhere around
$7-8k.

Hope that helps.
-- 
Seeya,
Paul






Re: HP SureStore DLT Autoloader 818

2000-12-18 Thread Paul Lussier

In a message dated: Mon, 18 Dec 2000 15:48:08 +0100
Juan Jos Ferrer said:


Hi
  Has anybody a HP SureStore DLT Autoloader 818 ? 
I have problems with the tape changer (HP C6280-8000) in linux (2.2.16). 
If you know an easy form to change tapes... please tell me.

Ayup, works just dandy for me.  However, the version of amanda I'm using is 
kind of a kludge.  I'm using the chg-scsi code from a May CVS snapshot of 
2.5.0, merged into a September snapshot of 2.4.2.

I have not tried the most recent release of 2.4.2 yet, so I don't know if that 
works or not.  At one point someone told me the chg-scsi code was identical 
between 2.4.2 and 2.5.0, however, when I tried just the Sept. 2.4.2 snapshot, 
nothing worked correctly, so I've stuck with my kludged configuration until I 
can figure out how to get the released 2.4.2 to work. (I'm desperately afraid 
to muck with a working config though :)
-- 
Seeya,
Paul






Re: Strange errors setting up a new host.

2000-12-13 Thread Paul Lussier

In a message dated: Wed, 13 Dec 2000 22:55:13 PST
Phil Davis said:

FAILURE AND STRANGE DUMP SUMMARY:
 dhcp  /dev/ida/c0d0p6 lev 0 FAILED [disk /dev/ida/c0d0p6 offline on dhcp?]
 dhcp  /dev/ida/c0d0p7 lev 0 FAILED [disk /dev/ida/c0d0p7 offline on dhcp?]
 dhcp  /dev/ida/c0d0p5 lev 0 FAILED [disk /dev/ida/c0d0p5 offline on dhcp?]
 dhcp  /dev/ida/c0d0p8 lev 0 FAILED [disk /dev/ida/c0d0p8 offline on dhcp?]
 dhcp  /dev/ida/c0d0p12 lev 0 FAILED [disk /dev/ida/c0d0p12 offline on dhcp?]
 

 So anyone have any clue as to what this error really means?  It's
 exceptionally vague, since this machine uses these partitions; therefore
 they cannot be offline...

 any ideas?

Did you check to see if your amanda user is in the group which owns these 
devices?  Also, are the partitions rw by that group?
-- 
Seeya,
Paul






Re: Auto-eject tapes

2000-12-12 Thread Paul Lussier

In a message dated: Tue, 12 Dec 2000 09:36:58 GMT
"Christopher P. Mills" said:

Easy.

Set the cron line to include a tape eject command after running amdump.

In my configuration, the line looks something like:

30 03 * * * /usr/local/amanda/sbin/amdump default; mt -f /dev/nst0 rewoffl

I think you could also do something like:

amdump default && amtape default eject

Though I don't remember if the amtape command is only for changers.  It might 
be, which would explain why it works for me :)
-- 
Seeya,
Paul






Re: Auto-eject tapes

2000-12-12 Thread Paul Lussier

In a message dated: Tue, 12 Dec 2000 09:44:29 CST
Bill Carlson said:

On Tue, 12 Dec 2000, Paul Lussier wrote:

 I think you could also do something like:

 amdump default && amtape default eject


Does amdump exit with a nonzero status on some errors now? I seem to
remember that it always exited with zero. Nothing in the FAQ-O-Matic, I
know this has come up before.

I honestly don't know.  I don't actually use the above line, which is why I 
said "I think you can" :)

I would expect that if there were nothing wrong with the amdump process it 
should exit with 0, but if something went wrong it would exit with non-0
status.  What I expect and what reality is may be 2 completely different 
things for reasons I'm not aware of :)
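If anyone wants to be explicit about it rather than trusting `&&` in a crontab line, something like the following wrapper would do; a minimal sketch, where the config name `default` is just the example from above and whether amdump really exits non-zero on failure is exactly the open question here:

```shell
#!/bin/sh
# Eject only when amdump reports success; otherwise log the status.
# Assumes amdump exits non-zero on failure -- the open question above.
backup_and_eject() {
    conf=$1
    amdump "$conf"
    status=$?
    if [ "$status" -eq 0 ]; then
        amtape "$conf" eject
    else
        echo "amdump $conf exited with status $status" >&2
    fi
    return "$status"
}

# Invoked from cron in place of the bare amdump line, e.g.:
# backup_and_eject default
```

Same idea as the one-liner, just with room to mail a report or skip the eject when something went wrong.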
-- 
Seeya,
Paul






Re: amrecover problems

2000-11-25 Thread Paul Lussier


In a message dated: Sat, 25 Nov 2000 17:09:24 EST
"John R. Jackson" said:

  EOF, check amidxtaped.debug file on amanda.

Would it help if we made this message 72 point, bold and rang the
bell?  :-) :-)

Probably not, but it might be a fun touch to add ;)

What's in amidxtaped.debug on amanda???

Nothing that tells me anything, that's the problem.

I currently have a restore going, so as soon as it finishes, I'll rerun it and
let you know.

Interestingly enough, I tried the following to get to the 2nd tape, and it 
seems to work (well, it's still running, anyway :)

- Placed tape 1 in the drive
- Answered 'Y' to Load 1st tape now question
- Let the restore of tape 1 run.
- At the end, when it asked if it should load 2nd tape
  I loaded 2nd tape (in a second window with amtape)
- Answered 'Y' to the Load 2nd tape now question
- It failed with:

EOF, check amidxtaped.debug file on amanda.
amrecover: short block 0 bytes
UNKNOWN file
amrecover: Can't read file header
extract_list - child returned non-zero status: 1

- I answered 'N' to Continue
- re-added the path I wanted to restore
- typed extract
- Answered 'Y' to Load 1st tape question
- It failed (obviously: wrong tape in drive)
- Answered 'Y' to Load 2nd tape now question
- It's still extracting from 2nd tape

Don't know what's going to happen when it's ready for the 3rd tape. I'm 
assuming the same cycle will work.

One question I have though is, should I have to manually load each tape into 
the drive (by manually, I mean using 'amtape config slot X') or should I 
just be able to answer 'Y' when it asks me "Load next tape now" and expect 
amanda to find the correct tape in the changer library?

Thanks,

-- 
Seeya,
Paul






Re: ejecting amanda tape - after dump finishes

2000-11-17 Thread Paul Lussier

In a message dated: Fri, 17 Nov 2000 15:48:40 GMT
Denise Ives said:

Can Amanda be configured to automatically eject the tape from the tape
drive when a dump to tape is finished? 

Well, there's not much to it; in amanda's crontab entry, just do something
like:

55 23 * * 0 amdump DailySet1 && amtape DailySet1 eject
or
55 23 * * 0 amdump DailySet1;amtape DailySet1 eject

Or should I write a perl or shell script to automate the both the dump and
eject tape process?

This is also a perfectly viable option.  Personally, I like simplicity, so I'd 
probably opt for one of the above methods.  But, as Dave W. pointed out,
if you use an autochanger, you can just run amcheck right after the dump is 
done, and this will accomplish the same thing.

The way I handle it with my autochanger is to set up crontab entries for
each day of the week to run 'amtape config slot #' to make sure the correct 
slot is loaded each day.  A little verbose, and probably better handled
with a script, but it works:

45 23 * * 1 amtape DailySet1 slot 1

This runs 10 minutes before amdump runs.  I have one for each day of the week.
I did it this way so that it was obvious to anyone else what was going on.
I may place it all in one script and exec that instead, but either way works 
just fine.
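If that single script ever materializes, it could be as small as deriving the slot from the weekday; a sketch, assuming GNU date's `%u` and a one-slot-per-weekday layout (Monday = slot 1 through Sunday = slot 7), neither of which is anything Amanda mandates:

```shell
#!/bin/sh
# Pick today's changer slot from the weekday and load it before amdump.
# Slot numbering (1=Mon .. 7=Sun) is an assumption about the changer.
slot_for_today() {
    date +%u            # strftime %u: 1..7, Monday..Sunday
}

load_todays_slot() {
    conf=$1
    amtape "$conf" slot "$(slot_for_today)"
}

# Then crontab needs one entry instead of seven, e.g.:
# load_todays_slot DailySet1
```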

-- 
Seeya,
Paul






setting max reuse number for a tape?

2000-11-10 Thread Paul Lussier


Hi all,

Is there a way to tell amanda not to re-use a tape more than X times?

I'd like to limit my tape re-use to something like 7 times or so before I 
remove it from the configuration and replace it with a new one.
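As far as I know there's no built-in knob for this, but a use-count wrapper around `amadmin <conf> no-reuse` could fake it. Everything in this sketch is illustrative rather than an Amanda feature: the counts file, its location, the threshold, and the whole bookkeeping scheme:

```shell
#!/bin/sh
# Track per-tape use counts and retire tapes after MAX_USES mounts.
# COUNTS file holds one "<label> <count>" line per tape.  The file
# name, threshold, and this whole scheme are illustrative only.
CONF=weekly
MAX_USES=7
COUNTS=/var/lib/amanda/$CONF/tape-usage

bump_and_check() {
    label=$1
    count=$(awk -v l="$label" '$1 == l {print $2}' "$COUNTS")
    count=$((${count:-0} + 1))
    # Rewrite the counts file with the new total for this label.
    { awk -v l="$label" '$1 != l' "$COUNTS"; echo "$label $count"; } > "$COUNTS.new" \
        && mv "$COUNTS.new" "$COUNTS"
    if [ "$count" -ge "$MAX_USES" ]; then
        amadmin "$CONF" no-reuse "$label"
    fi
}
```

Call `bump_and_check <label>` after each run with the tape label from the amdump report; once a tape crosses the limit, `amadmin no-reuse` keeps Amanda from writing to it again.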

Thanks,


-- 
Seeya,
Paul






amrecover: Unexpected server end of file

2000-11-10 Thread Paul Lussier


Hi all, could someone please explain what this error means?

[root@amanda amanda]# amrecover -d /dev/st0
AMRECOVER Version 2.4.2. Contacting server on amanda ...
220 amanda AMANDA index server (2.4.2) ready.
200 Access OK
Setting restore date to today (2000-11-10)
200 Working date set to 2000-11-10.
200 Working date set to 2000-11-10.
amrecover: Unexpected server end of file

Any idea what "file" came to an unexpected end?

I just started getting these today and can't figure out why.  Any help is 
greatly appreciated.
-- 
Seeya,
Paul






Re: amrecover: Unexpected server end of file

2000-11-10 Thread Paul Lussier

In a message dated: Fri, 10 Nov 2000 16:51:33 EST
"John R. Jackson" said:

Never mind, I was stupid, I forgot the config name :)

Which obviously should be better handled, i.e. an explicit message back
to amrecover.  So it wasn't exactly all your fault :-).

I already have a note to myself to fix this.

Oh, okay :)

Thanks,
-- 
Seeya,
Paul
