Re: hardware/config question from a newbie

2004-02-13 Thread Jay Lessert
On Fri, Feb 13, 2004 at 08:31:03AM -0900, Kevin S Secor wrote:
 sorry bout that the OS is sol 9

Great.  So the only caveat here is that I've never personally run
MTX/sgen on a fibre-connected robot.  No reason that shouldn't be
totally transparent (just like the tape drives themselves), I've just
never done it myself.

My more-or-less standard Solaris/Amanda/MTX spiel:

Build and install MTX, from http://mtx.badtux.net.

Read the shell script contrib/config_sgen_solaris.sh from the
MTX source directory and use that to teach yourself what to do.  Resist
the temptation to just run the script; even if it would work, it's not
exactly what you want.  The steps would be something like:

0)  Read the sgen(7D) man page first, just to get a little familiar
with it.  You don't have to completely understand it.

You will *not* use sgen(7D) devices to read/write your tape drives
(you already have st(7D) devices for that).  You will only use
sgen(7D) to talk to the robot.

[That's not *quite* true... MTX's tapeinfo can be very useful,
 and you might want to look at using sgen(7D) for that later.]

1)  You already have /kernel/drv/sgen.conf, with all entries commented
out, correct?

Create a new sgen.conf that points at your robot.  If, for
example, it is at SCSI target 5, sgen.conf might look like:

device-type-config-list="changer";
name="sgen" class="scsi" target=5 lun=0;

The tape drive and robot should be at different addresses in some
way; they might be two different targets, both lun=0, or one
target with two luns (logical unit numbers).

probe-scsi-all (from the OpenBoot ok prompt) should show you.
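
If you decide later to expose a tape drive through sgen as well (for
tapeinfo, per the aside above), a combined sgen.conf might look
something like this sketch; the target numbers are made-up examples,
not something to copy verbatim:

# /kernel/drv/sgen.conf (sketch)
# "changer" covers the robot; "sequential" adds sgen nodes for
# tape drives, handy for mtx's tapeinfo.
device-type-config-list="changer","sequential";
name="sgen" class="scsi" target=5 lun=0;    # robot
name="sgen" class="scsi" target=4 lun=0;    # tape drive (illustrative)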


2)  # rem_drv sgen

You shouldn't see any alarming messages here, sgen shouldn't be
loaded at all.

# add_drv -v sgen

The -v should make comments about finding your controller and making
device entries for it.

Now you should have /dev/scsi/changer/c?t0d0, and you should be
able to go:

# mtx -f /dev/scsi/changer/c?t0d0 inquiry
# mtx -f /dev/scsi/changer/c?t0d0 status

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: changed from hw comp to sw comp

2004-02-04 Thread Jay Lessert
On Wed, Feb 04, 2004 at 10:49:01AM -0500, [EMAIL PROTECTED] wrote:
 I'm slightly baffled at the Tape Time though - it seems to have gone down. 
 I expected it to remain about the same, half the data at half the tape 
 write speed.
[clip]
 Typical results with large backup run using hardware compression:
                              Total       Full      Daily
 Output Size (meg)          69757.9    57995.6    11762.3
 Original Size (meg)        69757.9    57995.6    11762.3
 Avg Compressed Size (%)         --         --         --   (level:#disks ...)
 
 Tape Time (hrs:min)           3:02       2:16       0:46
 Tape Size (meg)             69757.9    57995.6    11762.3
 Tape Used (%)                 100.0       82.9       17.1   (level:#disks ...)
 Filesystems Taped                93         19         74   (1:74)
 Avg Tp Write Rate (k/s)      6541.9     7264.8     4388.6

 One of the first runs using software compression:
                              Total       Full      Daily
 Output Size (meg)          27744.5    23489.8     4254.7
 Original Size (meg)        56949.0    45437.6    11511.4
 Avg Compressed Size (%)        48.7       51.7       37.0   (level:#disks ...)
 
 Tape Time (hrs:min)           1:25       1:08       0:17
 Tape Size (meg)             27744.6    23489.8     4254.8
 Tape Used (%)                  57.8       48.6        9.2   (level:#disks ...)
 Filesystems Taped                91         27         64   (1:64)
 Avg Tp Write Rate (k/s)      5560.0     5919.3     4164.3

Doesn't look *too* mysterious.  The spec'ed native SCSI rate for
an SDX-500 is 6MB/s, and on large files (the fulls) you get
exactly that, 5.9MB/s.

But your system is just not capable of feeding the SCSI interface on
the drive much faster than that.  On the HW compression run, you know
you're getting at least 70/50 compression, so the tape drive should be
sucking data at 8.4MB/s, but you're only getting 7.2MB/s on large
files.

This could be your SCSI controller, PCI congestion on the motherboard,
the kernel not recognizing the southbridge on the motherboard
optimally, etc.

Doesn't matter with your new configuration, though.  You've got
(apparently) lots of client CPUs to spread the SW compression time
over, and you're saturating the tape drive with your precompressed
data, so all is good.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: amrestore and compression

2004-01-21 Thread Jay Lessert
On Wed, Jan 21, 2004 at 03:06:30PM -0500, Aaron Smith wrote:
   I'm hoping someone can clear something up for me.  Does amrestore
 uncompress an image as it restores it?

The amrestore(8) man page says it does.

Amrestore  normally  writes output files in a format understood by
restore or  tar, even  if  the backups on the tape are compressed.

-r   Raw output.  Backup images are output exactly  as  they
  are  on the tape, including the amdump headers.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: amtapetype

2004-01-16 Thread Jay Lessert
On Fri, Jan 16, 2004 at 07:17:43PM -0500, Gene Heskett wrote:
 On Friday 16 January 2004 15:16, Alastair Neil wrote:
  I just ran amtypetype -f /dev/rmt/0bn, a Sun Storedge L9 autochanger
[clip]
 Making use of software compression can beat the hardware's efficiency, 
 sometimes by quite a large margin.

Just the usual "not everybody runs x86 Linux" comments. :-)

As always, the tradeoffs to remember are:

- With amanda, SW compression will let you get more data
  on the tape, because gzip --fast will get a higher compression
  ratio than HW compression, and because amanda can predict more
  accurately what the compression ratio will be.

  (gzip's compression ratio advantage is not nearly as great with
  LTO and SDLT drives, but gzip certainly beats Alastair's DLT8000)

- If you have big disks and slow CPUs (as is arguably the case on
  even the fastest Sun Enterprise server), gzip may simply not be a
  viable option, because you can't afford to wait for it to
  finish.

  I'm not kidding.  Been there, done that.

 The tape has now been written in compressed mode, so even if you turn 
 the switches off, it will set it back to compressed during the tape 
 recognition phase after you insert the tape.

That is not true for Solaris.  With a properly configured st.conf, you
will always write exactly the density you ask for based on the device
you're calling (as tapedev) in amanda.conf.  If you use Quantum's
recommended st.conf for DLT8000, for example, you will write:

/dev/rmt/0ln:   35GB uncompressed
/dev/rmt/0mn:   35GB compressed
/dev/rmt/0hn:   40GB uncompressed
/dev/rmt/0cn:   40GB compressed
/dev/rmt/0un:   40GB compressed

(For reads, the Solaris st(7D) driver will auto-read whatever density
 is there regardless of what device you call.)

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: How to fix annoying break in tape sequence?

2003-12-01 Thread Jay Lessert
On Mon, Dec 01, 2003 at 02:56:05PM +, Dave Ewart wrote:
 dumpcycle 7 days
 runspercycle 5
 tapecycle 20 tapes
[clip]
 Last Friday, we were due to use OurName-B-Fri, but because of a disk
 space problem in /var/log, the job failed.  AMANDA is still expecting to
 use OurName-B-Fri, rather than OurName-C-Mon which would be part of the
 usual schedule.  After putting OurName-C-Mon in the drive, amcheck
 produces this:
 
 ERROR: cannot overwrite active tape OurName-C-Mon
(expecting tape OurName-B-Fri or a new tape)
 
 Tried a suggestion I found in the AMANDA FAQ, which was to set
 OurName-B-Fri as no-reuse so that it wouldn't ask for it, but this
 still wouldn't make it want to use OurName-C-Mon:
 
 ERROR: cannot overwrite active tape CRUK-Weekly-C-Mon
(expecting a new tape)
 
 What can I try here?  Basically, the contents of both OurName-B-Fri and
 OurName-C-Mon are 'disposable', since they are both due to be
 overwritten in any case.

No.  You set 'tapecycle 20', and OurName-B-Fri is the 20th tape at
the moment, not OurName-C-Mon.  tapecycle means TAPEcycle, not RUNcycle
or DAYcycle, or anything else, and OurName-C-Mon is not over-writable
until it becomes the 20th tape.  Make sense?

If you want to be able to arbitrarily skip tapes, you have to reduce
tapecycle.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: backup lasts forever on large fs

2003-11-11 Thread Jay Lessert
On Tue, Nov 11, 2003 at 08:50:58PM +0100, Zoltan Kato wrote:
 Looks like the estimate has timed out after a 1/2 hour. I do not know why
 estimation takes so long. What is more interesting: after amdump has
 finished there is still a gtar process running:

You mentioned, I think, that this is a file system with 2M inodes
(files and directories) consumed.

FWIW, my experience on Solaris 8 ufs with inode counts around 1M, and
GNU tar 1.13.19/1.13.25, is that estimates are very slow.  The backup
is slow, too, but you only do that once; estimates (always?
typically?) run 3X, for level 0 plus two levels of incremental.

I changed to ufsdump because of this.

I assume the --listed-incremental functionality is the problem, but
don't know if it is a bug or a feature, or what.
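
If you want to see the cost by hand, something along these lines
approximates one estimate pass; the gtar path, directory, and snapshot
file name are illustrative (the exact command amanda ran is in the
sendsize debug file under /tmp/amanda on the client):

# roughly one estimate pass; with --file /dev/null GNU tar skips reading
# file data, so the time is almost all directory/inode traversal
time /usr/local/bin/gtar --create --file /dev/null --directory /data/big \
    --one-file-system --listed-incremental /tmp/est-test.snar --sparse .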

Make sure that nothing (or as close to nothing as possible) is
competing with amanda for disk arm movement; I had etimeout and
dtimeout problems with amanda once, but only on Saturday night; there
was a weekly 'find / -name core' type job running at the same time, and
the contention for disk resources was killing both jobs.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Out of tape?

2003-10-31 Thread Jay Lessert
On Fri, Oct 31, 2003 at 04:40:53PM -0500, Paul Singh wrote:
 START taper datestamp 20031031 label DailySet101 tape 0
 INFO taper tape DailySet101 kb 224 fm 1 writing file: Input/output error
 FAIL taper beaker sdb2 20031031 0 [out of tape]
 ERROR taper no-tape [[writing file: Input/output error]]

Most likely bad tape or bad tape drive.  "out of tape" is just amanda's
attempt to explain the failure, but you'll likely find the underlying
I/O error in /var/*/messages.

First step is to confirm you can reliably write/read large amounts of
data to/from the drive without amanda (just use tar) and reliably move
back and forth to multiple files on the tape (using mt).
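
Something along these lines, with the device name adjusted to your
system (the no-rewind name here is just an example), is usually enough
to prove the drive and tape out:

mt -f /dev/nst0 rewind
tar -cf /dev/nst0 /usr/share        # tape file 0
tar -cf /dev/nst0 /usr/share        # tape file 1
mt -f /dev/nst0 rewind
mt -f /dev/nst0 fsf 1               # position to the second file
tar -tf /dev/nst0 > /dev/null       # should list cleanly, no I/O errors
mt -f /dev/nst0 rewind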

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: test recovery

2003-10-29 Thread Jay Lessert
On Wed, Oct 29, 2003 at 04:16:39PM +1100, Barry Haycock wrote:
 
 Something like
 amrestore -p /dev/rmt/0 host_name file_system | ufsrestore -mi

Straight from amrestore(8), more like:

amrestore -p /dev/rmt/0 host_name file_system | ufsrestore ibf 2 -

No dash on ufsrestore options, see ufsrestore(1M).  You *really* don't
want the m flag, do you?  That makes a mess.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: NFS mount as second holding disk [really tar/dump]

2003-10-24 Thread Jay Lessert
On Fri, Oct 24, 2003 at 04:06:40PM +0200, Hans Kinwel wrote:
 Despite all my searching, I couldn't find that message on the list again.
 However, I found something better: the official word from the dump
 authors.
 
 Here it is.  http://dump.sourceforge.net/isdumpdeprecated.html

Excellent, hadn't read this before.  They hit most all the important
points.  They seem to miss:

1)  ACLs.  There are things you can do with ACLs that are painful to
accomplish without them, and GNU tar doesn't support them, period.

2)  Not *all* the world is a Linux box.  Yet.  :-)

The Solaris kernel has a unified VM/VFS interface (actually, I guess
most/all SYSVR4 kernels do, as well as FreeBSD, and perhaps other
BSD's, not knowledgeable there), making it possible in theory and
in practice (mostly) for ufsdump to be aware of file system buffer
activity.

As an aside on the speed issue... on estimate phase, 'ufsdump S' kicks
GNU tar's butt.  It's not obvious to me exactly *why*; GNU tar is smart
enough to know its output file is /dev/null and not bother reading
data blocks, but on my particular big file systems with lots of files,
three estimate passes take literally hours longer with GNU tar compared
to ufsdump.
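
You can see the difference for yourself with something like this
(mount point and gtar path are illustrative):

# ufsdump's estimate: prints a byte count, no file data is read
time ufsdump 0S /export/big

# GNU tar's equivalent walk of the same tree
time /usr/local/bin/gtar --create --file /dev/null \
    --directory /export/big --one-file-system .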

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: NFS mount as second holding disk

2003-10-24 Thread Jay Lessert
On Fri, Oct 24, 2003 at 10:50:17AM -0400, Gene Heskett wrote:
 major headache.  Or one can make a snapshot, but would not that 
 snapshot take up as much space as the original?

No.  Solaris fssnap and Linux LVM snapshots are copy-on-write.  The
physical manifestation of the snapshot is a chunk of disk somewhere
that is only touched when the original file system is modified.

 And how long would 
 it take to do a snapshot on a 40gig filesystem?

It is essentially instant.
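
For the Solaris case the whole dance is a couple of commands; a minimal
sketch (mount point and backing-store location are illustrative):

# create the snapshot; fssnap prints the snapshot device, e.g. /dev/fssnap/0
fssnap -F ufs -o backing-store=/var/tmp /export/home

# back up the snapshot's raw device instead of the live file system
ufsdump 0f /dev/rmt/0n /dev/rfssnap/0

# delete the snapshot when the backup is done
fssnap -d /export/home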

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: NFS mount as second holding disk

2003-10-23 Thread Jay Lessert
On Thu, Oct 23, 2003 at 08:32:31AM -0500, Dan Willis wrote:
 Has anyone successfully used an NFS mount as a secondary holding disk?

I currently am.  Actually it's the *only* holdingdisk. :-)

 Can backups still be run through dump or should they all be tar going
 this route?

Makes absolutely no difference.  It's just holdingdisk, ordinary file
open/write/read, nothing special from a functionality POV.
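
For reference, the amanda.conf side is just the usual holdingdisk block
pointed at the NFS mount; the path and sizes here are illustrative:

holdingdisk hd1 {
    comment "NFS-mounted holding disk"
    directory "/mnt/nfshold/amanda"
    use -2 Gb           # leave 2 GB free on the NFS volume
    chunksize 1 Gb
}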

 Or is this just not advisable at all?

As long as you've comprehended the bandwidth costs, there is no
particular problem.  In my case, I'm taking about a 25% backup window
elapsed time hit, but a cheap 200GB IDE disk in a castoff Linux box is
freeing up six very expensive Solaris fibre-channel disks for other
purposes.  (As soon as I get around to acquiring GB-E and HVD SCSI
cards for the linux box, *it* becomes the new Amanda server and the
25% hit goes away...)

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: NFS mount as second holding disk

2003-10-23 Thread Jay Lessert
On Thu, Oct 23, 2003 at 07:53:32PM -0400, Gene Heskett wrote:
 On Thursday 23 October 2003 09:32, Dan Willis wrote:
 Hello. I am still experimenting with my Amanda setup. I have
[clip]
 Generally speaking, dump is not the preferred utility for use with 
 amanda.  We seem to have gradually come to prefer tar, in any version 
 1.13-19 or higher.

Gene, though that is the conventional wisdom for Linux ext2/ext3
because of kernel buffer inconsistency issues, I don't think Dan
mentioned his OS; for some OS's, like Solaris, (ufs)dump is arguably
the preferred solution unless you require excludes or subdirectories.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: multiple ethernet cards

2003-10-09 Thread Jay Lessert
On Thu, Oct 09, 2003 at 03:51:40PM -0400, Jonathan Swaby wrote:
 Is it possible to run two amanda jobs at the same time on different
 ethernet cards.

Sure.

The "two amanda jobs" part is just having two configurations with two
disklists and running them in parallel:

01  23  *   *   0-5 /home/amanda/bin/config0
01  23  *   *   0-5 /home/amanda/bin/config1
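
(The /home/amanda/bin/config0 and config1 entries are just whatever
wrapper you like; a minimal sketch, with an assumed install path:)

#!/bin/sh
# /home/amanda/bin/config0 (sketch): kick off amdump for one configuration
exec /usr/local/sbin/amdump config0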

The "two ethernet cards" part is a little trickier, and is properly a
network administration question, not an Amanda question.  :-)

If:
You have two routed subnets,

The amanda server is dual-homed (i.e., has an interface, and a hostname, on each subnet),

All DLEs in config0 are on subnet 0, all DLEs in config1 are
on subnet 1,

...then it's trivial and just works.

If you're one big switched LAN, and your switches are VLAN-capable, and
you have control over the VLAN organization, you can do essentially the
same thing.

If you're one big switched LAN, and you can't do VLAN, then you try to
find out if your server and switch are capable of trunking.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Irix 6.5 GNU tar dump speed

2003-10-08 Thread Jay Lessert
On Wed, Oct 08, 2003 at 07:36:29PM +0200, Alexander Jolk wrote:
 ethernet.  I'm getting dump rates around 2MB/s from Origin200 servers
 running Irix 6.5, which makes my backups take all day.
 
 I have already disabled client compression and enabled server
 compression.  That gave me a boost from 800KB/s to 2MB/s, so I don't
 believe I'm saturating the network.

Make sure you're doing 'compress server fast', not 'best' (--fast is
the default, so probably not your problem; --best is at least 4X
slower).

I find that a 2.4GHz P4 will 'gzip --fast' a data stream at
10-12MB/s, with no other load on the box.

Run the same tar that amanda runs by hand, standalone on the Origin200
box and see what you get.  You can get the command line from
the sendbackup debug files in /tmp/amanda on the Amanda client, assuming they are enabled.
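
If the debug files aren't there, something like this gives you a
ballpark standalone rate; the gtar path and directory are illustrative,
and the dd on the end is there so tar actually reads the data (with
--file /dev/null it is smart enough to skip the reads):

time /usr/local/bin/gtar --create --file - --directory /data/project \
    --one-file-system --sparse . | dd of=/dev/null bs=32k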

 Do any of you have ideas how I can get a significant speed-up here?  I
 need a factor of 5 at least, the more the better, and I can't believe
 Irix's xfs is that slow.  For comparison, a 2.4GHz Xeon running Linux
 with ext3 gives me 40MB/s.

Is that the same file system content as the Origin200 box?  In my
experience, 100GB comprising 100 1GB files tars MUCH faster than
100GB comprising 10M 10KB files, and 4MB/s would be on the low end
of possible for the latter (depending on the disk hardware, etc.).

Next you make sure it's not just a network problem (duplex mismatch,
etc.) by pushing a single big file with rcp/ftp.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Why a difference in the sizes of the original and the restored directories?

2003-09-22 Thread Jay Lessert
On Mon, Sep 22, 2003 at 04:13:27PM -0300, Bruno Negrão wrote:
 
 I fully backed up the /etc directory of one of my servers and restored in the
 /root/etc directory. These two directories differ each other on its sizes, where
 the original /etc directory is slightly bigger than its backup.

 Is this normal?

du is going to include the size of the directories themselves, so
a small size difference (with the original larger) is not surprising,
even if there is no actual data change in the original:

% cat dirtest
#!/bin/sh
mkdir foo
/bin/du -k foo
num=1
while [ $num -lt 10000 ] ; do
    echo $num > foo/file.$num
    num=`expr $num + 1`
done
/bin/du -k foo
rm foo/*
/bin/du -k foo

% ./dirtest
1   foo
10207   foo
208 foo

See?

However, /etc does contain dynamic files (mtab, ntp/drift,
mail/statistics), so the difference could be quite real.

diff -r would tell you.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: DAMN! AN AMRECOVER DELETED ALL MY /ROOT CONTENTS!!!

2003-09-22 Thread Jay Lessert
On Mon, Sep 22, 2003 at 05:38:18PM -0300, Bruno Negrão wrote:
 Paul, just to be precise, when I selected the files to be backed up, I
 issued an add * command. Since the * symbol doesn´t select any file
 beginning with an ., I didn´t selected the . directory! And if the *
 of amrecover includes files begining with a dot, it shouldn´t because it
 isn´t the standard!

Here's what's going on.

Doesn't have anything to do with what you select, or how.  It is simply
how amrecover calls GNU tar.

amrecover always(?) calls GNU tar with '-xpGvf'.  From the GNU tar
manual:

--incremental (-G) in conjunction with --extract (--get, -x) causes
tar to read the lists of directory contents previously stored in
the archive, delete files in the file system that did not exist in
their directories when the archive was created, and then extract
the files in the archive.

So Paul is stretching it a bit when he says "it's tar, not amanda",
because it is amanda that throws the -G flag.  This is *exactly* the
behavior you must have to do an accurate full restore, of course,
which is why it is there.
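
If you ever want a partial restore *without* that delete behavior, you
can always bypass amrecover and drive GNU tar yourself; a sketch, with
the device, host, disk, and path as illustrative stand-ins:

mt -f /dev/rmt/0bn rewind
amrestore -p /dev/rmt/0bn somehost /home | gtar -xpvf - ./some/dir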

 Also, it was amanda that called tar this way, so I think it's amanda's fault.
 See, I couldn't specify anywhere what tar options I wanted.

recover-src/extract_list.c (only semi-joking).

One of the dazzling beauties of amanda is that you *can* tweak and tune
it if you need to.  I admin both amanda and NT Backup Exec (for an
Exchange server), and I would KILL to get my hands on the source to
change some of the truly brain-damaged monkey-business that BE pulls.

 If amanda is a backup tool it shouldn't act as a destruction tool at any
 time! I think what happened to me can happen to anyone and this is a big
 risk, don't you think?

You can always shoot yourself in the foot with *any* restore tool by
running in the wrong place at the wrong time.  If you think about it
for awhile, you realize that it is impossible to make this operation
risk-free.

 I sincerely think that amanda's developers must find a way to completely avoid
 this risk.

I do not think that is possible.  You can kill yourself just as
completely with an over-write as a remove, for example.

It *might* be useful to modify amrecover and its man page to be
a little more complete and self-consistent:

1)  amrecover never calls *dump with the r flag, and always calls
GNU tar with the -G flag.

It might be useful if this was at least documented in the man page,
so that one knows when one must use amrestore instead of
amrecover.

2)  It would certainly be possible to add user flags to amrecover to
control dump r and tar -G; I don't have any suggestions on how
to do this in a way that is coherent and understandable, though.
:-)

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Copying from one tape to another tape

2003-09-19 Thread Jay Lessert
On Thu, Sep 18, 2003 at 03:54:27PM -0700, Joshua D. Bello wrote:
 I am attempting to copy from one tape to another using dd, and getting
 the following results:
 
 [15:37:[EMAIL PROTECTED]:/local0/tmp# dd bs=32k if=/dev/rsa1 of=/dev/rsa0
 1+0 records in
 1+0 records out
 32768 bytes transferred in 1.628442 secs (20122 bytes/sec)
 
 The return is instantaneous, and obviously this copy is not working.  I
 no longer have the amanda logs or index files for this tape.

You're really close!  Amanda stores multiple files on the tape; so far
you've only copied the first file (the tape label).  Assuming that
your no-rewind devices are /dev/nrsa1 and /dev/nrsa0, something like
this should work better:

#!/bin/sh
mt -f /dev/nrsa1 rewind
mt -f /dev/nrsa0 rewind
filenum=0
while true ; do
    echo "copying tape file number $filenum"
    dd bs=32k if=/dev/nrsa1 of=/dev/nrsa0 || exit
    filenum=`expr $filenum + 1`
done

I assume dd will return fail status when /dev/nrsa1 hits EOM, but
I've not tested that; keep an eye on it.  IIRC, your total number
of files should be #DLEs + 2.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: configuration question

2003-09-10 Thread Jay Lessert
On Wed, Sep 10, 2003 at 10:24:24AM -0700, bao wrote:
 But for networks where restoring occurs once in a while, tape management 
 is a tedious task.

All the more reason to let amanda do it for you.

 I'm just an intern,
 and the company's plan is to have a set-up that is easy enough that anyone
 can do it.

It's hard to see how to make it much easier than typing 'amrecover'
and grabbing the tapes (or files) you're told to use.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: dumpcycle question

2003-09-04 Thread Jay Lessert
On Thu, Sep 04, 2003 at 01:16:20PM -0400, Charlie Wiseman wrote:
 doing disk backups.  So, we want to take a level 0 (with 
 Amanda) of all of our systems twice a week, say Mondays 
 and Thursdays, and no backups in between.  Is this possible 
 with Amanda?

Sure.

 How exactly does it decide when to do the dump
 if I specified dumpcycle=1 week and runspercycle=2?

If this is *always* full, *never* any incrementals, then dumpcycle
loses its normal meaning and you use the special value 'dumpcycle 0'.

Personally, I set 'runspercycle 1' also, but I'm not sure that makes
any difference.
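
A sketch of the relevant pieces, with the config name, tape count, and
cron times as illustrative assumptions:

# amanda.conf: full-only configuration
dumpcycle 0             # every run is a level 0
runspercycle 1
tapecycle 9 tapes       # however many tapes you actually rotate

# amanda user's crontab: run Monday and Thursday nights
45 0 * * 1,4 /usr/local/sbin/amdump Weekly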

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: maybe this is a dumb question

2003-08-27 Thread Jay Lessert
On Wed, Aug 27, 2003 at 12:54:12PM -0500, Chris Barnes wrote:
 Jay Lessert [EMAIL PROTECTED] wrote:
  On Wed, Aug 27, 2003 at 01:33:01PM -0400, Jeremy L. Mordkoff wrote:
  My policy is to never restore files in place.
 
  I agree that is a good practice (doesn't prevent Chris' student's
  proposed exploit, though).
 
 Actually, I think it might.

Chris,

I don't remember your exact example, but not in all cases (unless I'm
missing something obvious...):

#!/bin/sh
cd /home/joebob/src
ln -s /bin
sleep 86400
rm bin
mkdir bin
cp -p /home/joebob/bin/my_ls bin/ls
sleep 86400
rm -r *

mail -s "restore request" [EMAIL PROTECTED] << 'EOMESSAGE'
Dear helpful admin,

I accidentally did an 'rm -r *' on my src directory this morning:

/home/joebob/src

Could you please restore it?  Thanks!

-Joebob-
EOMESSAGE

That does it, right?  Doesn't matter what file system or what host
the amrecover is run on, I've got my own /bin/ls on that box when it's done
(subject to my qualifiers re: OS and program earlier in the thread).

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: maybe this is a dumb question

2003-08-26 Thread Jay Lessert
On Tue, Aug 26, 2003 at 10:34:49AM -0500, Chris Barnes wrote:
 The concern is that when a restore is run, the softlink to the /usr/bin
 directory will be recreated, then the file will be restored into that
 directory, overwriting the file that is supposed to be there (ie.
 creating a security issue).
 
 1) Is this possible, or does Amanda already do something to prevent
 this?

Chris,

Give your student worker a cookie (or a beer if they're old enough).
Though this isn't a new exploit technique, it sure looks to me like if
one:

- Uses 'program DUMP'
- Uses amrecover

Then your proposed exploit would work.  extract_files_child()
in extract_list.c just calls 'restore x', and I just tested that
ufsrestore (Solaris) will behave exactly as you describe.

If instead you run:

amrestore | ufsrestore r

you're safe, though this is not so convenient for partial
restores.  :-)

I did not test from inside amrecover; if there is deep magic there
I am missing, I'd like to hear about it.  From an Amanda point of
view, this is an issue with 'program', not with Amanda, of course.

I did not test 'tar -xpG' (that's how amrecover calls GNU tar).

 2) If it is possbile, are there any security considerations we need to
 take into consideration when running backups or restore jobs?

Yes.  :-)

I'm *really* glad I don't admin a student or ISP environment!
If I did, I would tripwire everything, I guess.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: amrestore error.

2003-08-21 Thread Jay Lessert
On Thu, Aug 21, 2003 at 03:28:24PM +0100, Keith Foster wrote:
 I'm trying to do a remote recovery of a Solaris server, I have booted to

 rsh -n -l user server /usr/local/sbin/amrestore -p /dev/rmt/0bn server
 name / | ufsrestore rvbfd 2 - for the level 0 

 I then load the incremental tape and run the same command I get:

 Dump date: Tue Aug 19 18:09:34 2003
 Dumped from: Mon May 19 05:32:04 2003
 Level 1 dump of / on server:/dev/md/dsk/d0

 I don't understand where it gets the Dumped from date, and what is meant

Keith,

ufsdump got it from an entry in /etc/dumpdates that looked like this:

/dev/md/dsk/d0 0 Mon May 19 05:32:04 2003

The obvious (but not necessarily correct) answer is that you managed to
run the full (at least) without the amanda 'record yes' set.  Does
/etc/dumpdates match your amanda records exactly?

ufsdump stores the "Dump date" and "Dumped from" on the tape.  "Dumped
from" is the epoch for level 0, and read from /etc/dumpdates for
level 1,2,3...

When you run 'ufsrestore r', as you ufsrestore each tape in order,
it leaves the level/"Dump date"/"Dumped from" information behind in
./restoresymtable.

Since 'ufsrestore r' *removes* files as well as adding them, it is
VERY careful about all the "Dump date" and "Dumped from" information
matching up EXACTLY, to make you:

1)  Run the tapes in the right order, not 0, 1, 3, 2.
2)  Run the correct set of tapes.

If the dates don't match up, because of /etc/dumpdates problems
for example, you cannot run 'ufsrestore r'.

If you are *sure* your tapes are all proper, you can run 'ufsrestore x .'
instead.  You will not get file removal.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: syncing amanda's non-standard schedule

2003-08-15 Thread Jay Lessert
On Fri, Aug 15, 2003 at 06:47:44PM +, Alexander Shenkin wrote:
 of week 2's backup is labelled wednesday-2.  If I forget to replace the 
 tape one day, tuesday-2 for example, I just edit the tapelist and put 
 tuesday-2 at the top, so amanda expects wednesday-2 as the next tape on 
 wednesday.  no problem.

But what did you do with Tuesday's holdingdisk data?

One amanda-way-to-do-it is:

1)  Forget to load tape.

2)  Amanda writes backups to holdingdisk

3)  You realize the next day you forgot to load tape, you
amflush to the tape you should have used, then load
the next tape.

Then you've kept to your tidy little self-imposed tape schedule, and
even better, you don't have blank/obsolete tapes sitting on the shelf
among the good tapes.  (Don't feel bad; I too like to keep to my own
tidy little self-imposed tape schedule, even though I don't *have* to.)

 This was all working great for a few weeks -- four actually.  but now i'm 
 running into problems.  Amanda won't overwrite used tapes until tapecycle 
 (= 20 in my case) tapes have been written to.  And, since I've skipped a 
 couple tapes, amanda doesn't want to write to monday-1, or any other used 
 tape for that matter.

As you've pointed out, 'tapecycle 20' means absolutely, positively,
NEVER over-write a tape until you've written 20 tapes.  Amanda is doing
*exactly* what you've ordered it to do.

 Is there a way I can get around this?

Just drop tapecycle to 15.  Then you can have as many as 5
blank/obsolete tapes scattered through the rotation.  There is
absolutely nothing preventing you from having tapecycle < length of
tapelist.
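
In amanda.conf terms that's just:

# 20 labeled tapes in the closet, but only 15 required between re-uses
tapecycle 15 tapes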

This won't work the way you want if you have a changer; amanda will
tend to rummage through the changer looking for an optimum
(oldest/blank) tape to use.  But IIRC, with chg-manual, it'll use what
you give it as long as you don't violate tapecycle.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Lengthy Estimate Times

2003-08-15 Thread Jay Lessert
On Fri, Aug 15, 2003 at 11:51:07AM -0800, Mike Tibor wrote:
 Is it normal to require about 3000 seconds for each of the three dump
 estimates on a 1+ TB Linux ext3 filesystem that's only about 1% full?

If the 1% is 1,000 10MB files, absolutely NOT.

If the 1% is 10,000,000 1KB files, and if you're using GNU tar,
then my experience is that you would be lucky if it was that fast.

(I switched from GNU tar to ufsdump --not an option for you unless
you use a snapshot-- a couple of years ago because of multi-hour
estimate times on file systems with 2-3M files.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: syncing amanda's non-standard schedule

2003-08-15 Thread Jay Lessert
On Fri, Aug 15, 2003 at 08:18:03PM +, Alexander Shenkin wrote:
 I know I'm treading into RTFM territory here, but...
 
 if i establish a holding disk and a tape fails, i'll have to flush to the 
 correct tape, and then insert the correct one after that, right?

You *can* do that, and I *would* do that, but you don't have to.

If you set autoflush, you can just leave the Tuesday tape on the shelf,
slap in the Wednesday tape, and Amanda will put Tuesday's *and*
Wednesday's backups on that tape (space permitting).
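
autoflush is a plain amanda.conf boolean; something like:

# amanda.conf: sweep any leftover holding-disk dumps onto the next run's tape
autoflush yes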

But then that's sliding down the slippery slope away from the
nice tidy little tape rotation, isn't it?  :-)

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: amanda newbie

2003-08-14 Thread Jay Lessert
On Tue, Aug 12, 2003 at 10:32:33AM -0400, Shashi Kanbur wrote:
 I've just run a backup but it wouldn't change tapes automatically. It
 stopped the process after it got to the end of tape1. amflush won't change
 tapes either. I think I have configured amanda.conf correctly to do this.

It looks like you have ('runtapes 4', tpchanger, changerfile, changerdev).

Just a thought: you should be able to do 'tpchanger chg-zd-mtx'
(not the full path), just in case some part of amanda is doing a
pattern-match on the tpchanger value and not expecting a full path.

 I can use chg-zd-mtx to manipulate the robot and amtape as well to load
 tapes from particular slots to the current drive etc. The drive is in
 random mode.

That all sounds good.  There should be debug files for changer
and amanda itself in /tmp/amanda (by default) and there should
also be some signs of distress in logdir/amdump.*.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Run time

2003-08-08 Thread Jay Lessert
On Wed, Aug 06, 2003 at 11:26:31AM -0500, Bruntel, Mitchell L wrote:
 Quick Question:
 
 How long should an amcheck take?  Is it safe to presume that a job
 that has had my chg-zd-mtx running for 41 mins is hung, and I should kill
 it?

Normally takes a few seconds, PLUS as long as it takes for it to
find a suitable tape.

With 81 slots, I suppose it *could* take 41 minutes to search all slots
if there were no suitable tapes.  Probably not.

Check the chg-zd-mtx debug file to see what it thinks it is
doing.

truss -p the chg-zd-mtx process to see what's going on there.

ptree the chg-zd-mtx process to see if there's an mtx child out
in the ozone.
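
Roughly, something like this (the debug-file glob is approximate):

pid=`pgrep -f chg-zd-mtx`
ptree $pid                          # any mtx child hanging off it?
truss -p $pid                       # what system call is it stuck in?
ls -lrt /tmp/amanda/*debug          # then read the newest changer debug file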

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Another MTX Question:

2003-08-07 Thread Jay Lessert
On Wed, Aug 06, 2003 at 02:46:03PM -0700, Jay Lessert wrote:
 On Wed, Aug 06, 2003 at 03:28:27PM -0500, Bruntel, Mitchell L wrote:
  (and next step isn't good either... 
  sh chg-zd-mtx -reset
  Bus Error(coredump)
 
 Reading a config file should help.  In addition, you'll be less likely
 to get surprised down the road if you run the chg-zd-mtx installed
 in libexec.  Your copy in DailySet1 won't be updated by 'make install'.

Thinking about it for another 5 seconds, it is *really* unlikely
that /bin/sh coredumped on Solaris, no matter *what* we handed
it for input.

More likely that mtx coredumped; do a 'file ./core' to find out.

Also, 'chg-zd-mtx -reset' actually degenerates to
'chg-zd-mtx -slot first'.  Once you're reading your config
file, unless you fix your $firstslot, you'll be loading
your cleaning tape, right?  You don't want to do that.

You want 'firstslot=1'.

And if I were you, I would set $lastslot to some reasonable small
value; 7, 10, something like that instead of 81, until you have
everything else working smoothly.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Another MTX Question:

2003-08-07 Thread Jay Lessert
On Wed, Aug 06, 2003 at 03:28:27PM -0500, Bruntel, Mitchell L wrote:
 My chg-zd-mtx config file shows:
 havereader=1 
 BUT 
 Following the instructions in the chg-zd-mtx.config file I am testing, and do :
 * Run this:
 #
 #   .../chg-zd-mtx -info
 #   echo $?  (or echo $status if you use csh/tcsh)
 I do this: 
 sh chg-zd-mtx -info
 and I get 
 1 79 1--missing 4th # here!  

So this tells you for sure that chg-zd-mtx is NOT reading your config
file.  From chg-zd-mtx.sh.in:

###
# Check if a barcode reader is configured or not.  If so, it
# passes the 4th item in the echo back to amtape signifying it
# can search based on barcodes.
###
reader=
if [ $havereader -eq 1 ]; then
    reader=1
fi

if [ $currentslot -lt $firstslot -o $currentslot -gt $lastslot ]; then
    currentslot=$firstslot  # what current will get
fi
set x $slot_list
shift   # get rid of the x
numslots=$#
Exit 0 $currentslot $numslots 1 $reader

From your amanda.conf below, you still have:

 changerfile /usr/local/etc/amanda/DailySet1/chg-zd-mtx.conf

And so unless your chg-zd-mtx config file is named chg-zd-mtx.conf.conf,
it ain't gonna be found.  Please change your changerfile to:

changerfile /usr/local/etc/amanda/DailySet1/chg-zd-mtx

And you'll be much happier.  For example, the chg-zd-mtx comments
say:

# All variables are in changerfile.conf where changerfile is set
# in amanda.conf.  For example, if amanda.conf has:
#
#   changerfile=/etc/amanda/Dailyset1/CHANGER
#
# the variables must be in /etc/amanda/Dailyset1/CHANGER.conf.
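
Once the name is right, the contents are just shell variable
assignments; a minimal sketch (slot numbers are illustrative, see the
firstslot/lastslot and cleaning-tape discussion elsewhere in this
thread):

# /usr/local/etc/amanda/DailySet1/chg-zd-mtx.conf (sketch)
firstslot=1             # keep the cleaning tape in slot 0 out of the rotation
lastslot=10             # start small, grow it once things work
cleanslot=0
havereader=1
driveslot=0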

 (and next step isn't good either... 
 sh chg-zd-mtx -reset
 Bus Error(coredump)

Reading a config file should help.  In addition, you'll be less likely
to get surprised down the road if you run the chg-zd-mtx installed
in libexec.  Your copy in DailySet1 won't be updated by 'make install'.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Hardware Compression

2003-08-04 Thread Jay Lessert
On Mon, Aug 04, 2003 at 10:28:29AM -0500, Bob Zahn wrote:
 I have a couple of Seagate Ultrium 100/200GB tape drives in a Sun/Quantum 
 ATL L25 tape library. When I try to run hardware compression 
 (/dev/rmt/0hbn) on them I still can only fit 100GB of data according to 
 Amanda. I ran amtapetype and got the following:
 
 define tapetype ATL-LTO1 {   
 comment Produced by amtapetype prog (hardware compression on)
 length 104980 mbytes

Bob,

amtapetype feeds uncompressible data so that you get an accurate
picture of what your tape drive will do when presented with default
Amanda output (default is 'compress client fast').

The 105GB you got is testament to the ability of your LTO drive to
sense uncompressible data and auto-bypass HW compression.

If you think about it for awhile, you'll realize that it makes no sense
to run amtapetype with compression on; there is no way that amtapetype
can predict the compressibility of your data today (or a year from
now).

When you choose HW compression, you're signing up to be your own
amtapetype, so to speak.  You can totally ignore filemark, of course,
unless your disklist is literally 100's of entries long.

When I was bringing up my LTO drives (about a year ago), I did
several amdumps to disk (one for each DLE type I had) and ran
fill-the-tape loops like this:

#!/bin/tcsh -f
set count=1
while (1)
    echo transfer $count
    time dd if=/a6/backup/spitfire._a1.20020718.0 of=/dev/rmt/1cn obs=32k || exit
    @ count++
end

to find out how much compression I was really getting (my answer was
average about 2.05X).  I set my length to rather less than that,
180GB.  Just had a 175GB run this weekend:

STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:53
Run Time (hrs:min)        13:09
Dump Time (hrs:min)       11:50      11:50       0:00
Output Size (meg)      175498.8   175498.8        0.0
Original Size (meg)    175498.8   175498.8        0.0
Avg Compressed Size (%)     --         --         --
Filesystems Dumped           11         11          0
Avg Dump Rate (k/s)      4219.9     4219.9         --

Tape Time (hrs:min)        2:49       2:49       0:00
Tape Size (meg)        175499.2   175499.2        0.0
Tape Used (%)              97.5       97.5        0.0
Filesystems Taped            11         11          0
Avg Tp Write Rate (k/s) 17706.4    17706.4         --

Needless to say, if your payload is all satellite image jpegs
(for example), your length will need to be set lower...
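
In amanda.conf terms, the end result of that exercise looks something
like this (numbers taken from the run above; treat them as an example
of the shape, not a recommendation):

define tapetype ATL-LTO1-HW {
    comment "LTO-1, HW compression, ~2X payload measured by hand"
    length 180 gbytes
    filemark 0 kbytes
    speed 17000 kps
}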

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Disk Definition question

2003-08-04 Thread Jay Lessert
On Mon, Aug 04, 2003 at 12:32:12PM -0500, Bruntel, Mitchell L, SOLCM wrote:
 Question:  getting permission denied..but..
 Amanda runs as AMANDA/operator (solaris 2.8)

 DUMP /dev/dsk/c0t0d0s3  0 OPTIONS |;auth=bsd;compress-fast;

 ERROR [could not access /dev/rdsk/c0t0d0s3 (/dev/dsk/c0t0d0s3): Permission denied]

 [EMAIL PROTECTED]: /dev/dsk #  ls -al /dev/dsk/c0t0d0s3 
 
 lrwxrwxrwx   1 root root  46 May 16  2000 /dev/dsk/c0t0d0s3 - 
 ../../devices/[EMAIL PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:d
 [EMAIL PROTECTED]: /dev/dsk #  ls -al ../../devices/[EMAIL PROTECTED],0/[EMAIL 
 PROTECTED],1/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:d
 brw---   1 root operator 153,  3 May 16  2000 ../../devices/[EMAIL 
 PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:d

  ^^

So ufsdump is running user=amanda/group=operator, and the device is *only*
readable by owner (root), so you get Permission denied.  Quite right.

% sudo chmod g+r /devices/[EMAIL PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0:d

Repeat as appropriate for the other devices.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Host Question on tape advancment

2003-08-04 Thread Jay Lessert
On Mon, Aug 04, 2003 at 12:53:28PM -0500, Bruntel, Mitchell L, SOLCM wrote:
 Here;s the question:
 Should amanda be advancing the tape every time it runs amcheck like this?
 in my changer, tapes 1-30 are amanda labeled tapes
 
 (background I have a 80 slot tape changer.) 
 My amcheck Host says:
 Amanda Tape Server Host Check
 -
 Holding disk /dump: 8972115 KB disk space available, that's plenty
 amcheck-server: slot 39: tape_rdlabel: tape open: /dev/rmt/1bn: I/O error
 amcheck-server: fatal slot 40: slot 40 move failed
 ERROR: new tape not found in rack
(expecting a new tape)

Just like amdump, amcheck will check current slot for an optimally
writable tape (based on tapecycle, the content of tapelist, and
what is actually loaded in the changer).

If it doesn't find it in 'slot current', it will roll through the
entire changer looking for it.

In your case, the tape in 'slot current' generated an I/O error.  That
is a really bad sign, probably threw errors in /var/adm/messages, and
is the first thing you should look at.  At best(?), it is a bad tape
that needs to be discarded.  At worst(?), there are cabling and/or
configuration problems and you don't have a reliable R/W channel to
your tape drives yet.

Hmmm.  I notice you say 1-30 are amanda labeled, perhaps chg-scsi is
not clever enough to handle having 81 slots declared in chg-scsi.conf,
but only 30 slots populated, and just blows up on empty slots?  I am
not a chg-scsi user...

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: no slots available error with amanda and chg-zd-mtx

2003-08-01 Thread Jay Lessert
On Fri, Aug 01, 2003 at 11:44:07AM -0400, Woodman, Joel wrote:
 11:21:49 Running: mtx status
 11:21:49 Exit code: 127
  Stderr:
 /usr/lib/amanda/chg-zd-mtx: line 381: mtx: command not found

Easy.

/usr/lib/amanda/chg-zd-mtx is just a shell script that sets $PATH.

chg-zd-mtx is telling you that mtx is not in that path.  For a quick
fix, just copy your mtx binary to libexec.

mtx is searched for at configure time ('egrep mtx config.cache').
Either you installed mtx after you built amanda, or moved it
afterward, or...

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: no slots available error with amanda and chg-zd-mtx

2003-08-01 Thread Jay Lessert
On Fri, Aug 01, 2003 at 09:37:51AM -0700, Jay Lessert wrote:
 On Fri, Aug 01, 2003 at 11:44:07AM -0400, Woodman, Joel wrote:
  11:21:49 Running: mtx status
  11:21:49 Exit code: 127
   Stderr:
  /usr/lib/amanda/chg-zd-mtx: line 381: mtx: command not found
 
 Easy.
 
 /usr/lib/amanda/chg-zd-mtx is just a shell script that sets $PATH.

Oops, followup to my own posting, sigh.  Above is true but irrelevant.

The mtx called from chg-zd-mtx is hardwired at configure time:

% egrep '^MTX' /usr/local/libexec/chg-zd-mtx
MTX=/usr/local/bin/mtx

Edit the definition, or re-configure and rebuild.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: no slots available error with amanda and chg-zd-mtx

2003-08-01 Thread Jay Lessert
On Fri, Aug 01, 2003 at 03:07:50PM -0400, Woodman, Joel wrote:
 # /usr/lib/amanda/chg-zd-mtx -info
 2 2 1
 
 As I understand the documentation, this means that my current slot is 2, I
 have 2 slots and the changer can travel backwards, as noted by 1.

Yup.

 Is it
 normal for mtx to define slots as slots with tapes in them, versus slots
 in total?

Normal for chg-zd-mtx, that is, yes.  It does an 'mtx status' and
keeps a list of occupied slots within the firstslot-lastslot
range.  No sense mucking about with empty slots.

 Also, if I run /usr/sbin/amtape cas-unix show amanda will show the tapes
 in the pod, but won't unload the last tape it examined without a manual
 amtape eject. Is this normal,

Yes.  In general, all(?) amanda tape operations leave the tape in the
drive and un-rewound unless you explicitly command otherwise.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Backup to disk on another machine

2003-07-29 Thread Jay Lessert
[E-mailed and Cc'ed]

On Tue, Jul 29, 2003 at 12:38:06PM -0500, Chris Barnes wrote:
 My question is: how do I do this if the disk is on another computer?
 
 Does earth nfs mount the large disk on moon and simply write to it?

Yes.  Obviously, if you are already saturating earth's network with
backup traffic, the additional taper traffic won't be welcome.

 I assume that this will run on earth as root so that it can have
 permissions to get to all the files, right?

No.  It should be the same user that runs amdump/amandad/etc. and
owns your holdingdisk subdirectories.  By default, amanda.  No root
export necessary.
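
The plumbing is ordinary NFS; Solaris-style syntax shown as a sketch
(paths are made up, and if moon is Linux it's /etc/exports plus
'mount -t nfs' instead):

# on moon: export the big disk, no root= option needed
share -F nfs -o rw=earth /export/bigdisk

# on earth: mount it where amanda.conf's holdingdisk directory points
mount -F nfs moon:/export/bigdisk /amanda/hold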

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: New tape not found in rack, but they are there

2003-07-28 Thread Jay Lessert
On Mon, Jul 28, 2003 at 06:55:12AM -0700, Adam Ardis wrote:
 I'm hoping someone can help me with this, i've looked
 around and can't answer the question myself.  I've got
 two DLT7000 drives directly attached to my Solaris
 server for backups, and they will run normally.  On
 occassion, I'll get a report back like:
 
 *** A TAPE ERROR OCCURRED: [label DS036 or new tape
 not found in rack]. Some dumps may have been left in
 the holding disk. Run amflush again to flush them to
 tape. The next 2 tapes Amanda expects to used are:
 DS036, DS035.
 
 Now, there are two tapes in the drives:
 
 twdev001:/home/amanda$ amtape staging show
 amtape: scanning all 2 slots in tape-changer rack:
 slot 2: date 20030722 label DS026
 slot 1: date 20030722 label DS010

Unless your dumpcycle is < 7 days, you do *NOT* want to over-write
these!

 My questions are why does the backup want to use other
 tapes, and why won't it just use what's in the drives?

Because you have set dumpcycle and tapecycle to tell it not to.

  Can I get Amanda to just use whatever is in teh
 drives and not care about a retention?

Sure, but unless you know what you're doing (and you don't yet), that
is a really bad idea.

 We rotate the
 tapes out daily, since we need to do it by hand
 anyway.  They go offsite for a month and then back
 into rotation, so we don't really follow any rules as
 to which tape to put in when.  The first two tapes in
 tapelist are what's in the drives, DS026 and DS010,

DS026 and DS010 were written 6 days ago (20030722), NOT more than a
month ago.

 but we may not necessarily have the last two, which
 I'm guessing are what Amanda expects, DS035 and DS036.
 
 twdev001:/home/amanda/staging$ more tapelist
 20030722 DS010 reuse
 20030722 DS026 reuse
 20030721 DS029 reuse
 20030718 DS025 reuse
 20030718 DS008 reuse
 ...Continues down 
 20030616 DS009 reuse
 20030610 DS027 reuse
 20030610 DS035 reuse
 20030526 DS036 reuse

You've not said, but I'm guessing that you're running some sort of
full-only configuration.

And I'm guessing you've set tapecycle to exactly the number of tapes
you've labeled.

Amanda will decline to overwrite a tape newer than tapecycle runs
old.  You can set tapecycle to 0 if you want.

Amanda will in any case also decline to overwrite the newest run,
because if you do, and the current run fails, you've lost not one
run, but two, right?

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: New tape not found in rack, but they are there

2003-07-28 Thread Jay Lessert
On Mon, Jul 28, 2003 at 09:58:25AM -0700, Adam Ardis wrote:
 I never want to overwrite, but we're supposed to
 rotate them out daily.

I assume you mean you *do* want to overwrite, but only overwrite old
tapes, for some definition of old.

In that case, Amanda did exactly the right thing on your behalf,
by preventing you from overwriting your *newest* tapes.

 It's possible these weren't
 moved out, but my real question here is that I don't
 want amanda to be dependent on the tapelist.

In the environment you're in (apparently not full-only, dumpcycle 1
week), you really cannot do that.

 Do I
 have to go by what the rotation is listed in
 tape-list,

If your total number of labeled tapes is > tapecycle, you can have
a choice of multiple old tapes, but you still need tapelist.

 or can I continue to put in any tape?  If I
 can't, then i'll try to rearrange the tapes and the
 tape list to be accurate, but since we have operators
 moving the tapes, I wanted them to just put an
 available tape in, rather than a specific tape.

Well, if you're doing incrementals, you have to have your tapes at
least *semi* under control, right?  You can't just feed random tapes
lest you clobber the tape(s) you need to restore a file you
removed two days ago.

 And
 by available I mean one that came back to us offsite
 that can be overwritten.

Ah, *that* we can probably work with.

  Because you have set dumpcycle and tapecycle to tell
  it not to.
 
 Can you be more specific as to how to do this?

Sure, though it would be even better if *you* were more specific!  :-)
But here's a cooked up example along the lines you're talking
about.

Suppose your particulars are:

Do fulls at least 1X/week.

Run amdump 5X/week, 1 tape/run.

All tapes are transported offsite the day after being written.

Keep the last 4 weeks worth of tapes offsite at all times.

Keep about 1 weeks worth of writable tapes onsite at
all times.

dumpcycle 1 week
runspercycle 5
tapecycle 20

You would amlabel at least 25 tapes.  All except the most-recently-
used 20 would be writable.

Each day when the newest tape is dropped off at offsite storage,
the oldest tape is returned.  Any of the onsite tapes can be
used (since #-of-tapes is > tapecycle).

 Ok I understand.  If we fail to remove the tape and
 have it overwrite daily, I will lose the day before's
 data.  And if we don't remove the tape that's full, we
 don't have space to do the next backup.  Any
 suggestions, other than make sure we remove teh tapes
 when they are full? :)

Suggestion above.  BTW, s/full/used/g.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: FQDNs in .amandahosts

2003-07-18 Thread Jay Lessert
On Fri, Jul 18, 2003 at 04:39:49PM -0700, S. Keel wrote:
 Hello everyone, I have a problem with using FQDNs in my .amandahosts file
 on both the backup server and the client machines.  I am planning a backup
 of several machines inside a private network, without any DNS.  So I'm
 concerned that the backup will fail because none of the machines would
 actually resolve.

No problem.  If you've got a synch'ed copy of /etc/hosts on all
amanda server/clients, that should work just fine.
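
A sketch of the two pieces, with made-up names and addresses:

# /etc/hosts, kept identical on the amanda server and every client
192.168.1.10    backupsrv.internal.lan   backupsrv
192.168.1.21    client1.internal.lan     client1

# ~amanda/.amandahosts on each client: let the server's amanda user in
backupsrv.internal.lan   amanda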

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Pre-emptive sanity check Question

2003-07-15 Thread Jay Lessert
On Tue, Jul 15, 2003 at 07:20:12AM -0500, Bruntel, Mitchell L, SOLCM wrote:
 1) The Exabyte folks said the X80 likes to work in ASPI mode, and
that there are no drivers needed for this mode supplied by them.

You can now safely ignore the Exabyte folks, since they've proven
themselves clueless.  :-)  ASPI is an old SCSI API, originated by
Adaptec 10 years ago or so, I think, and only exists in the DOS and
Windows environment.  There is no such thing as an ASPI mode for your
library; it's a non-sequitur.

It would be like saying a text file is in emacs mode or vim mode
because you used emacs or vim to read/write it.

 This leads me to think the best changer I could use is chg-scsi, and
 I have the following questions about that file.  I would guess that
 this is the file that tries to drive the elements in regular aspi mode
 too..

Forget ASPI.  It has absolutely nothing to do with you, your library,
or amanda.  You should be able to make either chg-scsi or chg-zd-mtx
work, whichever you feel like.  Nice to have choices.

 Here's a small snippet from my new chg-scsi file.
 # the device that is used for the tapedrive 1
 startuse 451
 enduse  451
 AND HERE's the question.   This # (451) is what Exabyte calls the
 ELEMENT index assignment for my 80 cartridge library, and came out of
 the users manual

I don't know an X80 from a hole in the ground, but there is no way that
'startuse 451'/'enduse 451' can be correct.  You're telling amanda the
config has one and only one slot, which (even if 451 is a legal slot,
which I doubt) is not what you want.  What does 'mtx status' say (I say
that only because I don't know what the chg-scsi equivalent is)?

 number_configs  3  #(because I have 3 drives.  BOY am I smart!!!) 
 # # configs being set to how many tape drives!

As long as you're aware that a single amanda config only knows how to
use a single tape drive (except for RAIT).

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Pre-emptive sanity check Question

2003-07-15 Thread Jay Lessert
On Tue, Jul 15, 2003 at 01:58:20PM -0500, Bruntel, Mitchell L, SOLCM wrote:
 OK.  Jay makes sense here:  
 1) Jay says: I have choices:  chg-scsi or chg-zd-mtx.
 but I have a third config file in the distribution called chg-scsi-solaris.conf

The files: example/chg-scsi-{hpux,linux,solaris}.conf are just example
chg-scsi config files with OS-specific typical device names inserted;
they do not represent different tpchanger executables.  You'll not find
libexec/chg-scsi-solaris, right?

 The X80 has 80 slots for tapes. (elements)   Tape Drive 1 is element
 451, 2 is 452, 3 is 453,

I'm sure there is an excellent and highly amusing reason for that.  :-)

 and the cleaning cartridge lives in slot 0

 and element 401-405 are the only way to put tapes in/out of unit!

You're apparently intended to initially populate the library through
the little five-tape import/export hole, and then seldom/never touch
the tapes again.  Unless you were planning on rotating tapes off-site,
that's actually a perfectly good way to go, I guess.

Amanda won't do anything with the import/export slots, you'll deal with
that outside amanda somewhere.

With mtx, you would fill the import/export slots, then have a script
do something like:

mtx transfer 401 1
mtx transfer 402 2
mtx transfer 403 3
mtx transfer 404 4
mtx transfer 405 5

 so it should be 'startuse 1', 'enduse 80'

If you're planning on using that many tapes, yes.  Only specify as many
slots as you're actually intending to use in this amanda config.

 Now can someone explain the following:
 a) what is the relationship of mtx to chg-scsi

None.

chg-scsi: is a C program that controls a SCSI tape library and directly
implements the amanda changer interface described in
docs/TAPE.CHANGERS.  Part of the amanda source distribution.

mtx: A C program that controls a SCSI tape library and does NOT
directly implement the amanda changer interface.  It is very useful
to use by hand, however ('mtx load 3 452' would load the tape in
slot 3 to tape drive 452), and a wrapper script can convert it to
amanda.  Source from http://mtx.badtux.net/.

There is another, older mtx, which I have never seen in the wild.

chg-zd-mtx: A sh script wrapper around mtx which converts it to the
amanda changer interface.
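
(To use chg-zd-mtx, the amanda.conf side is just a matter of pointing
tpchanger at the script; the device paths below are placeholders, not
anything from your setup:

    tpchanger "chg-zd-mtx"
    tapedev "/dev/rmt/0n"
    changerdev "/dev/scsi/changer/c1t5d0"
    changerfile "/usr/local/etc/amanda/Daily/changer"
)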

 b) what is the relationship of chg-scsi.conf and the files in the
/usr/local/libexec directory (/usr/local/libexec/chg-scsi)

libexec/chg-scsi is a C program that expects to find a configuration
file somewhere.  chg-scsi.conf might be that file.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Strange errors with Amanda

2003-07-12 Thread Jay Lessert
On Sat, Jul 12, 2003 at 10:36:00AM -0400, Jonathan B. Bayer wrote:
 My problem comes when I try to backup other systems.  amcheck reports no
 errors, but when I do the backup I get the message listed below.
 
 Before I post the entire configuration, I'm wondering if this is simply
 a timeout problem.  Right now I have the timeout set to:
 
   etimeout 3000

I don't think you're running into real estimate timeouts (certainly not
at 3000*#DLE!).

Drop the disklist back to one small entry per client.  /usr, maybe.
That'll totally eliminate any capacity/delay/interaction problems.

Then clear /tmp/amanda on the server and clients and do a run.
Assuming it fails more-or-less the same way, crawl through the
debug files in /tmp/amanda on the server and one of the clients,
as well as the amdump.x and log.date files on the server.  I
think *somewhere* in there is a useful error message.
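
(A quick way to sweep those debug files for obvious complaints -- just
a sketch, adjust the patterns to taste:

    egrep -il 'error|denied|refused|timeout' /tmp/amanda/*.debug

then read the hits top to bottom.)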

Since your server treats itself just like any other client, you
have known-good output to compare against.

Even though you're passing amcheck, there are parts of the amanda
network activity that don't get exercised until amdump runs.  It smells
like a Linux iptables problem, but I'm a Solaris guy and know just
about zip on that subject.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: (yet another) question along steps of getting operational...

2003-07-09 Thread Jay Lessert
[Mailed and Cc'ed]

On Wed, Jul 09, 2003 at 01:37:16PM -0500, Bruntel, Mitchell L, SOLCM wrote:
 mtx   1.2.17
 
 BUT I still haven't figured out how to refer to my robot!?...
 
 (duh. running solaris 5.8)
 I dont see a device in /dev/rmt...  (but I see the drives there...)

For Solaris 8 and mtx, you want to try the sgen(7D) driver for your
robot first.

You put appropriate entries in /kernel/drv/sgen.conf (it is commented empty
by default, so you *have* no devices for it yet), do an 'add_drv sgen' and
you should get device entries in /dev/scsi/changer.

The sgen.conf device-type for library robots is changer.

In the mtx source distribution, see contrib/config_sgen_solaris.sh.  Don't
run the script (in any case, there's a CVS command in it that will fail), use
it as an example/template.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: (yet another) question along steps of getting operational...

2003-07-09 Thread Jay Lessert
On Wed, Jul 09, 2003 at 03:31:13PM -0400, Jon LaBadie wrote:
 On Wed, Jul 09, 2003 at 11:59:23AM -0700, Jay Lessert wrote:
  You put appropriate entries in /kernel/drv/sgen.conf (it is commented empty
  by default, so you *have* no devices for it yet), do an 'add_drv sgen' and
  you should get device entries in /dev/scsi/changer.
  
  The sgen.conf device-type for library robots is changer.
 
 Mine showed up in /dev/scsi/sequential rather than changer.

That might be, and I'm sure it works, but sequential is sgen-ese for
tape drive.  I guess your sgen.conf device-type-config-list has
sequential instead of or in addition to changer?

Even then, it's sort of strange if the robot is returning sequential
(0x01) for an inquiry type ID, instead of changer (0x08), but...

Anyway, changer is sgen-ese for media library robot.

 You might also edit st.conf to examine other lun's besides 0 (only 0
 is scanned in the default file).

Yup, in any case, you can't do diddly if you don't know the target and
LUN setting for each device on the bus.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Is it possible to backup several servers to one tape for 3 days?

2003-07-03 Thread Jay Lessert
On Thu, Jul 03, 2003 at 01:02:03PM -0700, Anwar Ruff wrote:
 Is it possible to backup several servers to one tape
 in a consecutive manner over several day (e.g., 5
 days)?

Not the way you're thinking.

One easy thing to do in amanda to install a large cheap holdingdisk,
set reserve to some low value, leave the tape out of the drive,
run backup for 5 days, then amflush them all to tape at once.

If you mirror or RAID the holdingdisk, it's probably as reliable
as leaving the tape in the drive and concatenating (which amanda
will not do).
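
The amanda.conf side of that looks roughly like this (directory and
size are made-up examples):

    holdingdisk hd1 {
        directory "/dumps/amanda"
        use 120000 Mb           # room for several runs' worth of dumps
    }
    reserve 10  # low value: full dumps may still land on the
                # holdingdisk when no tape is available

then amflush when you finally load a tape.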

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Diferential Backup [What does differential backup mean]

2003-06-25 Thread Jay Lessert
[This is a meta-answer, because I've heard people say differential
 backup, but I had no darn idea what it meant, and I figure there
 might be others out there in the same boat!]

On Wed, Jun 25, 2003 at 08:30:41AM -0300, Roberto Samarone Araujo (RSA) wrote:
   It it possible to use amanda to do diferential backups ? Are there any
 specific configuration ?

AFAICT, there are two meanings to differential when you're talking
about backups.

1)  Databases.  For databases (e.g., Oracle, SQL) you are often backing
up a small number of very large files (and sometimes not files at
all, but raw partitions).  In this context, differential means
backing up only those blocks within a large file (or raw partition)
that have changed.

This is obviously the realm of a special-purpose backup program,
Amanda itself definitely doesn't know anything about it.

2)  Windows file systems.  Files on (all?) Windows file systems have an
archive bit.  The intent is that this bit is set when a file is
modified.  The intent is that a full backup clears all the archive
bits.  In this context:

An incremental backup gets files that have changed since the full
or previous incremental, and re-clears the archive bits.

A differential backup gets files that have changed since the full,
and leaves the archive bits alone.

So this is sort of a round-about way of getting the same
differences in behavior you get by controlling the level in dump.
Strictly speaking, Amanda doesn't know anything about this either.
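
In dump-level terms, the two behaviors fall out of how you sequence
levels after a full:

    level 0, then 1, 1, 1, ...   differential-ish (each run: everything since the full)
    level 0, then 1, 2, 3, ...   incremental-ish  (each run: everything since the previous run)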

Since you're apparently a Windows guy, I'm guessing you mean windows
differential, not database differential.  Is that correct?  And then
the answer to your question depends on what type of file system you're
backing up and what underlying backup engine (e.g., tar, dump,
smbclient) you're using.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: HP DLT1e tapetype [Solaris density/compression defaults]

2003-06-20 Thread Jay Lessert
On Thu, Jun 19, 2003 at 11:47:53PM -0400, Jon LaBadie wrote:
  I'm not sure what documentation you're referring to, but for every
  compression-capable drive I've ever used (DDS2, DDS3, DLT-4000,
  DLT-7000, DLT-8000, LTO-1) both the Solaris factory st driver and the
  tape-vendor-supplied st.conf default to the highest-possible density
  and compression factor.
 
 Actually Jay the default can be specified in the st.conf file to match
 any of the l, m, h, or u/c entries and for 15 of the 49 entries
 in my file the default does not correspond to the u/c mode.

Jon, you are exactly right, and I was... not.  :-)

In my (meager) defense, I did also mention tape-vendor-supplied
st.conf default, and for example, the Quantum DLT/Solaris
recommendation is:

From:
http://www.quantum.com/AM/support/DLTtapeDrivesMedia/TechnicalDocuments/Default.htm

Where the Solaris install PDF does explain all the funky mode bits:

 If you are installing a DLT4000 add the following line:
  DLTtape data = 1, 0x38, 0, 0x8639, 4, 0x17, 0x18, 0x82, 0x83, 3;
 If you are installing a DLT7000 add the following line:
 DLTtape data = 1, 0x38, 0, 0x8639, 4, 0x82, 0x83, 0x84, 0x85, 3;
 If you are installing a DLT8000 add the following line:
 DLTtape data = 1, 0x38, 0, 0x8639, 4, 0x84, 0x85, 0x88, 0x89, 3;
 If you are installing a Super DLTtape add the following line:
 DLTtape data = 1, 0x38, 0, 0x8639, 4, 0x90, 0x91, 0x90, 0x91, 3;

This is what I'm currently using for DLT, and it does default to max
density/compression, FWIW.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: HP DLT1e tapetype [Solaris density/compression defaults]

2003-06-20 Thread Jay Lessert
On Fri, Jun 20, 2003 at 01:28:04PM -0400, Jon LaBadie wrote:
 On Fri, Jun 20, 2003 at 10:18:39AM -0700, Jay Lessert wrote:
  From:
  http://www.quantum.com/AM/support/DLTtapeDrivesMedia/TechnicalDocuments/Default.htm
  
  Where the Solaris install PDF does explain all the funky mode bits,
 
 Amazing.  I guess Quantum thinks no one will ever have more than one
 type of DLT drive installed at the same time.  Seems like it would be
 more efficient to put all of them in at one time and give each a different
 DLTtape data label.

Yeah, I'm not complaining too much.  The referenced PDF is the best
Solaris tape drive install doc I've ever seen; it actually explicitly
defines all the density codes, so you know exactly what you're
getting.  I don't mind editing a few characters in the data property
names...

-Jay-


Re: this is one I haven't seen [Solaris sgen/st confusion]

2003-06-20 Thread Jay Lessert
On Fri, Jun 20, 2003 at 05:23:15PM -0500, FM Taylor wrote:
 After an upgrade all of my /dev/rmt/devices are now 
 /dev/scsi/sequential/devices.  That in itself was no problem.

Oh, yes it is!

/dev/rmt/0n (or whatever) *better* point to st(7D) device entries, NOT
sgen(7D) sequential entries.

Someone has recently fiddled with /kernel/drv/sgen.conf, put
some sequential entries in, then managed to flush the pre-existing
/dev/rmt entries somehow.  It took some work.  And now you
get to undo it!

 However, I am now getting this strange error, and I don't know how to 
 fix it.
 
 tape_rdlabel: tape open: 0: No such file or directory

The sgen(7D) driver is *not* a superset of st(7D).  It is nice to
have sgen devices around for tapeinfo, but you cannot use them
for dd, tar, cpio, ufsdump, etc.  Probably not even for mt.

Pull the sequential entries out of sgen.conf, 'rem_drv sgen', and
'add_drv sgen' (to put your changer entries back).

Now (I think, haven't done this lately), you just 'devfsadm -C -v -c tape'.

Good luck!

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: LTO1 tapetype

2003-06-19 Thread Jay Lessert
On Thu, Jun 19, 2003 at 11:33:59PM +0200, Paul Bijnens wrote:
 However, the hardware compression algorithm seems to be a very
 good one: the measured capacity is still about 100 GByte.
 This means that the algorithm does not fall into the known pitfall
 of blindly imposing it's compression engine to an uncompressable
 data stream.  (gnuzip does this too, compress does not.)

That is correct, and should be the case for all LTO Ultrium drives,
they've got big write caches and are supposed to decide block by block
(I don't know how big a block is) whether to compress that block or
not.

 If this is really the case, then, maybe it's not necessary
 to disable hardware compression at all.  And maybe, there isn't
 even a possibility to do it (just as there is no setting to
 tune your error correcting bits).

You can definitely disable compression.  For example, in Solaris land,
with an up-to-date factory st driver (or with the HP st.conf), the
l,h,m devices all have compression disabled, and I can make my
drives slow and small (I use HW compression :-) any time I want.
(The default, c, and u devices have compression on).

The 13MB/s tape rate seen here is pretty normal.  The standard
datasheet LTO-1 native sustained spec is 15MB/s, and I routinely get
20MB/s over a 50GB amanda holdingdisk image with SW compression off/HW
compression on.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472
 


Re: HP DLT1e tapetype

2003-06-19 Thread Jay Lessert
On Thursday 19 June 2003 14:09, Ean Kingston wrote:
I'm using Solaris and, according to the documentation, it should not
 be using hardware compression unless I specify the 'compress'
 device (/dev/rmt/0cn) as opposed to the one I did use
 (/dev/rmt/0n).

I'm not sure what documentation you're referring to, but for every
compression-capable drive I've ever used (DDS2, DDS3, DLT-4000,
DLT-7000, DLT-8000, LTO-1) both the Solaris factory st driver and the
tape-vendor-supplied st.conf default to the highest-possible density
and compression factor.

That is, I would be *very* surprised if the 0n device for your DLT1
drive doesn't do 80GB (compressed) mode.

I've not used DLT1 myself, but if you show us the st.conf you're
using we can confirm/deny...

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: NEC autoloaders

2003-06-10 Thread Jay Lessert
On Tue, Jun 10, 2003 at 04:38:09PM +0100, Tom Brown wrote:
  Has anyone had any experience using LTO autoloaders from NEC and amanda?
 
  Specifically this one
 
  http://www.nec-cebit.com/products/LL0101H.asp
 
 if nobisy has had any joy with these autoloaders can anyone recommend a LTO
 autoloader to work with amanda that has about 8 tapes?

Chances are nobody had ever heard of it before, I hadn't.

As far as changers go, if it works with mtx, it will work with Amanda
(through chg-zd-mtx).  Check out the compatibility list at
http://mtx.badtux.org/.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Holding disks and the disk output driver

2003-06-10 Thread Jay Lessert
On Tue, Jun 10, 2003 at 11:03:37AM -0700, Ted Cabeen wrote:
 Joshua Baker-LePain [EMAIL PROTECTED] writes:
  On Tue, 10 Jun 2003 at 10:36am, Ted Cabeen wrote
  If you're using the disk output driver to run backups to a large disk
  array, is there any reason to use a holding disk?
  
  If you have lots of clients *and* you have the disk space, yes, as it will 
  increase amanda's efficiency (through parallelism).
 
 Ahh.  So without a holding disk, amanda will only dump one filesystem
 at a time.  Got it.  Thanks.
 
 How many clients is lots?

*One* client can be lots, if it has >1 large DLE, and has the
CPU/disk/network resources to run >1 dump simultaneously without
getting in its own way.

Given sufficient holdingdisk, you use maxdumps and spindle numbers to
control the behavior.
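
Something like this, for example (dumptype name, host, paths and
spindle numbers are all invented):

    define dumptype comp-user-tar-2 {
        comp-user-tar
        maxdumps 2      # allow up to 2 dumpers on this client at once
    }

    # disklist -- last column is the spindle number; DLEs sharing a
    # spindle are never dumped at the same time
    bigclient  /export/home1  comp-user-tar-2  1
    bigclient  /export/home2  comp-user-tar-2  2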

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Few questions.

2003-06-03 Thread Jay Lessert
On Mon, Jun 02, 2003 at 10:13:49AM -0500, Brendon Colby wrote:
 1. If I have a 16 tape tapecycle, and a dumpcycle of 7, does this mean I 
 effectively have 16 days of backups?

Yes.

 As in, I can do a restore of up to 16 
 days?

Yes.

 2. If I disable hardware compression and use software compression, how would 
 that affect my tapetype definition?
 
 define tapetype DLT8000 {
 comment Quantaum (IBM) DLT-8000
 length 37482 mbytes
 filemark 2362 kbytes
 speed 5482 kps
 }

With SW compression, length should be the native tape capacity (40GB in
the case of a DLT-8000).  37.5GB is close enough for now.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Questions about AMANDA

2003-05-31 Thread Jay Lessert
On Fri, May 30, 2003 at 11:58:28AM -0300, Roberto Samarone Araujo (RSA) wrote:
 Hi,
 
   I have a backup policy and I would like to convert it to Amanda. The
 backup policy is:
 
  One full backup - One time per month (without following
 incremental backups)

Fine.  Separate amanda config with:

dumpcycle 0  # full only
runspercycle 1   # full only
tapecycle 1  # full only
strategy noinc   # full only
index yes# you don't have to, but it's nice
record no# so you don't confuse the other config

  One full backup - One time per week and incrementals backups
 until the next week. For example, if I make a full backup on monday, the
 incrementals backups should be from tuesday until sunday.

You don't do this.  You do:

dumpcycle 1 week # At least one full per week
runspercycle 7   # Run 7X/week
tapecycle 14 # At least two weeks, more is better
index yes
record yes

So you don't know if the full will be on Monday or not, but you know
you will get at least one full/week.  You don't know that all the
other runs will be incremental, though they probably will be
unless amanda decides to move the full day to balance tape usage.
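
Driven from cron with something like this (path and config name are
examples only):

    # run amdump every night at 23:01, 7 nights/week
    01 23 * * * /usr/local/sbin/amdump weekly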

   Another questions:
 
  How can I restore a backup using Amanda ?

Usually, amrecover, sometimes amrestore.

  If one of the clients that have been backuped is down (lose all
 data), How  can I restore a backup on that client ?

Read docs/RESTORE.  :-)

  How can I change the label of a tape ?

amlabel -f

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Frontend , UI for amanda [really, what is Bacula?]

2003-03-31 Thread Jay Lessert
On Mon, Mar 31, 2003 at 04:09:36PM -0800, Gordon Pritchard wrote:
   But I am looking at Bacula now also...

http://www.bacula.org
http://sourceforge.net/projects/bacula

Was not aware of Bacula before, and it looks like Kern Sibbald has done
some cool stuff, and Bacula certainly has some features that would be
nice to have in Amanda, BUT Bacula:

- Mandates MySQL or SQLite as the server engine behind its server
  catalog.  This is not optional.

- Uses its own client backup engine, not *dump/*tar.

- Uses a multiple-interleaved-blocks-from-different-backup-streams
  type tape format.

  (Bacula provides a small, lightweight tape-reader utility to use
   for disaster recovery, though.)

  Kern recommends that nobody actually *use* multiple backup streams
  in production yet, though, until it's better tested.

[Disclaimer: all of the above three points are from reading the docs,
 not from reading the source or asking the author, so I could be
 wrong.]

In an ideal world, three years ago I would have arranged for Kern to
receive a post-hypnotic suggestion to enhance Amanda.  :-)

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: 'insert'-error

2003-03-03 Thread Jay Lessert
[Posted and Cc'ed]

On Mon, Mar 03, 2003 at 03:58:08PM +0200, Sinan KALKAN wrote:
   i have installed and configured amanda-2.4.3 for seagate-travan 
   tapes but i get the following looping error when i try to use 
   any of amanda commands. while getting these messages on the console, 
   'kernel: ide-tape: Reached idetape_chrdev_open' is recorded in 
   /var/log/messages.

   by the way, i can write to and read from the tapes by using cat. 
   so, the problem is not with the tape or the medium, but with amanda.

Not clear yet.  Reading/writing your tapes with cat(1) is a little
unconventional.

Try reading/writing your tapes with GNU tar/dd(1), and manipulating
them with mt(1) (status, fsf, rewind).  If that all works flawlessly,
then maybe you've got an Amanda problem.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Solaris 8 Overland Tape changer config

2003-03-03 Thread Jay Lessert
[Posted and Cc'ed]

On Mon, Mar 03, 2003 at 02:54:11PM -, Nigel Barker wrote:
 I'm using Solaris 8, with an OverlandXB 10 slot changer and a DLT drive.
 Amanda2.4.3
 mtx 1.2.17

 amanda.conf contains :-
 tpchanger chg-zd-mtx
 tapedev /dev/rmt/1n
 changerfile /usr/local/share/amanda/Daily/chg-zd-mtx
 changerdev /dev/scsi/cganger/c1t6do
                      ^
I assume the real amanda.conf doesn't have this typo.

 Trying to follow the chg-zd-mtx instructions, I try :-
 $ ./chg-zd-mtx -info
 none could not determine current slot

You seem to have installed mtx and chg-zd-mtx in /usr/local/share/amanda/Daily,
make sure they are also installed where amanda's $PATH can find them.
Then take the debugging to the next level with:

$ sh -x chg-zd-mtx -info

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: amanda having trouble choosing next tape

2003-03-03 Thread Jay Lessert
[Posted and Cc'ed]

On Mon, Mar 03, 2003 at 09:20:22AM -0800, John Oliver wrote:
 The dumps were flushed to tape Indyme014.
 The next tape Amanda expects to use is: a new tape.
 
 [EMAIL PROTECTED] root]# cat /etc/amanda/DailySet1/tapelist
 20030303 Indyme014 reuse
 20030228 Indyme013 reuse
 20030227 Indyme012 reuse
 20030227 Indyme011 reuse
 20030226 Indyme010 reuse
 20030225 Indyme009 reuse
 20030221 Indyme008 reuse
 20030220 Indyme007 reuse
 20030219 Indyme006 reuse
 20030218 Indyme005 reuse
 20030215 Indyme004 reuse
 20030214 Indyme003 reuse
 20030213 Indyme002 reuse
 20030212 Indyme001 reuse
 [EMAIL PROTECTED] root]#
 
 Why isn't it telling me it wants 001?

Because tapecycle > 14?

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Solaris 8 Overland Tape changer config

2003-03-03 Thread Jay Lessert
On Mon, Mar 03, 2003 at 05:10:45PM -, Nigel Barker wrote:
 $ sh -x chg-zd-mtx -info
 
 Well, it produced a lot more info (attached), but having had a quick look
 through I don't see what's going wrong.

 + echo 17:06:02 Exit (0) - 2 9 1  
 + echo 2 9 1  
 2 9 1 

Looks to me like it worked:
Current slot = 2
Total slots = 9
Can go backwards = 1 (true)

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: question - Cannot access file after backup

2003-02-28 Thread Jay Lessert
On Fri, Feb 28, 2003 at 07:15:47PM -0500, [EMAIL PROTECTED] wrote:
 The command 'amanda  WeeklySet1 find mws02' shown  the following:
 date       host   disk      lv tape or file  file  status
 2003-02-26 mws02  /          0 WEEKLY001       26  OK
 2003-02-26 mws02  /boot      0 WEEKLY001        7  OK
 2003-02-26 mws02  /dev/shm   0 WEEKLY001        1  OK
 
 But, Amanda could not retrieve files from the backup.
 I ran the command amrestore -p /dev/nst0 mws02 /boot | tar  tvf -. 
 I only saw skip ... message until  end of tape.

Are you certain that tape WEEKLY001 was the tape that was in
/dev/nst0 at the time you ran amrestore?

If so, are you certain that the tape was rewound before you ran
amrestore?

If so, what did you see when file# 7 was skipped?

Totally unrelated, but /dev/shm?  Do you really want to do that?
Just curious.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: dumpdates

2003-02-27 Thread Jay Lessert
On Thu, Feb 27, 2003 at 03:39:51PM -0500, Joshua Baker-LePain wrote:
 On Thu, 27 Feb 2003 at 11:52am, bao wrote
  /-- barracuda  /junk lev 0 FAILED [/sbin/dump returned 1]
  |   DUMP: You can't update the dumpdates file when dumping a subdirectory
 
 You can't use dump to backup anything that isn't a filesystem.  You have 
 to use tar here.

Not quite.

On most (all?) dump/ufsdump/xfsdump... you *CAN* backup a subdirectory,
but you *CAN'T* do an incremental of a subdirectory.

If you specify a subdirectory, Solaris ufsdump silently forces
level=0 and ignores the record (u) flag; Linux ext2 dump (I believe)
exits if level > 0 or the u flag is set.  This is what Bao is seeing
above.

Both behaviors are appropriately documented in the man pages.

I've got a weekly forced-full amanda config that does ufsdump on
several different subdirectories.  Works great, which is more
than I can say for the GNU tar attempt that preceded it (yes,
I know GNU tar/Amanda/Solaris works great for everybody else
except me! :-)

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: holding disk and full dumps

2003-02-27 Thread Jay Lessert
[posted and Cc'ed]

On Thu, Feb 27, 2003 at 02:09:33PM -0800, bao wrote:
 Now, for testing, I set the dumpcycle 0 days, runspercycle 0 days, so 
 that every time I call amdump, it will do a full dump.
 In someone's post and even in the amanda doc, it says Amanda will choose 
 when it will do a full or inc. dump due to load balancing.
 
 1. If I want to do a full dump of the entire selected folder every week, 
 and incremental dumps every other days (1 full, 6 inc. per week), does 
 it mean I have to rely on amanda in choosing what to do with load 
 balancing??

You *should* let Amanda do it, but you don't have to.  To let Amanda do
it, just set:

dumpcycle   7 days  # at least 1 full dump/week.
runspercycle7   # run amdump 7 days/week

Assuming you're using 1 tape/run, you should have at least
14 tapes on hands and set:

tapecycle   14

 2. I read somewhere (which I can't find where and can't remember 
 exactly) that for full dumps or for tapeless backup, holding disk should 
 be set to no.

Dunno about tapeless, but that is bad advice for full dumps, IMO.
You want holdingdisk at least as big as the single biggest dump,'
and (preferably) as big as the biggest combined full run you expect
to see.  This lets the actual dump/tar runs complete as quickly
as possible, potentially in parallel if circumstances allow, and
then taper can feed the tape drive at full speed.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Help With Restore

2003-02-26 Thread Jay Lessert
On Wed, Feb 26, 2003 at 02:10:31PM -0600, Rebecca Pakish Crum wrote:
 It's been my understanding that you can only use amrecover for
 individual files,

amrecover is quite happy to recover the top directory (and thereby all
its content).

 and amrestore for entire filesystems. (I could be wrong)

If you're using dump/ufsdump/xfsdump, amrestore is a bit more efficient
for a full restore, because you can 'ufsrestore r' instead of
'ufsrestore i'.

 And in order to use amrecover, you have to have your dumptype
 index parameter set to yes somewhere in your config. Also, you need to
 have amindexd and amidxtaped installed.

Yes.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: amcheck: selfcheck request timed out

2003-02-22 Thread Jay Lessert
[Posted and Cc'ed]

On Sat, Feb 22, 2003 at 08:09:16PM +0100, Carsten Rezny wrote:
 I have installed Amanda 2.4.2p2 on a SuSE 8.0 box. The machine is server
 and client.
 
 When I run amcheck I get the following result 
 ERROR: /dev/nst0: no tape online
(expecting tape maphy-d05 or a new tape)

I assume you understand this and will fix it, right?

 NOTE: skipping tape-writable test
 NOTE: info dir /var/lib/amanda/maphy-daily/curinfo: does not exist
 NOTE: it will be created on the next run
 NOTE: index dir /var/lib/amanda/maphy-daily/index: does not exist

Not a problem, amanda will create on-the-fly assuming write permissions
are OK on /var/lib/amanda.

 
 WARNING: localhost: selfcheck request timed out.  Host down?
 Client check: 1 host checked in 30.006 seconds, 1 problem found

You always hate to see localhost here, so many ways that can go
wrong.

Change the DLE (disklist entry) from localhost to the true hostname,
double check ~amanda/.amandahosts according to docs/INSTALL, and
try again.
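
Something along these lines (the hostname is a stand-in for your real
one):

    # disklist on the server
    suse8box.example.com  /home  comp-user-tar

    # ~amanda/.amandahosts on the client, owned by the amanda user
    # (mode 0600 is the safe choice)
    suse8box.example.com  amanda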

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Problem with compression?

2003-02-21 Thread Jay Lessert
[posted and Cc'ed]

On Fri, Feb 21, 2003 at 11:51:59AM -0800, John Oliver wrote:
 I got this report from amanda.  It looks to me like it's reporting that
 it used 48.9% of the tape (a DLT 4000 20/40GB), but that it ran out of
 space.  

Let's see

   Total   Full  Daily
       
 Output Size (meg)9737.1 9611.4  125.7
 Original Size (meg) 19162.419028.8  133.6
 Avg Compressed Size (%)50.8   50.5   94.1
 Filesystems Dumped   18 17  1   (1:1)
 
 Tape Size (meg)  9738.0 9612.2  125.8
 Tape Used (%)  48.9   48.20.6
 Filesystems Taped18 17  1   (1:1)

This is telling you what actually got on the tape, or on holdingdisk.
But unless I'm a bozo (always a good possibility), your disklist is 19
entries long, correct?

   taper: tape Indyme008 kb 20040128 fm 19 writing file: No space left on  device

taper died after writing 20GB, while writing the 19th (not 18th) DLE.

                               DUMPER STATS                 TAPER STATS
 HOSTNAME  DISK       L  ORIG-KB  OUT-KB  COMP%  MMM:SS   KB/s  MMM:SS   KB/s
 --------- ---------- -- -------- ------- ------ ------- ------ ------- ------
 backup    /dev/hda1  1  FAILED

So the 19th DLE was backup:/dev/hda1, and it was big, 10GB.  Either
the estimate was smaller than reality (uncompressible data, new data,
etc., you can take a look at the sendsize debug file in
backup:/tmp/amanda), and/or amanda could not get a level0 to work.

Since we're seeing 17 fulls and only 1 daily, it smells like you're
dumping a whole bunch of new DLE's on amanda at the same time?  If so,
you'll probably be OK tonight, just don't put any more new DLE's in.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Problem with compression?

2003-02-21 Thread Jay Lessert
On Fri, Feb 21, 2003 at 01:08:10PM -0800, John Oliver wrote:
 Why is it, then, that 17 filesystems compressed to 10GB, but this one

No, 18 filesystems compressed to 10GB, problem was on the 19th, right?

 filesystem isn't being compressed at all, apparently?
 
 [EMAIL PROTECTED] root]# df
 Filesystem   1K-blocks  Used Available Use% Mounted on
 /dev/hda1 72572444  21325164  47560768  31% /
 
 The whole thing is 21GB, and this is supposed to be a Level 1 backup.
 It doesn't seem reasonable to me that 20GB should be written to tape

What was the estimate/plan for backup:/dev/hda1?  Level and size,
see GENERATING SCHEDULE in logdir/amdump.1.

Confirm you've got 'record yes', and that it worked.

See also the 'got result' lines in amdump.1; they'll tell you if there's
any difference between a level0 and level1 estimate.

Confirm your dumptype compress is set where you expect for
backup:/dev/hda1.
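
For example, from the server's log directory (path is a placeholder,
and -A is a GNU grep flag):

    cd /var/lib/amanda/DailySet1        # wherever logdir points
    grep -A 20 'GENERATING SCHEDULE' amdump.1 | grep hda1
    grep 'got result' amdump.1 | grep hda1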

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Problem with compression?

2003-02-21 Thread Jay Lessert
On Fri, Feb 21, 2003 at 02:26:22PM -0800, John Oliver wrote:
  Since we're seeing 18 fulls and 1 daily, it smells like you're dumping
  a whole bunch of new DLE's on amanda at the same time?  If so, you'll
  probably be OK tonight, just don't put any more new DLE's in.
 
 DLE?

disklist entry.

 I assume you mean filesystems to be backed up.

No, amanda backs up DLE's, each of which may or may not comprise an
entire file system.

 No, I haven't added any
 recently.

Then you'll want to make sure you understand why Amanda decided to
do 17 fulls and 2 incrementals on a particular day; this would be
a rather un-Amandalike thing to do on a day when it knows it's
going to be chewing up 1/2 the tape on a single DLE, unless
there are extraordinary circumstances.  Number of runs/dumpcycle
too short, maybe?  I don't know.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: Problem with compression?

2003-02-21 Thread Jay Lessert
[Posted and Cc'ed]

My last posting on this thread, we're in tapeout crunch right now...

On Fri, Feb 21, 2003 at 03:43:13PM -0800, John Oliver wrote:
 No, not really... :-)  My tapes are 20GB without compression.  I'm
 telling amanda to use compression.  It looks like it's saying it is.

And it is, in fact.

 Therefore, I should be able to get *at least* 20GB on my tapes.

You will get exactly 20GB on the tape, after Amanda compression.

In the run in question, for 18 of 19 DLE's,  you got 19.2GB before compression,
9.7GB after compression.

 Output Size (meg)9737.1 9611.4  125.7
 Original Size (meg) 19162.419028.8  133.6

Then it tried to put backup:/dev/hda1 on the tape, 20GB before compression,
and failed at exactly 20GB total after compression.

 It
 seems to be crapping out right about 20GB.

Post-compression, yes.  It's doing exactly what it's supposed to, you
just need to sit down and think about it for awhile, until you grok the
fullness.  You/amanda tried to put about 40GB pre-compression on the
tape, and it almost (but not quite) fit.
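
Putting numbers on it:

    18 DLE's:   19.2 GB raw  ->   9.7 GB compressed (what got taped)
    19th DLE:  ~20   GB raw  ->  needed more than the ~10.3 GB left
    --------------------------------------------------------------
    total:     ~39   GB raw  ->  hit EOT at 20 GB on the tape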

 I see:
 
 define dumptype comp-root-tar {
 root-tar
 comment Root partitions with compression
 compress client fast
 }
 
 That tells me that it'll use tar,

Nope, tells you you're calling another dumptype, root-tar.  You're
assuming that's calling program GNUTAR.  Chances are your assumption
is correct, but you don't know unless you look.

 and compress on the client.

Correct.

So at this point your only problem is figuring out why your level1 on a
21GB DLE is giving out 20GB of pre-compressed output.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


Re: forgetting the history of tapes

2003-02-15 Thread Jay Lessert
On Sat, Feb 15, 2003 at 04:55:48PM +0100, [EMAIL PROTECTED] wrote:
 Hi,
 
 I'm busy testing amanda for its use in production in the future. For
 these tests, I often want to restart as if I was starting the dump cycle
 from zero. How can I achieve this?

From docs/FAQ:

Q: Ok, I'm done with testing Amanda, now I want to put it in
production.  How can I reset its databases so as to start from
scratch?

A: First, remove the `curinfo' database.  By default, it is a
directory, but, if you have selected any other database format (don't,
they're deprecated), they may be files with extensions such as .dir
and .pag.

   Then, remove any log files from the log directory:
log.TIMESTAMP.count and amdump.count.  Finally, remove the
tapelist file, stored in the directory that contains amanda.conf,
unless amanda.conf specifies otherwise.  Depending on the tape changer
you have selected, you may also want to reset its state file.

In your case, instead of *removing* tapelist (and re-amlabeling everything)
you *could* just edit tapelist, put it back in the order you want (bottom
of tapelist gets used first), and mark all tapes unused (0 in the left-
hand date field).
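
That is, you'd end up with a tapelist looking something like this
(label names here are placeholders):

    0 TEST03 reuse
    0 TEST02 reuse
    0 TEST01 reuse

with TEST01, at the bottom, used first.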

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: sol 2.6 HP-C1557A (12000e) changer device

2003-02-10 Thread Jay Lessert
On Mon, Feb 10, 2003 at 04:09:07PM +0100, [EMAIL PROTECTED] wrote:
 Thanks all
 
 It's working now. Had to compile 'sst' from the contrib dir, configure sst.conf
 and 'add_drv'. It's now at
 /devices/pci@1f,0/pci@1/scsi@2,1/sst@4,1:character.
 
 I just wonder whether this is the best (stable) solution?

For Solaris 2.6, it might be the only solution (except possibly
for STCTL, http://www.cs.pdx.edu/~eric/stctl/).

For Solaris 8+, there's a Sun driver, sgen(7D), you would use that.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: Ultrium-LTO tapetype

2003-02-07 Thread Jay Lessert
On Fri, Feb 07, 2003 at 07:46:59AM -0500, Joshua Baker-LePain wrote:
 Note, also, that your length is probably rather optimistic.  I've rarely 
 seen hardware compression get the 2X most manufacturers claim (and the 
 2.6X Sony claims for AIT is truly laughable).  Only you know how 
 compressible your data is, but if amanda keeps banging into EOT, you're 
 going to want to back off on your length.

Good advice.  FWIW, LTO HW compression *does* seem to be pretty good,
though; far better than DLT-4000/7000 and EXB-8500 hardware I've used
in the past (no experience with SDLT or AIT).

Last time I ran a capacity/speed check on my LTO drive using my /home
partition, I got 216GB on a 100GB native tape.  This is a pretty
standard IC engineering /home, lots of e-mail, program source,
executable objects, Verilog source, with a sprinkling of
(uncompressible) gzipped tarballs and simulator waveform output.

On my /project partition (full of Cadence dfII chip database, highly
compressible) it did 249GB.

I use 166GB for my (HW compression) tapetype length; I *hate* hitting
EOT.
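
As a tapetype, that works out to basically this (the speed is just a
round number from the taper rates I've mentioned, not a careful
measurement, and filemark is left at the default):

    define tapetype LTO1-hwcomp {
        comment "100GB native LTO-1, HW compression on, conservative"
        length 166000 mbytes
        speed 15000 kps
    }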

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: Overland XB tape stacker

2003-01-28 Thread Jay Lessert
On Tue, Jan 28, 2003 at 08:11:54AM -0500, Chris Dahn wrote:
 On Monday 27 January 2003 12:19 pm, Nigel Barker wrote:
  Hi
 
  Apologies for the level of this question, I'm sure the answer is around on
  the web, but I can't find it!
  (Is there a Idiots Guide to setting up your first Amanda install?)
 
  Anyway, got a 10 tape Overland XB tape stacker, with a dlt 7000 drive, and
  I'm looking to find a suitable chg-multi.conf file.

Didn't catch this the first time, apologies if it's already answered.

You almost certainly don't want chg-multi (which is about using
multiple non-library tape drives as if they were a library).

You want chg-zd-mtx (which is just a shell wrapper around mtx),
or chg-scsi.

My personal recommendation is to bring up mtx 

http://mtx.sourceforge.net/

which is handy to have with or without amanda, and if that works,
try chg-zd-mtx.

Strongly recommend 2.4.3 for chg-zd-mtx, even if you're not using
2.4.3 for the rest of amanda; chg-zd-mtx got a good re-write.

Lots of people use chg-scsi also.

If you happen to be Solaris and happen to choose chg-zd-mtx, there
are pointers for the Solaris generic SCSI driver, sgen(7D) in
the mtx contrib directory.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: speed of amdump [really, speed of ethernet]

2003-01-24 Thread Jay Lessert
On Fri, Jan 24, 2003 at 08:31:19AM -0600, [EMAIL PROTECTED] wrote:
 And if any of us can RELIABLY exceed 30 percent usage on 
 an Ethernet network, I want to see it.

Well, one of my amanda clients does two dumps in parallel over 100BaseT
at >=3MB/s dumper rate (each) every single night.  Does that count?  :-)

2*3MB/s -> 48 Mbit/s.  Minimum.  They average more like 3.5MB/s each.

As long as the packet size is reasonable, any competently implemented
10BaseT or 100BaseT link can gracefully handle sustained payload peaks
up to 90% of wire speed, no problem.

I plan on 50% wire speed utilization for my 11pm-5am backup period, and
I consider that very conservative.

1000BaseT is different, at least with stock interoperable HW and
SW; the 1500-byte packet size limits you to something like
50-65% of wire speed.  You need semi-proprietary, not-completely
interoperable jumbo packet HW and SW to go faster.

 As always, your mileage may vary ...

Yup!

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: Full Backup Configuration

2003-01-17 Thread Jay Lessert
On Fri, Jan 17, 2003 at 04:06:09PM -0800, DK Smith wrote:
 Do most amanda configs (with changers) run amdump every weekday (M-F)
 and skip running amdump on weekends? I see this sort of idiom stated as
 the way for Amanda, however I am not so sure this well-documented
 idiom is actually used in practice.

Sure it is.

- Not everybody has a changer.

- Many people do something else on the weekend (like a
  forced-full).

I haven't followed this thread closely (and am not sure why the
OP is having problems), but I've used 5-out-of-7, 6-out-of-7 and
7-out-of-7 configurations with Amanda.  My current config is
6-out-of-7.  On the 7th day, a separate forced-full config
runs.

5-week tapecycle, 1-week dumpcycle, 6 dumps/week:

dumpcycle   7 days  # at least 1 full dump/week.
runspercycle6   # run 6 days/week.
tapecycle   30 tapes# 5 weeks worth

# Run daily0 on sun/mon/tue/wed/thu/fri nights (0=sunday)
01  23  *   *   0-5 /home/amanda/bin/daily0
# Run archive0 on sat nights
01  23  *   *   6   /home/amanda/bin/archive0

My understanding was that the OP just wanted to run 3-out-of-7, and
I have *no* idea why that should be any problem at all.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: why is tapetype so slow?

2003-01-15 Thread Jay Lessert
[Posted and Cc'ed]

On Wed, Jan 15, 2003 at 04:32:23PM -0500, Eric Sproul wrote:
 After disabling H/W compression, I am getting the same performance from
 tapetype.  It has been running for 15 hours and has only written about
 30GB.

There is >1 tapetype version around, it's got command-line options,
there are ways to go wrong.

 What are the consequences of having the wrong values in a tapetype
 definition?  If the size value is too large, Amanda will just encounter
 EOT sooner than expected, right?  If it's too low, you waste some tape. 
 I'm thinking I might take what people have already posted for the DLT220
 drive and scale it up by 45% (160 is 45% more than 110).  Am I crazy? 

Not crazy at all.  I think folks get a little too hung up on tapetype.
With your drive, for example, it's hard to see how you would give
a darn *what* filemark is, unless your disklist is 1000 entries
long...

However, you bought yourself a big, fast drive, and what you *should*
do is to confirm that it *is* big and fast.

Make yourself a representative disk file (15GB of gzipped tar file,
80GB of uncompressed xfsdump file, whatever is appropriate for *your*
Amanda environment).  Run a loop on it like this (pardon the csh, it
was a 60-second quickie):

#!/bin/tcsh -f
set count=1
while (1)
echo transfer $count
time dd if=/a6/backup/spitfire._a1.20020718.0 of=/dev/rmt/1cn obs=32k || exit
@ count++
end

Always good to be a little conservative on length, particularly if
runtapes=1; you don't want Amanda to estimate a run that doesn't end up
fitting.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: offsite strategies

2003-01-06 Thread Jay Lessert
[CC'ed to amanda-users]
On Mon, Jan 06, 2003 at 01:18:42PM -0500, David Lebel wrote:
 My point is that I would like to be able to store a full cycle offsite
 in case of a disaster

That's a good thing.

 so I can full rebuild all my machines without the
 worries of having only partials full backups.

Ummm, I don't understand this.

But anyway, I've seen people do what I think you want in two ways:

1)  Assuming tapecycle=N*dumpcycle, keep N-1 dumpcycles worth of tapes
offsite at all times, rotating the oldest dumpcycle in and the
freshest out at the end of each dumpcycle.

This can work pretty well if your offsite storage is convenient,
flexible and reliable.

If you burn the site down, you lose whatever you've got in the
current dumpcycle.

2)  Run two amanda configs; one normal that you run daily, and
another 'dumpcycle 0'/'strategy noinc'/'record no' full-only type
config that you run once per some-period and immediately send
off-site.

If you burn the site down, you lose all your normal tapes,
but have your latest full-only tape(s).

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: HP Surestore 2/20 Tape Library and amanda

2002-12-06 Thread Jay Lessert
On Fri, Dec 06, 2002 at 11:19:45AM +0100, Marco Schierloh wrote:
 Hi,
 
 planning for a new backup-strategy I am investigating the possibility to use
 amanda with a HP Surestore 2/20 Tape Library with one ultrium drive. Does
 anyone have experience with this setup or does amanda not support this
 hardware?

I'm using:

Sun L20 (re-badged HP 2/20) with Ultrium LTO
SPARC Solaris 8
mtx 1.2.16rel with sgen(7D) driver per contrib/config_sgen_solaris.sh
Amanda 2.4.2p2 with chg-zd-mtx

Works perfectly, I had to make a one-line change in chg-zd-mtx.  Amanda
2.4.3 is recommended; chg-zd-mtx got a lot of work done on it.

Make sure the changer firmware is up-to-date.  I've talked to another
L20 user who was having mtx problems, but they had older firmware
(Anne, did the firmware download fix your problem?).  FWIW, I've got:

orion:/home/jayl 1 mtx -f /dev/scsi/changer/c4t4d0 inquiry
Product Type: Medium Changer
Vendor ID: 'HP  '
Product ID: 'C7200   '
Revision: '133S'
Attached Changer: No

The LTO drives are great.  I routinely get 15-20MB/s taper rates (HW
compression on).

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: estimate timeouts

2002-12-03 Thread Jay Lessert
On Tue, Dec 03, 2002 at 10:57:19AM -0600, Matthew Boeckman wrote:
 After resolving my earlier problems, I've uncovered a new one. Amanda is 
 timeing out, apparently while waiting for estimates on one of my hosts. 
 I think I can fix this by bumping the timeout value,

Looks like this would be the thing to do.

[clip]
 nocomp tar on 4 directories (bkup1, 2, 3, 4) on the 100+GB filesystem
[clip]
 the behavior: If I remove all but 1 of the bkup partitions from 
 disklist, amanda runs fine, whenever I try to run with bkup1-4 in the 
 disklist, I get:
   ultra  /webhome/bkup4 lev 0 FAILED [Request to ultra timed out.]
 for all partitions.
[clip]

 perplexed, as it kind of appears from these two that amanda was trying 
 to do both lvl 0's and lvl 1's of some of the partitions!

Rather, she's estimating what would happen if she did a level 0 or level 1,
so she can make a rational decision.

Total allowed estimate time for a client is (etimeout * # of DLEs).  If
all estimates for that client are not finished by then, the client is
skipped completely.  It is common for estimates to take a long
time on DLEs with a large number of small files; crank up etimeout
by 2X/4X and see what happens.

You may hit dtimeout next, you know what to do.  :-)

Make sure your GNU tar is appropriately recent (1.13.25), I've observed
older versions running pathologically long --listed-incremental times
on Solaris.

 sendsize: debug 1 pid 1796 ruid 602 euid 602 start time Tue Dec  3 01:00:07 2002
[clip]
 sendsize: pid 1796 finish time Tue Dec  3 02:19:31 2002

So the estimate does finish, in just over an hour.  Default etimeout is
5 minutes, and you're doing 6 DLEs on ultra, so amdump is only waiting
30 minutes, and you lose.  The one hour+ estimate time is survivable,
so make sure GNU tar is new, bump etimeout to 800 or 1000 or 1200
seconds, and go for it.
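
I.e., in amanda.conf:

    etimeout 1200   # amdump then allows 1200 * 6 = 7200 seconds (2 hours)
                    # of estimate time for the 6 DLEs on ultra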

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: COMPAQ TSL-9000 DAT autoloader device

2002-11-21 Thread Jay Lessert
On Thu, Nov 21, 2002 at 01:55:27PM +0100, [EMAIL PROTECTED] wrote:
 Hello,
 
 Does anyone have experience in configuration a COMPAQ/SONY TSL-9000 DAT
 DDS-3 device with AMANDA ? I have only the device and no single
 documentation about it. What I can tell is that it's an 8 DDS-3 tape
 changer device which gets recognized on my Solaris 8 system under
 /dev/rmt/0. Somehow it doesn't have two devices in it (as usual where you
 have one device for the changer and one device for the tape) so that's
 already something weird. I've tried to use mtx but mtx doesn't seem to like
 the device at all.

Once you know exactly what the target/lun setup for the changer is
(you *really* need documentation), you can proceed.

In any case, you should do:

setenv auto-boot? false
reset-all
probe-scsi-all
setenv auto-boot? true

It is either two targets, or one target with 2 luns.

On Solaris 8, you'll need to configure the sgen driver for the robot.
In the mtx distribution, see contrib/config_sgen_solaris.sh.  Do not
just run the script please (there's a cvs command that will fail,
and you may need lun!=0), read and understand first.

You'll end up with a device like: /dev/scsi/changer/cNtMd0, that's
what you point mtx (or chg-scsi, I suppose) at.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: One (of six) partitions returning disk offline

2002-11-21 Thread Jay Lessert
On Thu, Nov 21, 2002 at 09:46:36AM -0500, Joshua Baker-LePain wrote:
 Nope.  / has an entry in fstab just like everybody else.

Well, not *just* like *everybody* else:  :-)

Solaris:
#devicedevice  mount   FS  fsckmount   mount
#to mount  to fsck point   typepassat boot options
/dev/dsk/c0t0d0s0  /dev/rdsk/c0t0d0s0  /   ufs 1   no  logging
/dev/dsk/c1t3d0s7  /dev/rdsk/c1t3d0s7  /local  ufs 2   yes logging

Linux:
LABEL=/        /        ext3    defaults        1 1
LABEL=/local   /local   ext3    defaults        1 2

But you're right, I was wrong, sorry for the misinformation.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: restore error

2002-11-13 Thread Jay Lessert
On Wed, Nov 13, 2002 at 01:40:18PM -0600, Matthew Boeckman wrote:
 I've got a file that I amrestore'd, but restore is giving me a strange 
 error:
 
 restore -ivh -f filename
 Verify tape and initialize maps
 Tape block size is 32
 Note: Doing Byte swapping

Eeek!

Sure sounds like you're running restore on hardware/OS that is very
different than that of the original amanda client that generated
the dump.

 Dump   date: Fri Nov  1 01:39:49 2002
 Dumped from: Tue Oct 29 04:47:44 2002
 Level 1 dump of /usr on sparck:/dev/dsk/c0t0d0s4

In particular, it smells like you're trying to run:

restore(8) on an x86 Linux box

using a data file written by:

ufsdump(1M) on a SPARC Solaris box.

If that's the case (I could be wrong):

1)  It would have been nice of you to tell us up-front, instead of
making us reverse-engineer it.

2)  It's not guaranteed to work in the first place.

3)  I'm fairly impressed that it gets as far as it does!  :-)

Grab sparck or another Solaris box, then just copy/NFS mount the data
file over there and run ufsrestore.

You could try:

% dd if=filename obs=32k conv=swab | sudo restore -ivh -b 32 -f -

But I doubt that will help.

You could also try bare-minimum restore flags, assuming you have
room:

% dd if=filename obs=32k conv=swab | sudo restore -r -b 32 -f -

But I doubt that will help, either.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: Requesting comments on a possible amanda configuration

2002-11-12 Thread Jay Lessert
On Tue, Nov 12, 2002 at 10:29:08AM -0700, Carl D. Blake wrote:
[clip]
 this kind of arrangement you can restore within a day for the previous
 week, within a week for the previous month, and within a month for the
 previous year.

Carl,

If this is your goal, and you want to use Amanda, I would suggest
this:

config daily, run 5X/week:
- dumpcycle 7 days
- runspercycle 5
- tapecycle 25
- index yes
- record yes
- strategy normal

config monthly, run 1X/month
- dumpcycle 0
- runspercycle 1
- tapecycle 12
- index yes
- record no
- strategy noinc

This gets you within a day for 5 weeks, and within a month for 1 year,
using 37 tapes.  It will just work, with very little effort on your
part, because there are Amanda users all over the world doing something
*very* close to this.

Note that I've freely mixed dumptype and non-dumptype parameters here
for brevity, you'll be more careful, of course.  :-)

Once you get over the extra tapes (hey, 37 > 20, but < 80!) and get
used to the idea that Amanda will do your daily fulls when *she* wants
(not when you want), you'll like it.  Trust me.  :-)

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: Requesting comments on a possible amanda configuration

2002-11-12 Thread Jay Lessert
On Tue, Nov 12, 2002 at 01:36:54PM -0700, Carl D. Blake wrote:
 On Tue, 2002-11-12 at 11:59, Jay Lessert wrote:
[clip]
  config daily, run 5X/week:
[clip]
  config monthly, run 1X/month
[clip]
 This doesn't sound too bad.  My only question is how would I run the
 monthly config.  Would I just pick a day in the month on which I would
 run amdump monthly instead of amdump daily?

Do what you like, that's *one* choice Amanda won't make for you.  :-)

If I were you, and since (apparently) you want to run your dailies
5X/week on Monday-Friday, I would run my monthly on the first Saturday
of each month, something like:

% sudo crontab -l amanda
# Run daily on mon/tue/wed/thu/fri nights (0=sunday)
01  23  *   *   1-5 /home/amanda/bin/daily
#
# Run monthly on every sat night, it exits if DOM > 7
01  23  *   *   6   /home/amanda/bin/monthly

~amanda/bin/monthly:
#!/bin/sh
if expr `date +%d` \< 8 ; then
amdump monthly
amtape monthly eject
else
exit 0
fi

 If so, would that cause
 the daily dumps to get offset by one day?  Does this even matter?

Note that the monthly config runs with 'record no', so the daily
config has no idea that monthly exists.

If for some reason you can't run monthly on an off night (no changer,
for example), then yes, you would have to arrange for your cron/script
environment to either:

- Run daily, then monthly in series 1 day/month 

- Skip daily, run monthly instead 1 day/month.

Best is a changer with one slot dedicated to the monthly config,
though.  Make that happen if you possibly can.

-Jay-



Re: amtape error more info

2002-10-08 Thread Jay Lessert

On Tue, Oct 08, 2002 at 01:25:48PM -0700, Jerry wrote:
 Aha! this may be related to textutils. It looks like
 the solaris tr is different than the gnu util tr. 
 Wow, solaris sucks!
 --- Jerry [EMAIL PROTECTED] wrote:
  But... I did have to fix this in the script:
  (chg-zd-mtx):
  
  numeric=`echo $whichslot | tr -cd 0-9`
  to
  numeric=`echo $whichslot | tr -cd [0-9]` -- add [ ]

Solaris 8+ /usr/bin/tr is indeed broken, but not in the way you think.
:-)

The appropriate new character range syntax for tr(1), which
will work with *either* GNU tr or Solaris 2.6+ /usr/bin/tr is:

[CHAR1-CHAR2]
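
For example, a digit-only extraction like the one in chg-zd-mtx behaves
the same both places (the 'slot12' input is just for illustration):

    $ echo slot12 | /usr/bin/tr -cd '[0-9]'     # Solaris 8: prints 12
    $ echo slot12 | tr -cd '[0-9]'              # GNU textutils tr: prints 12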

The 2.4.3 release notice mentioned that chg-zd-mtx has been re-written,
so hopefully that included handling tr syntax.

If you need the old syntax to work on Solaris you can get it from
/usr/ucb/tr or /usr/xpg4/bin/tr.  I fixed my 2.4.2p2 chg-zd-mtx
by just putting /usr/xpg4/bin first in $PATH.

I agree it's unfortunate that Sun decided to stop grandfathering the
old syntax in /usr/bin/tr, but the man page stopped defining it that
way years and years ago.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: Amanda changes her mind after tape removed from rotation.

2002-09-06 Thread Jay Lessert

On Fri, Sep 06, 2002 at 08:22:53AM -0400, Doug Johnson wrote:
 These dumps were to tape DailySet106.
 The next tape Amanda expects to use is: DailySet108.
 
 So our procedure here is to remove DailySet106.

You use amrmtape, correct?

 and add a new tape, say
 DailySet115. This happens without a problem but where we run into issues is
 now Amanda doesn't want DailySet108 anymore, she wants DailySet115. The only
 way to know she has changed her mind is to run a amcheck to see what she
 expects.

Sounds like you just need to hand-edit tapelist until it looks the way
you want.  The last tape on the list is the one Amanda wants to use
next.  The top tape on the list is the one Amanda used last night.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: statistics

2002-09-04 Thread Jay Lessert

On Thu, Sep 05, 2002 at 03:29:39PM -0700, greg wrote:
 Does this look right?
 
 STATISTICS:
   Total   Full  Daily
       
 Estimate Time (hrs:min)0:06
 Run Time (hrs:min)11:48
 Dump Time (hrs:min)   11:42  11:42   0:01
 Output Size (meg)   24171.424171.40.0
 Original Size (meg) 54320.454318.32.1
 Avg Compressed Size (%)44.5   44.51.5   (level:#disks ...)
 Filesystems Dumped3  2  1   (1:1)
 Avg Dump Rate (k/s)   587.4  588.00.8
 
 Tape Time (hrs:min)   11:41  11:41   0:00
 Tape Size (meg) 24171.524171.50.1
 Tape Used (%)  64.5   64.50.0   (level:#disks ...)
 Filesystems Taped 3  2  1   (1:1)
 Avg Tp Write Rate (k/s)   588.5  588.5   29.3
 
 
 It is saying it took 11 hrs to dump 54GB or 24GB compressed.  I have
 a quantum DLT8000-40 which is rated at 6MB/s or 12MB/s compressed.
 11hrs seems a long time even considering gzip as the software
 compression.  Is there something I am missing here?

It would be useful to see the DUMP SUMMARY: section, but the average
dump and tape rates *almost* match, so it looks very much like your two
full dumps were straight to tape, did not use holding disk, and were
rate-limited by gzip.

There are other things that could have limited the dump rate, but
gzip is the first place to look.  You can force fulls on the
same two file systems and run a top in background, something
like:

% top b -d300 -n500 > top.out &

...on the client(s) and server in question, see how much cpu time the
gzip processes take.  If the gzip times are short, then I would
try running the backup processes (dump or tar or whatever) by
hand to /dev/null and see how long that takes.
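
For example, purely as an illustration (pick whichever program and
file system actually apply on your client):

    # raw dumper speed, with no gzip and no network in the way
    % time ufsdump 0f - /dev/rdsk/c0t0d0s7 > /dev/null

    # or, for a GNU tar disklist entry
    % time gtar cf - /export/home > /dev/null

If that by itself eats most of your 11 hours, gzip isn't the problem.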

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: dumps fail: data timeout

2002-08-28 Thread Jay Lessert

On Wed, Aug 28, 2002 at 02:30:41PM -0400, Gene Heskett wrote:
 3600 certainly does seem like enough, thats ten hours!

Nope.  1 hour.  I've seen data timeout failures on incrementals
for file systems with very large numbers of small files with
dtimeout=1800, FWIW.
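
Bumping it is a one-line change in amanda.conf; the value is in
seconds, and 7200 here is only an example number:

    dtimeout 7200    # idle time allowed per disk before a data timeout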

Amy asked:
 If the dump is timing out, shouldn't amanda properly kill the
  processes though?

It's supposed to.  On 2.4.2p2/Solaris 8, I've witnessed that *not*
happen at least once, though.  The client sendbackup process finally
got ready to send data, found that the server had closed the sockets on
it and quietly died.  Left tracks in sendbackup.*.debug.  It was a pretty
minor annoyance, I didn't pursue it.

 Will the using hardware compression cause the dumps to take
  longer?

No.  Assuming you're dumping to holdingdisk, hardware compression (no
compression, from Amanda's POV) will make sendbackup finish earlier.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: dumps fail: data timeout

2002-08-28 Thread Jay Lessert

On Wed, Aug 28, 2002 at 03:44:36PM -0400, Gene Heskett wrote:
 On Wednesday 28 August 2002 15:13, Amy Tanner wrote:
 On Wed, Aug 28, 2002 at 02:30:41PM -0400, Gene Heskett 
 etimeout 300
 
 Which looks like enough, but note on down the page where it 
 indicates it took 7 minutes, not 5.

Careful.  Estimate timeout is per client, not per disklist entry: the
effective value is (etimeout * number_of_disklist_entries).  So if you
have a client box1, and box1 has three disklist entries, then the real
wall-clock estimate timeout for box1 would be 300*3 = 900 seconds,
i.e. 15 minutes.
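
If a client really did need more estimate headroom, that is also a
one-line amanda.conf change (per disklist entry, in seconds; 600 is
just an example number):

    etimeout 600    # box1 with 3 disklist entries then gets 3*600s = 30 min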

                              Total     Full    Daily
                            --------  -------  -------
 Estimate Time (hrs:min)      0:07
 Run Time (hrs:min)           9:33
 Dump Time (hrs:min)          8:30     6:20     2:11
 Output Size (meg)         15258.3  10167.3   5091.0
 Original Size (meg)       15258.3  10167.3   5091.0
 Avg Compressed Size (%)       --       --       --    (level:#disks ...)
 Filesystems Dumped             71       20       51   (1:43 2:6 3:2)
 Avg Dump Rate (k/s)         510.3    457.0    665.0

 Tape Time (hrs:min)          1:54     1:21     0:33
 Tape Size (meg)           15260.5  10167.9   5092.6
 Tape Used (%)                46.6     31.0     15.5   (level:#disks ...)
 Filesystems Taped              71       20       51   (1:43 2:6 3:2)
 Avg Tp Write Rate (k/s)    2284.3   2151.3   2606.0
 
 This doesn't add up, you're moving about 2.2 megs/second, and it 
 takes over 8 hours?

2.2 megs/second is *taper* throughput, not *dumper* throughput.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: dumps fail: data timeout

2002-08-28 Thread Jay Lessert

On Wed, Aug 28, 2002 at 03:31:50PM -0500, Amy Tanner wrote:
 On Wed, Aug 28, 2002 at 01:13:32PM -0700, Jay Lessert ([EMAIL PROTECTED]) wrote:
  On Wed, Aug 28, 2002 at 02:13:03PM -0500, Amy Tanner wrote:
   
 We recently switched to hardware compression because we
 got a new tape changer.  Perhaps that's the cause of the
 problems.
  
  Sounds unlikely.  You don't get data timeout from a tape drive.
 
 What I meant was that because we switched to a new tape changer, we
 decided to turn on hardware compression and turn off software
 compression.  Perhaps the change of compression choices is the source of
 the problems.

And what *I* meant :-) is that if you are dumping to holdingdisk, it is
not possible for *anything* you do with the tape drive to cause data
timeout, even if you hang the tape drive from the ceiling and use it
for a pinata!  You would get some kind of taper failure, but not a
data timeout.

 Yes, we're running dump on linux ext2.  However, the kernel and dump
 versions have not changed from when they were working until now (not
 working).

The kernel/dump issues in question are data and activity-dependent,
not hard failures.  Just because it worked yesterday doesn't mean
it will work (as well) today, unless your data and usage patterns
are static.

 Also, note: some file systems on these 2 machines DO dump successfully,
 and some don't.

So it's data dependent, which is not surprising.

 And it's not always the same ones that fail or succeed.
 I can't find a pattern.

*That* is a little surprising.  Though if it is the case that you
have zero file systems that *always* fail (e.g., 50% of the dumps on
any single file system succeed in a given week), maybe you can just
leave it alone and let the other admin fix it when they get back. :-)

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: Configuration help?

2002-08-16 Thread Jay Lessert

On Fri, Aug 16, 2002 at 10:07:54AM +0300, Conny Gyllendahl wrote:
 Now for my last question for this time: what are the pros and cons, if
 any, for using tar or (ufs)dump? Are there any reasons or situtations
 for choosing one over the other?

You say ufs(dump), so I'm assuming recent Solaris.  The calculation
is different for Linux.

ufsdump plus:

- Gets all file system/file attributes, period, no ifs ands or
  buts, even ones you don't know are there.  :-)

- Does not touch atime on files.

- Does not require running as root.

- In my experience, on file systems with large numbers of small
  files, estimates and incrementals are much faster than GNU tar.

ufsdump minus:

- No exclude list.

- No splitting the file system.

- Data is not portable to other OS's.

tar plus:

- Flexibility.  Excludes, splitting.

- 100% portable, ubiquitous.  (But see also tar minus)

tar minus:

- Touches atime.

- In my experience, on file systems with large numbers of small
  files, estimates and incrementals can take a long time.

- Portable, but: GNU tar output with very long paths/names is
  only guaranteed to be readable by another GNU tar of similar
  version.  Reading with non-GNU tar (or older GNU tar) may
  generate errors, or garbled paths/names.  Depending on the
  versions involved, things get interesting at name lengths of 100
  or 256, and path lengths of 256 or 1024.  This is a very minor
  minus, you just need to be aware.

- Has to run as root (no, I don't lose any sleep worrying about
  runtar exploits! :-)

Speed folklore:

In my experience, on full backups, on modern Solaris kit, GNU tar
is a bit faster than ufsdump, *not* slower.  Disks have gotten a
lot faster (helps the more-random seeks that tar has to do), ufs
and the processors have gotten faster (so ufs isn't in the way
any more), and ufsdumps's initial mapping (wonderful for
'ufsrestore -i') is just time down the drain from amanda's
POV.

There, no religion here, I think.  Look at the plus/minus lists and
make up your own mind.  As the Perl folk say, TMTOWTDI.
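
And you don't have to pick just one across the board; Amanda lets you
choose per disklist entry.  A hypothetical pair of dumptypes (the
names and the exclude path are made up):

    define dumptype with-ufsdump {
        comment "ufsdump, client compression"
        program "DUMP"
        compress client fast
    }

    define dumptype with-gnutar {
        comment "GNU tar, exclude list, client compression"
        program "GNUTAR"
        exclude list "/usr/local/etc/amanda/exclude.home"
        compress client fast
    }

Then point each disklist entry at whichever one fits that file system.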

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: DLT8000s in a TL891 on Compaq kit running RedHat 7.3 Input/output error

2002-08-15 Thread Jay Lessert

On Thu, Aug 15, 2002 at 05:30:14PM +0100, Owen Williams wrote:
 The second is that I get problems using tapes from 'slot 9' of my 
 10 slot library.  I have a cleaning tape in the tenth.  I have 
 two tape drives.  I get:
 
 amcheck-server: slot 9: rewinding tape: Input/output error

Hmm, some changer/changer_script combos like to start at slot 0 and
some like to start at slot 1.  Are you sure you're not just
off-by-one and you're trying to amcheck the cleaning tape?
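
One quick way to check (the config name and changer device here are
just examples): load what Amanda calls slot 9, then see which physical
element the robot actually pulled:

    % amtape DailySet1 slot 9
    % mtx -f /dev/changer status | grep 'Data Transfer Element'

If the element that ends up in the drive is the one holding the
cleaning tape, you've found your off-by-one.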

You could tell us your changer setup (both the changer.conf and the
relevant bits of amanda.conf), and show us 'amtape config show'.
And 'mtx status', if you're using mtx.

Personally, I do *not* leave a cleaner in the library, and clean
the tape drives once a year whether they need it or not.  :-)

Cleaning was essential and frequent in the old Exabyte 8x00 days (may
they rot in hell forever), but I've literally never had a
cleaning-related failure in DLT or LTO drives.  (I've had bad
*tapes*, though.)

 I got this first with amlabel.  I configured amanda to use the 
 other drive.  I cleaned both drives and tried a new tape in 'slot 
 9' but I still get the 'rewinding input/output' error.
 
 I'm loathed to start backing things up while it still complains.

If you're not off-by-one, and there really is some sort of anomaly
where your changer script doesn't like slot 9, just tell it not
to use slot 9 for now, and start doing backups.  You can always
update changer.conf later when you sort it out.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: chg-zd-mtx output

2002-08-15 Thread Jay Lessert

On Thu, Aug 15, 2002 at 12:49:04PM -0400, Jason Greenberg wrote:
 Does anyone know what could cause this output?  I am trying to debug my
 setup with a PowerVault 128T / Linux Redhat 7.3 
 
 
 bash-2.05a$ /usr/lib/amanda/chg-zd-mtx -info
 /usr/lib/amanda/chg-zd-mtx: [: : integer expression expected
 /usr/lib/amanda/chg-zd-mtx: [: -lt: unary operator expected
   16 1 1

I'm a happy chg-zd-mtx user, but it is not the most portable script
ever, because of:

- The wide range of responses possible from a dizzying
  array of almost-but-not-quite standards compliant
  changers.

- The wide range of responses possible from a disturbingly
  variant range of tr, sed, awk, sh, etc. utilities.

When I was tweaking my chg-zd-mtx to work, what I found most useful
was:

% cd /usr/local/etc/amanda/config_dir
% sh -x /usr/lib/amanda/chg-zd-mtx -info (or whatever command doesn't work)

In your case, the problem is probably *not* in the line like:

if [ $usedslot -lt 0 ]; then

where the script is blowing up, but in the mildly amazing line like:

usedslot=`echo $tmpslot |
  sed -n "s/Data Transfer Element $drivenum:Empty/-1/p;
          s/Data Transfer Element $drivenum:Full (Storage Element \([1-9][0-9]*\) Loaded)\(.*\)/\1/p"`

where the sed is not interacting w/the mtx output from your changer the
way the script expects.
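
One way to see that interaction directly is to run roughly the same
sed against your live mtx output by hand (the changer device is just
an example, and I've hard-coded drive 0):

    % mtx -f /dev/sg1 status | \
        sed -n 's/Data Transfer Element 0:Full (Storage Element \([1-9][0-9]*\) Loaded)\(.*\)/\1/p'

If drive 0 has the tape from, say, storage element 3 loaded, that
should print just "3"; anything else means the pattern and your
changer's wording don't agree.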

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: DLT8000s in a TL891 on Compaq kit running RedHat 7.3 Input/output error

2002-08-15 Thread Jay Lessert

On Thu, Aug 15, 2002 at 06:15:05PM +0100, Owen Williams wrote:
 # mtx status
   Storage Changer /dev/changer:2 Drives, 10 Slots ( 0 Import/Export )
 Data Transfer Element 0:Empty
 Data Transfer Element 1:Empty
   Storage Element 1:Full :VolumeTag=A0  
   Storage Element 2:Full :VolumeTag=A1  
   Storage Element 3:Full :VolumeTag=A2  
   Storage Element 4:Full :VolumeTag=A3  
   Storage Element 5:Full :VolumeTag=A4  
   Storage Element 6:Full :VolumeTag=A5  
   Storage Element 7:Full :VolumeTag=A6  
   Storage Element 8:Full :VolumeTag=A7  
   Storage Element 9:Full :VolumeTag=A8  
   Storage Element 10:Full :VolumeTag=C0
[clip]
 runtapes 1# number of tapes to be used in a single run of amdump
 tpchanger chg-scsi  # the tape-changer glue script

OK, I *think* you're off-by-one.

 tapedev 0   # the no-rewind tape device to be used
 #changerdev /dev/sg0
[clip]
 config0
 drivenum  0
 startuse  0   # The slots associated with the drive 0
 enduse9   # 
 cleancart 9   # the slot where the cleaningcartridge for drive 0 is located

So the changer thinks the first slot is 1, but you've said 'startuse 0'.
When you go 'amtape DailySet1 slot first' and 'slot last', and
physically watch the robot, which tapes move?

Besides which, if you *are* going to use cleancart (and I'm not sure
it really works in Amanda at all), I'm pretty sure enduse == cleancart
is not right.
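
For what it's worth, if the robot really does number its storage
elements 1-10 with the cleaner parked in 10, the fragment would look
more like this (untested on your TL891, adjust to taste):

    config        0
    drivenum      0
    startuse      1     # first data slot
    enduse        9     # last data slot; slot 10 holds the cleaner
    cleancart     10    # slot with the cleaning cartridge for drive 0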

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: Sun L20 and amanda

2002-08-15 Thread Jay Lessert

On Thu, Aug 15, 2002 at 04:12:01PM -0600, Anne M. Hammond wrote:
 Hello,
 
 I have configured amanda 2.4.3b3 to use the DLT8000 in the
 library.
 
 The next step is to get the changer working.  I wasn't
 able to get mtx working with the L20.
 
 If anyone if currently using the L20 library with amanda,
 or if you have suggestions on how to configure the tape
 library changer, I'd really appreciate it.

I'm currently running Sparc Solaris 8, L20 w/LTO, 2.4.2p2,
mtx-1.2.16rel, chg-zd-mtx.

Drop me a line off-list w/any specific questions, you can summarize
to the list later.

-Jay-

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: FW: Sun L280

2002-08-14 Thread Jay Lessert

On Wed, Aug 14, 2002 at 03:49:30PM -0500, [EMAIL PROTECTED] wrote:
 Hello,
 
 A little more information:
 
 I am running Solaris 8
 I have an A1000 connected to the 220R (works fine)
 I have installed the suggested patches for Amanda and L280
[clip]
 can help me here.  I have a sun e220r connected to a sun L280 and I have 
 a couple questions.  First of all, is there a way that I can confirm 
 that my server actually sees the tape library and the drive?

The low-tech way to make sure the box sees your new SCSI devices
is to do a clean halt to boot prompt then:

OK reset-all
OK probe-scsi-all

Confirm the target addresses are what you expected them to be.

Bring it back up with:

OK boot -r

FOR THE TAPE:

IIRC, the drive in that box is a DLT-7000, right?  For that drive, with
Solaris 8, you won't need to touch st.conf.  After the 'boot -r' you
should have /dev/rmt/0* (if you've never had a tape drive on this box
before), and you should be able to 'mt -f /dev/rmt/0n status'.  The
devices will be:

0l*:20G (DLT4000, compression off)
0m*:40G (DLT4000, compression on)
0h*:35G (DLT7000, compression off)
0*: 70G (DLT7000, compression on)
0c*:70G (DLT7000, compression on)
0u*:70G (DLT7000, compression on)

You can download a nice Solaris st.conf PDF from Quantum, but you don't
need it.

FOR THE CHANGER:
---
These days, for Solaris 8, I'm recommending sgen(7D) and mtx.  Even if
you don't end up using chg-zd-mtx, mtx itself is nice to have, and the
mtx source includes a nice little contrib/config_sgen_solaris script.

You end up with an sgen.conf something like:

device-type-config-list="changer";
name="sgen" class="scsi" target=0 lun=0;
name="sgen" class="scsi" target=1 lun=0;
name="sgen" class="scsi" target=2 lun=0;
name="sgen" class="scsi" target=3 lun=0;
name="sgen" class="scsi" target=4 lun=0;
name="sgen" class="scsi" target=5 lun=0;
name="sgen" class="scsi" target=6 lun=0;
name="sgen" class="scsi" target=7 lun=0;
name="sgen" class="scsi" target=8 lun=0;
name="sgen" class="scsi" target=9 lun=0;
name="sgen" class="scsi" target=10 lun=0;
name="sgen" class="scsi" target=11 lun=0;
name="sgen" class="scsi" target=12 lun=0;
name="sgen" class="scsi" target=13 lun=0;
name="sgen" class="scsi" target=14 lun=0;
name="sgen" class="scsi" target=15 lun=0;

(You don't *need* every possible target, of course).

This ends up making /dev/scsi/changer/cXtYd0.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: Sun L280

2002-08-14 Thread Jay Lessert

On Wed, Aug 14, 2002 at 05:38:32PM -0500, [EMAIL PROTECTED] wrote:
 Hello,
 
 I was able to see the changer and the drive, but mt says my drive is 
 either offline or no tape loaded (and is correct, no tape loaded).
 
 /dev/rmt/0n: no tape loaded or drive offline
 
 However, I was not able to get the sgen.conf to work correctly.  I 
 couldn't get the config_sgen_solaris script to work without a CVROOT 
 set.  Any ideas what I can do from here?Thanks...

Dude, just delete the line!  It doesn't do anything.  -Jay-

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: dumps too bigggg.....

2002-08-13 Thread Jay Lessert

On Tue, Aug 13, 2002 at 09:19:05AM -0500, Chris Johnson wrote:
 Now that the first dump somewhat worked I have more questions.
 Two of the file systems had errors dumping. I am attempting to 
 backup 2x 45GB and 2x 9 GB file systems. I'm using a DLT tape changer. 
 The errors came backs as:
 
 *hostname* /u lev 0 FAILED [dumps too big, but cannot incremental dump new disk] 
 *hostname* /u5 lev 0 FAILED [dumps too big, but cannot incremental dump new disk]

The usual way to cold start an amanda config with totaldisk > tapesize
is to start with the disklist mostly commented out, then uncomment a
few entries each day.

In your case, assuming your disk usage is normal (< 10% new
files/day, say), you've got enough tape for your 4 file systems once you
get into the rotation, and you could have commented out 1x9 and 1x45
the first night and uncommented them today.
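
A first-night disklist for that kind of cold start might have looked
something like this (disk names and the dumptype are made up), with
the commented entries turned back on the next night:

    # first night: one 45GB and one 9GB file system
    hostname  /u     user-tar
    hostname  /u2    user-tar
    # second night: uncomment the rest
    #hostname /u5    user-tar
    #hostname /u6    user-tar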

Moot point now, though.  :-)  Amanda will catch up tonight and start
balancing the level0 rotation tomorrow night, without you intervening
at all.

 did dump both a 9GB and a 45GB file system to the tape. I was hoping 
 that amanda would load the another tape to finnish the dump but only one 
 tape was used.

If you set 'runtapes 2', it'll use two tapes.  I did that once in a
cold-start situation, instead of the uncommenting trick, and it
worked great.  Deleted the runtapes entry the next day.

 The report emailed to me said that only 60% of the tape 
 was used. Why didn't the one of the other two tapes get used?

Because runtapes defaults to 1.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: hardware compression...

2002-08-13 Thread Jay Lessert

On Tue, Aug 13, 2002 at 01:38:14PM -0600, Scott Sanders wrote:
 OK I know that's a bad thing to say around here BUT...

Not around me.  I'm the lone HW compression advocate (in the right time
and place)...

 I'm backing up some Solaris 2.6 machines and need to be able to do a
 restore with nothing but the O/S CD-ROM.  Since it doesn't have gzip or
 any other compression software on the ROM I am just doing straight
 ufsdumps (level 0 every night) to tape using amanda. My question is,
 since the drive is handling the compression what tape length should I be
 specifying in my tapetype definitions? For example should I use 35000
 mbytes or 7 mbytes for a DLT-7000 with 35GB of native capacity? Or
 maybe something in between just to make sure I don't run out f tape?

If you're otherwise happy with SW compression, your protocol could
easily be:

1)  HW compression for the OS partitions.
2)  SW compression on everything else.

You boot CDROM, restore 1), reboot and restore 2).  (AFAIK, you have to
do something like this if you're going to run VxVM or Disksuite,
anyway.)

You definitely do not want to use 70GB for the tapetype length; you
will not get that much on OS partitions.

Until recently, I was running a DLT-7000, HW compression, Solaris 2.6,
and I used:

define tapetype DLT-7000HC {
    comment "DLTtape IV half-inch cartridge for DLT-7000, hardware compression"
    # assume compression ratio 0.58, length = 35000/.58 mbytes
    length 60344 mbytes
    filemark 8 kbytes
    # speed = 5000/.58 kbytes
    speed 8620 kbytes
}

This was tweaked to the max (I was stalling for as long as I possibly
could before buying a fancy new changer), and did roll off the end a
couple times in 6 months.  If I were you, and I was close to fitting on
35GB, I would use 35/.7 = 50GB.
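
Following that 35/.7 arithmetic, the more conservative definition
would look something like this (my numbers, not tested on your data):

    define tapetype DLT-7000HC-50 {
        comment "DLT-7000, hardware compression, assuming only 0.7 compression"
        # length = 35000/.7 mbytes
        length 50000 mbytes
        filemark 8 kbytes
        # speed = 5000/.7 kbytes
        speed 7143 kbytes
    }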

(FWIW, I've tested the LTO drive I'm using now (HP) on the same data,
and it gets a little better than 2X (.5) compression.  Definitely
better compression HW there than on the old DLT-7000.)

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



Re: amrecover problem

2002-08-11 Thread Jay Lessert

On Wed, Jul 17, 2002 at 05:19:16PM +0100, Mark Snelling wrote:
 I have a problem with amanda. It seems to back up my filesystems ok (except
 for the warning below). When I run amrecover, 2 of the filesystems I can
 browse through, but /mnt/data has the directory structure and NO files?! Has
 anyone any idea whats going on?
[clip]
 FAILED AND STRANGE DUMP DETAILS:
 
 /-- hades  /mnt/data lev 0 STRANGE
 sendbackup: start [hades:/mnt/data level 0]
 sendbackup: info BACKUP=/bin/tar
 sendbackup: info RECOVER_CMD=/bin/gzip -dc |/bin/tar -f... -
 sendbackup: info COMPRESS_SUFFIX=.gz
 sendbackup: info end
 ? gtar: ./home/dannyb/.Maildir/cur/1026897042.12092_1.hades,S=1602\:2,S:
 Warning: Cannot stat: No such file or directory
 | Total bytes written: 28790016000 (27GB, 1.5MB/s)

The STRANGE warning is solely due to some wacko thing your user dannyb
has down in his qmail delivery directory.  It should do no harm,
but you probably want to go clean it out.

 sendbackup: size 28115250

So *plenty* of files there, even if amrecover isn't letting you see 'em.  :-)

You can amrestore and confirm the data is actually there, that always
makes me sleep better.
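
A hypothetical spot check, assuming the image is still on last night's
tape (adjust the tape device and host/disk names for your setup):

    % mt -f /dev/rmt/0n rewind
    % amrestore -p /dev/rmt/0n hades /mnt/data | /bin/gzip -dc | /bin/tar -tf - | head

If file names scroll by, the data made it to tape and the problem is
only in the index.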

amrecover doesn't run 'gtar -tv' on the tape, it just unzips what's in
the index file (under /var/adm/amanda/config_name/index for me).  The
most common cause of bad indexes is bad versions of GNU tar.  Do:

% /bin/tar --version

You want to see 1.13.19 or 1.13.25.  If you don't, build & install 1.13.25
from:

http://www.funet.fi/pub/gnu/alpha/gnu/tar/

MAKE SURE you install it where amanda expects to find it!  If your tar
version is already good, I'm out of ideas.

[clip]
 DUMP SUMMARY:
                                  DUMPER STATS                  TAPER STATS
 HOSTNAME DISK       L  ORIG-KB    OUT-KB  COMP%  MMM:SS   KB/s MMM:SS   KB/s
 -----------------------------------------------------------------------------
 hades    /          0   988980    410048   41.5    7:08  957.6   7:08  957.1
 hades    /boot      0     4900      2848   58.1    0:03 1097.0   0:03 1027.5
 hades    /mnt/data  0 28115250  22508256   80.1  297:32 1260.8 297:33 1260.8

Nothing to do w/your problem, but you might want to try a columnspec:

columnspec "HostName=0:10,Disk=1:10,OrigKB=1:9,OutKB=1:9,DumpRate=1:7,TapeRate=1:7"

or something similar, makes the summary output nice & pretty.

-- 
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472


