Re: Laptop Backup Strategy

2004-10-27 Thread Jonathan Dill
I'm beginning to investigate using external USB 2.0 / Firewire drives to 
do backups of some systems.  The idea is that we could keep 2-3 spare 
PCs around, then if your computer is toast, we just ship it out for 
repairs, plug your external drive into one of the spare PCs, and rebuild 
the system from the external drive.

--jonathan


Re: Looking for tape drive suggestions

2004-10-27 Thread Jonathan Dill
I'm going to work for the Protein Data Bank, and we're seriously talking 
about using 1 TB flash drives for backups in the not-too-distant 
future.  It may take a few years to get down to a reasonable price 
point, however.

--jonathan


Bacula vs. amanda?

2004-09-28 Thread Jonathan Dill
I sent this from the wrong address initially, apologies if you 
actually get it twice :(

My experience is with amanda, but I will be taking over a Bacula
installation in about a month when I change jobs, see
http://www.bacula.org/  Has anybody used Bacula and have any comments on
how it compares to amanda?  I'll also have a bit more budget to play
with in my new role, so might consider just changing to a commercial
backup system, but I'll also be handling far fewer machines, which will
greatly simplify the ongoing maintenance of amanda anyway.
Thanks,
--jonathan


Re: Restoring an ArcServe tape on Linux without ArcServe

2004-09-08 Thread Jonathan Dill
Martin Hepworth wrote:
dd the entire tape to disk, so you've got the binary to play with^W^W 
inspect.
Exactly what I was thinking.  Next, I'd probably use split to break it 
down into manageable chunks and use strings and od commands such as 
od -c and hex editors to try to make some sense of the junk.  With 
luck, you might find some pattern, like the beginning/end of individual 
backups or files that you could use csplit to break the backup into 
more logical chunks.  With even more luck, you might figure out what 
compression algorithm was used, for example gzip compressed data 
almost always begins with the same character string, but I forget what 
it is, then you might be able to decompress the chunks or something if 
it isn't totally proprietary.
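For example, something along these lines (untested, and the tape device,
chunk size, and filenames are just placeholders for whatever your setup uses):

# copy the whole tape to disk
dd if=/dev/nst0 of=arcserve.img bs=64k
# break the image into 100 MB pieces that are easier to poke at
split -b 100m arcserve.img chunk.
# look for readable strings and eyeball the first few hundred bytes
strings chunk.aa | less
od -c chunk.aa | head -40
# gzip data starts with the two bytes 1f 8b, so a hex dump can hint at
# compressed regions (a heuristic only, it will also match by accident)
od -A d -t x1 chunk.aa | grep '1f 8b' | head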

--jonathan


Re: Restoring an ArcServe tape on Linux without ArcServe

2004-09-08 Thread Jonathan Dill
Jon LaBadie wrote:
On Wed, Sep 08, 2004 at 11:57:37AM -0400, Jonathan Dill wrote:
 

... .  With even more luck, you might figure out what 
compression algorithm was used, for example gzip compressed data 
almost always begins with the same character string,
but I forget what it is, 
   

 See /etc/magic for the list of magic numbers that file(1) uses.
 

On Linux, that's often buried in /usr/share somewhere these days; for 
example, I just checked on Mandrake 10 and it's /usr/share/misc/file/magic.  
rpm -q -l file will help on RPM-based systems, or locate (from 
slocate, if installed) might help find it.


Re: DVD Arrays (was Re: Is anyone using a dvd drive yet?)

2004-09-08 Thread Jonathan Dill
Good points.  For small backups, I think that removable firewire / USB 
2.0 drives would be a more economical and convenient option, using 
FILE-DRIVER to write vtapes onto the removable drives.  I still think 
DVDs are more suited to archival backups of a few GB, where you plan to 
store the discs and delete the data from disk.  Lately, I keep getting the 
feeling that people want a $50k backup solution but only want to pay $3k to 
get it, and then complain when it doesn't live up to their expectations.
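For anybody curious, the amanda.conf side of that is roughly the following
(just a sketch; the mount point and tapetype numbers are made up, and you
should double-check the FILE-DRIVER docs for the directory layout the file:
driver expects, including whether it wants a data/ subdirectory):

# treat a directory on the removable drive as the "tape"
tapedev "file:/mnt/usbdisk/vtape1"
tapetype REMOVABLE-DISK

define tapetype REMOVABLE-DISK {
    comment "vtape on a firewire/USB drive"
    length 120 gbytes    # whatever actually fits on the drive
}

# then label it like any other tape:
# amlabel Daily Daily-01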

Frank Smith wrote:
At one time I thought DVDs would be a good backup media, but I have
since changed my mind.  The main reason is capacity (or lack of it).
Unless you are backing up small amounts of data, it would take stacks
of DVDs to perform your backups.  Perhaps now that dual-layer burners
and the new Blu-ray format are emerging the pain threshold is rising,
but currently I can't see using 25 DVDs instead of a single 100 GB tape
(and those aren't even the largest tapes commonly available).
If your needs can be handled by a few DVDs, the easiest way to implement
it might be to use Amanda's file driver and then burn that to DVD.
Frank
 




restore from multiple holding disk files

2004-07-23 Thread Jonathan Dill
Hi folks,
I'm trying to help someone do a restore from a dump that is split into 
multiple chunks in holding disk files.  In this case, flushing to tape 
first is not an option.  I thought amrestore could do it, but then I 
read the manpage and didn't see a way to do it.

The only way that I could think of doing it is to use dd to strip the 
amanda header off the chunks and then concatenate them together into 
one, huge file.  The filesystem on the holding disk is XFS so the 
filesize should not be a problem.

Is there an easier way to do this?  The files are ufsdump images being 
restored to a Solaris machine, but the holding disk files are on a Linux box 
which does not have ufsrestore.
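If it comes down to brute force, the dd approach would look something like
this (untested; 32k is the amanda holding-file header size as far as I
remember, and the chunk filenames are made up):

cd /holding/20040723
# each chunk starts with its own 32 KB amanda header; strip it and
# glue the data back together into one dump image
for f in client._export_home.3 client._export_home.3.1 client._export_home.3.2
do
    dd if=$f bs=32k skip=1
done > /bigdisk/home.dump
# then get home.dump to the Solaris box (NFS, ftp, whatever) and run
# ufsrestore -ivf /bigdisk/home.dump over there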

Thanks,
--jonathan


Re: restore from multiple holding disk files

2004-07-23 Thread Jonathan Dill
Paul Bijnens wrote:
If split in chunks, just feed the first part; the rest is done
automatically (the name of the next part is in the header of
each holding chunk).
Hmm.  That's what I thought.  Finding the subsequent chunks must not be 
working correctly for whatever reason, then; I'll have to search for 
the e-mail from the person who was having the problem.

--jonathan


Re: backing up amanda server

2004-05-18 Thread Jonathan Dill
On Tue, 2004-05-18 at 10:14, Joe Konecny wrote:
 Am I correct in thinking that if I am backing
 up only the server running amanda then it makes
 no sense to use the holding disk?

Not exactly.  If you are backing up files from the holding disk, for
example in other directories, then it probably makes no sense to use the
holding disk.  For filesystems other than the holding disk, you may
still get better tape performance by using the holding disk because
dumps may stream to tape better than dumping directly to tape.

Offhand, I'd say use the holding disk.  If you really care, then try it
without the holding disk and compare the statistics with and without
using the holding disk.

In a nutshell, the 2 questions are: 1) How fast does your tapedrive
gobble up data? 2) How fast can the backup process feed data to the
buffer(s)?  The goal is to keep the tapedrive streaming as much as
possible, which means keeping a full buffer, which means your backup
process better generate data faster than the tapedrive can absorb it. 
Otherwise, every time the buffer goes empty, the tapedrive has to stop
and reposition, which takes time and may eat up space on the tape,
making for slower backups that use up more tape to store the same amount
of data.

If the amanda server is the only client, I'd probably turn on hardware
compression on the tapedrive, turn off software compression, and maybe
dump directly to tape.  If there are other clients involved, and you
want the bandwidth savings of software compression client-side (which I
do) then I'd stick to software compression for everything and use the
holding disk on the amanda server, since the software compression would
slow down the backup process and increase the chance of emptying the
buffer.

Why then use software compression for everything?  I'm not sure if it's
a good idea, or necessarily possible, to mix hardware compressed and
uncompressed data on the same tape, or to get amanda to somehow switch
the tapedrive between those modes.  I am sure that applying hardware
compression to data that is already software compressed will use at
least as much, and probably more (and possibly a lot more) tape.
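In amanda.conf terms, the two setups I'm talking about boil down to dumptypes
roughly like these (a sketch of the relevant bits only):

# software compression on the client, stage through the holding disk
define dumptype comp-client {
    program "DUMP"
    compress client fast
    holdingdisk yes
}

# hardware compression only: no software compression, and (for a
# single-client server) maybe no holding disk either -- just make sure
# compression is actually switched on at the drive
define dumptype hw-comp-only {
    program "DUMP"
    compress none
    holdingdisk no
}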

-- 
Jonathan Dill [EMAIL PROTECTED]



Re: backing up amanda server

2004-05-18 Thread Jonathan Dill
On Tue, 2004-05-18 at 10:08, Paul Bijnens wrote:
 And there are the cases when there are problems with the tapedrive
 (wrong tape, holiday period, operator couldn't make it in the office
 because he's sick etc.)
 Having (at least incremental) backups on holdingdisk is a nice
 fallback.

Oh yeah, that reminds me: if your average amanda runs are much smaller
than the size of a tape, then you might consider a strategy of usually
backing up to holding disk, and periodically running amflush to try to
get the tapes as full as possible.  I have one amanda setup that I run
that way.
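The cron side of that is nothing fancy, something like this (times, paths,
and the config name are just examples):

# crontab for the amanda user
# nightly dumps land on the holding disk
45 0 * * 1-5  /usr/local/sbin/amdump Daily
# flush to tape once a week, or whenever the holding disk gets full
30 8 * * 5    /usr/local/sbin/amflush -b Daily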

-- 
Jonathan Dill [EMAIL PROTECTED]



Re: Max File Size

2004-05-17 Thread Jonathan Dill
The limitation could be that the login shell is not compiled with
LARGEFILE enabled--that was a bug that plagued tcsh in various distros
for a while, though I haven't seen that bug lately.  In the past, I ended
up getting the Mandrake SRPM for tcsh and porting it into Red Hat to fix
the problem, or just using bash.

I think there's a way to use strace or strings to see if LARGEFILE
was enabled when the shell was compiled, but it's been so long I don't
remember--maybe someone else can chime in with the answer, or you could
dredge it up from the list archives.
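One quick-and-dirty check that I think works on Linux/glibc: a binary built
with large file support ends up referencing the 64-bit wrappers, so something
like this should show them:

# open64 / lseek64 / fopen64 in the dynamic symbols means LFS was enabled
nm -D /bin/tcsh | grep -E 'open64|lseek64|fopen64'
# or, if nm isn't handy:
objdump -T /bin/tcsh | grep -E 'open64|lseek64'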

On Fri, 2004-05-14 at 17:53, Josh Welch wrote:
 I am thinking that things are dying as a result of this friggin huge file, I
 am able to restore just fine from a smaller backup on a different machine. I

-- 
Jonathan Dill [EMAIL PROTECTED]



Re: location of amandahosts

2004-05-17 Thread Jonathan Dill
On Mon, 2004-05-17 at 16:01, Eric Siegerman wrote:
 On Mon, May 17, 2004 at 03:40:16PM -0400, Joe Konecny wrote:
  First install of amanda...  Freebsd 5.2.1, Amanda 2.4.4p2.
  I used bin and operator when compiling.
 
 I much prefer to create a new userid just for Amanda.  If it runs
 as bin, then it can write to a large part of the system (no

For compatibility with pre-compiled RPMs for Linux, I like to use amanda
UID 33 and disk GID 6, which keeps everything consistent across all
of the platforms that I back up.  GID 6 has read but not write access to
the disk devices for dump style backups (xfsdump, ufsdump, dump etc.)
and no users are members of that group.
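On a client that didn't come from an RPM, setting that up by hand is just a
couple of commands (a sketch; the home directory and shell are only what I
happen to use, and on most Linux boxes the disk group with GID 6 already
exists):

# groupadd -g 6 disk        # only needed on platforms that lack it
useradd -u 33 -g disk -d /var/lib/amanda -s /bin/bash -c "Amanda backup" amanda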

-- 
Jonathan Dill [EMAIL PROTECTED]



Re: your mail

2004-05-11 Thread Jonathan Dill
Gavin Henry wrote:

Sorry, forgot to put in a subject.

Yes, Fedora. Is there anything else like it for Linux?
 

I have periodically searched for ufsrestore-compatible software for 
Linux, but have been unable to find any yet.  In my case, I wanted to be 
able to index dumps from a Solaris amanda client on a Linux amanda server.

A few ideas, which might be useful for the future at least:  1) switch 
to GNUTAR (I was using GNUTAR but had ended up switching to ufsdump due 
to some problems); 2) there used to be a binary compatibility layer 
that let you execute Solaris executables on Linux, but maybe that has 
disappeared, or at least would probably require compiling a custom 
kernel; 3) look for source code for ufsrestore, but I don't think it's 
Open Source; 4) Can Solaris run under VMware on x86 architecture? 5) 
look for Open Source libraries that can parse ufsdump format, or ufs 
filesystem, and write your own code.

If #2 still exists (though I don't think so) there might be a Linux 
distro out there tailored to exploit that capability.

--jonathan



Re: FW: unsubscribe please

2004-05-11 Thread Jonathan Dill
As a stopgap, perhaps there is a way to turn off receipt of mail?  On
LISTSERV it was something like set nomail  Technically, you would
still be subscribed, but you would not receive any messages.

On Tue, 2004-05-11 at 08:57, Bryan K. Walton wrote:
 I'm another person who was magically re-subscribed yesterday.  Is
 there someone on this list who can explain what has happened?  Also,
 I'm not able to unsubscribe either.  Getting an error message back
 from Majordomo. 
-- 
Jonathan Dill [EMAIL PROTECTED]



Re: unsubscribe

2004-05-11 Thread Jonathan Dill
OK just for fun, I gave it a shot, and it looks to me like someone used
RCS (ci, co, rcsdiff etc.) to manage something within the Majordomo tree
and Majordomo is unhappy about having an RCS directory within the tree
because it is not a valid list.  That's what I suspect.  You're probably
going to have to: 1) store the *,v files in another directory; 2)
remove the RCS directory; or 3) figure out how to make Majordomo ignore
the RCS directory.

Why oh why do people still insist on using antiquated Majordomo when
there are a dozen or so better MLMs out there these days?  Try Mailman
for instance, available as an RPM on multiple Linux distros.

 --
 
  unsubscribe * [EMAIL PROTECTED]
  unsubscribe: unknown list 'RCS'.
 Your request to [EMAIL PROTECTED]:

-- 
Jonathan Dill [EMAIL PROTECTED]



Re: New to Amanda- discouraged by some absurd limitations..

2004-05-03 Thread Jonathan Dill
Which reminds me...If cost is a factor, now that FILE-DRIVER is an
option, RAID or removable hard drives may give you a better $/GB ratio
than tapes, and much more capacity than CD-R.  I think this is a very
good option for a single computer or small network like Justin described
in his original e-mail.  If you use removable drives or a RAID-1, you
might not need anything else, though it would be a good idea to still
dump more important files to tape or writable DVD media occasionally.  I
haven't investigated it, but I have heard of hot-swap external SATA and
firewire options which would be very good indeed.

250 GB removable drives could be a great option if you are backing up
large partitions, say up to 500 GB uncompressed, so that you could get
around dumps not fitting on a single tape without having to use RAIT
with multiple tape drives, or very expensive tape drives and media, or
split up dumps with (IMHO inefficient and CPU/IO intensive) GNUTAR.

In my case, I am using a 1 TB Snap Server 4500 in a RAID-5 configuration
and flushing mostly just the full dumps to 200/100 GB LTO (Ultrium-1). 
Since RAID-5 has less redundancy than RAID-1, I am more concerned about
having at least some dumps on tape since a 2-disk failure would mean
that all of the data on the RAID-5 would be gone.

On Sun, 2004-05-02 at 20:53, [EMAIL PROTECTED] wrote:
 In my Amanda experience, I was lucky enough to have a
 large holding disk area and a tape drive which failed
 spectacularly before even one backup was flushed. It gave
 me the opportunity to see how Amanda works. The most
 wonderful aspect was how happy she was to restore
 from the holding disk.

--jonathan



Re: New to Amanda- discouraged by some absurd limitations..

2004-05-03 Thread Jonathan Dill
Jon LaBadie wrote:
One of the concerns I have about disk-only based backup schemes is the
total loss of data.  If you encounter a 2-disk failure you lose not only
your most recent, but all your backups.  If a tape drive fails the data
can be read on another drive.  If a single tape goes bad, that is the
only set of backups lost.
 

If you're using multiple removable hard drives as tapes, I think that 
would mitigate the risk somewhat vs. disks that are online all the 
time, although spin-up seems to be THE most crucial moment in the life 
of any disk drive.  I think disks still aren't as reliable as tapes in 
terms of failure rates, but would expect a removable hard drive solution 
to fall somewhere between full-time disk drives and tapes.

--jonathan


Re: New to Amanda- discouraged by some absurd limitations..

2004-05-02 Thread Jonathan Dill
Use FILE-DRIVER and wait until you have enough files to fill up the 
CD-R.  Or use CD-RW as your tapes and keep several that you can rotate 
and re-use.

Oh yes, we have designed amanda specifically to satisfy your personal 
whims, pretty please don't reject it, it will so much hurt my feelings.  
IMHO this was a rather rude way to bring up the issue.  Why not just look 
for some other software if you didn't like it?  No one held a gun to 
your head and made you use amanda.  freshmeat.net lists several other 
packages specifically geared to making backups to CD-R as you describe.  
amanda is geared to backing up large networks, like Veritas or Legato 
without the very expensive licenses.  You can get it to work for a 
small, single computer, but it was not designed to do that, hence it may 
well not be the best tool for that job, nor does it promise to be.

Justin Gombos wrote:
I was looking forward to using Amanda to backup 4-5 machines on my LAN
and one over the Internet, but something seems incredibly stupid about
the way Amanda forces the user to operate.  Please tell me I'm wrong;
maybe I'm misunderstanding the documentation.  If I want to perform
daily backups to CDRs, and I expect to have around 10 megs of data
change per day, do I really have to waste an entire CDR every day?
 

--jonathan


Re: New to Amanda- discouraged by some absurd limitations..

2004-05-02 Thread Jonathan Dill
Justin Gombos wrote:
First of all, I did not know whether this was an issue, that's why I
posted here.  It was certainly proper to raise an Amanda
issue/question to the amanda-users mailing list.
 

Asking "Can amanda do X?" is one thing, but to complain of an "absurd 
limitation" is, frankly, insulting, unless that was a bad translation of 
something from another language.

OTOH-- if you want to talk about etiquette, you should try not to
ignore the mail-followup-to header on mailing list posts.
 

I don't see how the use of insulting language is equal to whether my 
e-mail client correctly handles mail-followup-to headers.

If you know of a multi-client GNU backup tool that both works over the
network and also uses the target media intelligently, please
advise.  Otherwise, someone might as well be holding a gun to my head
forcing me to use Amanda, because it seems to be the closest tool for
meeting this requirement.  The other tools I've studied lack the
automated across network ability.
 

As others have pointed out, 700 MB CD-R is probably not going to be 
adequate for backing up multiple clients across the network anyway.  By 
GNU do you mean covered under the terms of the GNU General Public License?  
Are you opposed to other free, or free-to-use-but-restricted, 
licensing?  Is the point that you want something free or very cheap, 
hence would not consider something like Arkeia?

--jonathan


Re: star and amanda

2004-04-08 Thread Jonathan Dill
Amanda doesn't use tar to write to the tape directly, so the ability of
tar or star to span tapes or not isn't the issue.  Amanda makes a dump
image, which has a header followed by whatever backup format was used
on the client OS.  The image is then written out to tape, even if
you are backing up direct to tape through amanda.  Besides, I swear that
I can remember doing tar to multiple floppies with regular old GNU tar
by specifying a volume size and the multi-volume flag or something like that.

The issue with amanda is whether or not you can span the dump images
across multiple tapes.  I thought there had been some progress in that
area, or at least some possible hacks using FILE-DRIVER or something,
though it still might not be completely straightforward.

On freshmeat.net the other day I think I noticed there were some other
GUI front-ends to amanda out there, but I have not checked them out. 
Also, Red Hat is including, and supposedly supporting, amanda on Red Hat
Enterprise, and may have made some advances in the configuration area.

I think a lot of GUI for backups tend to be a bit clunky and mask a
little too much of what they are actually doing in the background, but
then I am still a bit of a console cowboy.  Windows in general can drive
me crazy that way sometimes, when something is broken and there is no
easy, free way to get in and see exactly where it is choking except for
some cryptic, stupid error code.  Not having a GUI also weeds out some
of the people who don't have a clue from trying to use amanda :-)

In the Windows department, I'm looking into using either built-in backup
of XP Pro, or free software that came with my Snap for backing up
Windows, and then use amanda to backup the Windows stuff off the Snap in
the (hopefully rare) event of a 2-drive failure on the RAID-5.

On Thu, 2004-04-08 at 15:35, Galen Johnson wrote:
 Does/can amanda utilize star?  This would be one of the most useful 
 capabilities I can foresee for Amanda (quiet Windows users).

-- 
Jonathan Dill [EMAIL PROTECTED]



Re: Amanda backup strategy thoughts, full and incremental backups to tape once a week.

2004-04-06 Thread Jonathan Dill
Here's a strategy that I implemented about a month ago that is working 
pretty well so far:

1. run amdump every night to large RAID w/o tape, mix of full and incr
2. run script to separate images to amanda-incr and amanda-full
3. when amanda-full exceeds size of tape, run amflush
4. when RAID approaches capacity, flush some incrs to tape

Since it's a RAID, there is less chance it will go to crap than a 
regular disk partition, but it is still possible.  And if it does, I 
still have the full dumps on tape to fall back on.

To give you some gauge of capacities, I'm backing up about 300 GB of 
disks, have ~600 GB RAID5 (on a Snap 4500) and am using 100/200 GB LTO 
(Ultrium-1) tapes.  So far, it's mostly web servers, but I haven't moved 
everything to the new backup cycle yet.  I have flushed 2 tapes worth of 
full dumps, and accumulated about 167 GB of incrementals that are still 
on the RAID.

The goal and idea is that at least 90% of the time, I will only need to load 
one tape to do a restore, and the rest will come out of holding files.  
Even better, there is a good chance that the tape that I need will be 
the last one that I flushed, so I won't have to change the tape.

/snap/amanda-hd is the directory that amanda.conf looks for, which is 
actually a symbolic link to /snap/amanda-full.  Therefore, I just move 
the incremental dumps from amanda-full to amanda-incr, both actually on 
the same filesystem (a simple directory rename in the filesystem inode 
table).  If I need to flush incrementals, I just change the link 
amanda-hd -> amanda-incr and run the flush (this could be done more 
elegantly with a patch, but this is so easy, so why bother?)  In the 
cron job, I make sure amanda-hd -> amanda-full before amdump runs.
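In cron-script form the whole dance is roughly this (untested as written;
the paths are from my setup, and "move-incrementals" stands in for the
little rename script I posted separately):

#!/bin/sh
# make sure amanda sees the full-dump tree before the nightly run
rm -f /snap/amanda-hd
ln -s /snap/amanda-full /snap/amanda-hd
/usr/local/sbin/amdump Daily
# afterwards, shuffle the non-full images over to the incremental tree
/usr/local/sbin/move-incrementals

# and when I want to flush incrementals instead, by hand:
# rm -f /snap/amanda-hd; ln -s /snap/amanda-incr /snap/amanda-hd; amflush Daily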

I'm using NFSv3 to XFS filesystem through a dedicated 100baseTX FD 
connection, so I'm using a chunksize of ~600 GB, and that makes the 
rename script very simple since I don't have to worry about any backup 
getting split into chunks.  AFAIK NFSv3 and XFS both support 64-bit 
addressing, so I shouldn't run into any filesize limits.

--jonathan


Re: Avg Dump Rate and Compression

2004-03-24 Thread Jonathan Dill
There are a few more things to try.  First, there may be an mt command 
to set the default compression for the drive--that will at least help 
make sure if you use some new tapes, they will get started with the 
correct compression.  The tricky part is that some drives, such as 8mm, 
use the density to set the compression and ignore the compression 
command.  You can usually get some ideas by looking at the output of mt 
status.  Under Linux, the commands you want would be something like "mt 
defcompress off" and "mt defdensity"; then the drive will (hopefully) use 
those settings by default when you load a new tape, and for some drives, 
may actually override what's on the tape that you load.  I have 
sometimes put these commands into one of the system startup scripts, 
such as /etc/rc.d/rc.local on Linux.
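On Linux with mt-st, for example, the lines I would drop into rc.local look
roughly like this (the exact keywords vary between mt implementations, so
check your mt man page, and the density code is drive-specific):

# default to no compression on new tapes, and pin the density
mt -f /dev/nst0 defcompression 0
# mt -f /dev/nst0 defdensity 0x40    # density code depends on the drive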

Second, some drives have dip switches that you can set to hard-disable 
compression.  For example, I had a Sony DDS4 autoloader that had dip 
switches to do that.  At least, the dip switches make it non-ambiguous 
what is going on.

Third, some drives may have (usually Windows-based) firmware utilities 
that will let you set the settings of the drive, sort of like software 
dip switches.  You would just need some Windows computer with a SCSI 
card in it, it wouldn't even have to be a good SCSI card just to send 
the commands to the drive as long as you have the right type of cable.  
The bad part is you have to disconnect the drive, possibly move it, and 
hook it up to the Windows box.  Worst case, maybe you could get some 
sort of SCSI adapter for a laptop with Windows, such as PCMCIA, USB, 
Firewire, or even a Parallel-to-SCSI adapter.

Fourth, some tape drives like AIT or AME (Mammoth) supposedly try to set 
the compression dynamically based on the data and don't care whatever 
you do to try to stop it--you would have to go to the manufacturer and 
look up the specs for the drive to find out if it does that.

Lastly, when I have run into this problem, I usually have done this, but 
wonder if it could have caused problems:

# load the tape
mt compress off
amrmtape config label
amlabel config label

If you're going to be doing this to a bunch of tapes in an autoloader, 
it probably would be easier to script the dd command that Gene 
described, because you just dump the header to a temp file and the 
script doesn't have to try to parse what's in the temp file to figure 
out what the tape should be called.
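Scripted, one pass over a tape would look roughly like this (untested; it is
basically Gene's recipe quoted below in shell form, with the device name as a
placeholder):

#!/bin/sh
TAPE=/dev/nst0
# save the 32k amanda label block from the front of the tape
mt -f $TAPE rewind
dd if=$TAPE of=/tmp/label.bin bs=32k count=1
mt -f $TAPE rewind
# turn compression off (keyword may differ depending on your mt)
mt -f $TAPE compression 0
# write the same label straight back
dd if=/tmp/label.bin of=$TAPE bs=32k count=1
# push some zeroes through so the drive flushes its buffers
dd if=/dev/zero of=$TAPE bs=32k count=64
mt -f $TAPE rewind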

Good luck!

Gene Heskett wrote:

The only way I know of to fix such a tape is:
rewind it
read out the label to a scratch file using dd
rewind it again
turn the compression off with mt -f device whatever turns it off
dd that scratch file back to the tape using the non-rewinding device
dd at least enough data from /dev/zero to force the drive to flush its 
buffers.  This will force the drive to turn the header compression 
flag off and that tape will not be compressed again unless you turn 
compression back on.
 

--jonathan


Re: selfcheck reply timed out

2004-03-24 Thread Jonathan Dill
On Wed, 2004-03-24 at 11:30, Joshua Baker-LePain wrote:
  Remember I have the NIS stopped.
 
 Then you need to either fix NIS or define the amanda user locally on the 
 server.

My experience has been that on Linux, NIS (via ypbind) is unreliable and
tends to crash, so I always put the amanda user in the local passwd file
to make sure that backups don't fail even if ypbind craps out.  I think
part of my problem has been that I'm using SGI IRIX as NIS servers, and
Linux as NIS clients, and something between them is not 100% compatible.

If you really want to rely on NIS, I would suggest setting up some kind
of a watchdog to restart ypbind if it fails.  In fact, I think I am
going to look into that option right now, it would fix some of the
problems/complaints that I have had, like occasional problems with
people not being able to login.

-- 
Jonathan Dill [EMAIL PROTECTED]



Re: selfcheck reply timed out

2004-03-24 Thread Jonathan Dill
Maybe there are some zombie amanda processes hanging in the
background.  Try looking at ps to see if any processes need to be
killed.  Look at the files in /tmp/amanda on the server and the clients
to see if there are any clues.  Try rebooting and see if the problem
still persists.

On Wed, 2004-03-24 at 13:09, Pablo Quinta Vidal wrote:
 I've NIS running without errors and now the message is "selfcheck request 
 timed out"
 I had this problem earlier and when I solved this then came "selfcheck reply 
 timed out" and now again
 the old problem.
 I checked the help in docs/FAQ but it doesn't work.
 Help please and sorry if I'm tedious!!
 Thanks

-- 
Jonathan Dill [EMAIL PROTECTED]




Re: amanda and firewall

2004-03-23 Thread Jonathan Dill
Stefan G. Weichinger wrote:

on Tuesday, 23 March 2004 at 09:28 you wrote to amanda-users:

PB (Don't be afraid, you won't hear any noise during compilation, it's
PB quiet and easy  :-)  )
Seems to be the joke-of-the-week in here.
Quiet funny ;)
 

[sound of crickets chirping]

I saw that in a comment on the blog Witt and Wisdom and just had to 
steal it :-) so please no offense.

Joking aside, you could rpm -ivh the amanda src.rpm, and then hack the 
SPECS/amanda.spec file (usually in /usr/src/redhat or /usr/src/RPM) to 
change  the ./configure command, then you would be able to rpm -bb 
SPECS/amanda.spec to compile a new binary RPM with the --portrange 
option that you wanted.
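Concretely, something along these lines (the src.rpm filename and port
numbers are just examples, and the exact configure option names may differ by
version -- check ./configure --help first):

rpm -ivh amanda-2.4.4p1-1.src.rpm
cd /usr/src/redhat/SPECS        # /usr/src/RPM/SPECS on Mandrake
# edit amanda.spec and add, e.g.
#   --with-portrange=50000,50100 --with-udpportrange=850,859
# to the ./configure (or %configure) line, then rebuild and install:
rpm -bb amanda.spec
rpm -Uvh ../RPMS/i386/amanda-*.rpm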

--jonathan


Re: All estimates failed

2004-03-20 Thread Jonathan Dill
Just a thought, but I would probably boot off a rescue disk and check 
the filesystem consistency on the client with the failing estimates.  
I'd probably use Knoppix just because it is user-friendly and provides a 
comfortable GUI environment for poking around.

--jonathan

James Tappin wrote:

I've been having a similar problem in that as of 14 March all dump-based
backups on my main machine have been failing because of an all estimate
failed error. Tar-based backups are working correctly.
 




Re: flush full dumps only

2004-03-19 Thread Jonathan Dill
I have attached a very simple but effective script that I set up to
separate full and incremental holding disk files, so that full dumps can
be flushed to tape while incrementals remain on disk. The script is
almost so simple as to be a joke, but I have no idea what bugs may be
lurking in it.

WARNING: The script is provided as-is with no warranty express or
implied. If you screw up your amanda backups do not come crying to me.
To really make the script work, you will need to change the values of $r
and $i for your configuration, run it to see what it will do, and then
remove the echo from the beginning of the two lines. If you can't
figure out what that means, then you probably shouldn't be screwing
around with this stuff anyway and should forget about it.

For restores, I think I can flush the latest batch of full dumps, then
use a modified amanda.conf that looks in the directory where the
incrementals are stored, but I am not sure how amanda decides where
holding disk files are located.  I might have to use another script to
move the incrementals back to the normal holding disk tree, but I will
cross that bridge when I come to it, to use a rather hackneyed
expression.

-- 
Jonathan Dill [EMAIL PROTECTED]
#!/bin/tcsh -f
# holding disk root (what amanda.conf points at) and the incremental tree
set r=(/snap/amanda-hd)
set i=(/snap/amanda-incr)
# mirror the directory layout of the holding disk under the incremental tree
cd $r
set dl=(`find . -depth -type d -print`)
cd $i
foreach d ($dl)
  if (! -d $d) then
    echo mkdir -p $d
  endif
end
# move every non-full dump file (anything not ending in ".0", i.e. not a
# level 0 dump) across, keeping the same relative path
cd $r
set fl=(`find . -type f \! -name "*.0" -print`)
foreach f ($fl)
  echo mv $f $i/$f
end


Re: High CPU running backup

2004-03-19 Thread Jonathan Dill
I would guess that the ufsrestore is making an index of one of the 
dumps.  If you don't care about interactive amrecover you could make a 
dumptype that doesn't do index that should eliminate the ufsrestore 
process.  Running fewer dumps in parallel should help, too.
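Something along these lines in amanda.conf, for example (a sketch; "global"
is whatever parent dumptype you already use, and maxdumps is per host):

define dumptype ufsdump-noindex {
    global                # hypothetical parent dumptype
    program "DUMP"
    index no              # skip the ufsrestore index pass
    maxdumps 2            # dump fewer filesystems in parallel per host
}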

I don't know a lot about Solaris.  0.0% swap should mean nothing is 
paging, but with 729M swap used, I'd still check out high-memory 
processes, maybe something is leaking memory, but any paging should 
still show up in swap and not kernel I would think.  It looks to me like 
it wouldn't take a lot to make the system start paging heavily from the 
state that it is in with only 13M of real memory free.

Simon Lorenz wrote:

Any suggestions for stopping this would be much appreciated.

load averages:  2.21,  2.21,  2.02
17:48:54
142 processes: 131 sleeping, 5 running, 1 zombie, 4 stopped, 1 on cpu
CPU states:  0.0% idle, 13.7% user, 76.9% kernel,  9.3% iowait,  0.0% swap
Memory: 768M real, 13M free, 729M swap in use, 3911M swap free
  PID USERNAME THR PRI NICE  SIZE   RES  STATE    TIME    CPU  COMMAND
10563 amanda     1  22    0 2232K  936K  sleep    5:32  35.03%  sendbackup
10476 amanda     1   0   19 2640K 1920K  run      2:26  15.04%  dumper
10568 amanda     1  32    0   11M 9928K  sleep    1:49  12.55%  ufsrestore
10572 amanda     1  48    0   11M 2864K  run      0:47   5.20%  ufsdump
10571 amanda     1  53    0   11M 2864K  run      0:48   5.16%  ufsdump
10573 amanda     1  48    0   11M 2872K  run      0:47   4.64%  ufsdump
10574 amanda     1  36    0   11M 3072K  sleep    0:32   3.53%  ufsdump
10570 amanda     1  48    0   11M 7520K  sleep    0:20   1.89%  ufsdump
10646 root       1  58    0 2104K 1208K  cpu      0:01   0.35%  top
 



Re: I don't think Amanda is going to work for my environment...

2004-03-18 Thread Jonathan Dill
On Thu, 2004-03-18 at 14:31, Joshua Baker-LePain wrote:
 It is true that it takes some wedging to make amanda work in a 'doze heavy 
 environment -- that's simply not what it was designed for.  As for advice 
 on commercial solutions, this isn't exactly the best place to ask.  ;)

If you're not on a shoestring budget, I'd say knock yourself out, there
are some great (but usually expensive) packages out there that are very
easy to set up and maintain, though this list is not the best place to
get that sort of information.  Veritas, Arkeia, and Legato are three
packages I would definitely check out if money was not an object.

You could, however, consider a sort of hybrid approach to handle the
'doze machines, which is what I am thinking about right now.

In a nutshell, you would use some other mechanism to make the 'doze
machines back themselves up to a share, and then you would use amanda to
backup that share periodically.

Windows XP Professional has a built-in backup/restore function, which is
one option that I plan to try.  I am using a 1 TB Snap Appliance 4500
NAS which came bundled with Backup Express for Guardian OS and an
unlimited number of Windows clients, so that is another option that I
plan to try.  The Snap itself runs Guardian OS, which is some flavor of
Linux, and I have it NFS mounted on my amanda server, and also use it as
my holding disk for amanda backups. Then I can use GNUTAR with amanda to
backup the directory where the 'doze machines have taken a dump.
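The amanda side of that is just an ordinary GNUTAR entry pointed at the
directory the 'doze machines dump into, roughly (hostname, path, and dumptype
name are placeholders):

# disklist on the amanda server
amandaserver  /snap/windows-backups  comp-user-tar

# matching dumptype in amanda.conf
define dumptype comp-user-tar {
    program "GNUTAR"
    compress client fast
    index yes
}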

A lot of Windows people I have talked to lately are using a 2nd hard
drive on USB or Firewire to do backups, or in a cold-swap cartridge. 
Norton Ghost is a popular application to do that, also XP Pro
backup/restore, and even robocopy (is that Win2K Pro?) with varying
degrees of automation. DVD+R with 4.7 GB capacity is popular for
longer-term storage.

The latest Norton Suite 2004 Pro looks like it makes it pretty easy to
automate Ghost backup to a 2nd disk or to DVD+R (I have it on the
Windows partition of my home computer) but I mainly use Windows for
games and don't care enough about losing that data to have put the
effort into automating backups.

--jonathan



flush full dumps only?

2004-03-17 Thread Jonathan Dill
Has anyone tried the approach of only flushing full dumps and leaving 
incremental dumps on disk? I think that this would have roughly the same 
effect as doing full dumps out of cycle, but I have not had luck so far 
with the out-of-cycle approach. I think that it should be fairly easy to 
script: create a parallel directory tree and temporarily move out the 
incremental dumps that you don't want to flush, for example.

The idea would be to leave all of the incremental dumps available on a 
huge disk, but keep all of the full dumps on tape, so at most one tape 
should be required for any restore. Since the full dumps take up the 
most space, clearing them out should leave more space for a longer 
history of incrementals. At least that is the idea.

--jonathan


Re: filesystem limit?

2004-03-16 Thread Jonathan Dill
Geoff Swavley wrote:

I was wondering if anyone could tell me why amanda seems to split
my filesystem into 2 holding files? An error has occurred so these files are
 

It sounds like you have other, unrelated problems, but check the setting 
of chunksize in amanda.conf, that is usually what determines dumps 
getting split up into multiple holding files.  In any case, it is not 
determined by the filesystem--you can give amanda a chunksize that is 
too large for some filesystem type, in which case it will keep writing 
until it gets an I/O error.  The holding files are concatenated during 
writing to tape, at which point the holding file size becomes irrelevant.

Most modern filesystems can support file sizes up to the TB range--I'm 
using a chunksize of 256GB on Linux XFS; it could be larger, but why go 
larger than the size of the holding disk?  Exceptions include (but are 
not limited to) ext2, SGI efs, iso9660, FAT(12|16|32), and NFSv2; I'm not sure 
about ufs, hfs, or NTFS, and I'm sure there are others.  2 GB and 8 GB are 
typical limits.  But as I said, AFAIK amanda will not automagically set 
the chunksize for the holding disk filesystem type, it would just keep 
writing until it got an I/O error, but then there have been a lot of 
changes from 2.4.2 to 2.4.4 that I am just catching up with now.
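For reference, the relevant amanda.conf block in my setup looks roughly like
this (the sizes are just what I happen to use):

holdingdisk hd1 {
    directory "/snap/amanda-hd"
    use 550 Gb          # leave some headroom on the RAID
    chunksize 256 Gb    # well below any filesystem limit that matters here
}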

--jonathan


Re: Disaster Recovery

2004-03-11 Thread Jonathan Dill
Joshua Baker-LePain wrote:

I think what you mean is what files do you need in order to save the
complete current state and history of the backups, although I'm guessing 
as your request was overly terse.  If that's right, you need:

the config dirs (where your amanda.confs are)
the infofile dirs as defined in your amanda.confs
the logdirs as defined in your amanda.confs
the indexdirs as defined in your amanda.confs
 

How I deal with it is that I just rsync the amanda account home 
directory to another server periodically.  If you wanted to, you could 
probably set it up as a daily cron job or something.
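The cron job version is a one-liner, something like this (paths and the
destination host are made up):

# nightly copy of the amanda config/index/curinfo tree to another box
15 7 * * *  rsync -a --delete /var/lib/amanda/ backuphost:/backups/amanda-server/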

--jonathan


Re: *Slow* amrestore

2004-03-11 Thread Jonathan Dill
Joshua Baker-LePain wrote:

I'm reading some somewhat large (14-18GB) images off of AIT3 tapes, and 
it's taking *forever*.  Some crude calculations show it coming off the 
tape at around 80 KB/s, whereas it was written out at 11701.6 KB/s.  The 
tapes were written in variable block size mode.  What's the best way to 
read these images more quickly?
 

Are you piping the amrestore output, say to uncompress and extract 
files? Maybe the extraction process is too slow, and causes the tape to 
have to stop and reposition while the pipe is emptying out. I wonder if 
you could put a bigger buffer in the pipe somehow, that might help, say 
64 MB to begin--I remember seeing a cat-like program that could buffer a 
pipe, I think it was with Linux software for doing backups to CD-R.

I would try amrestore -c to just dump the image off the tape, and then 
do the uncompress and extraction separately, but you will need enough 
disk space to do it.  Worst case, you could try amrestore -r and use 
dd bs=32k skip=1 if=dump-file to skip over the header, and then pipe 
that to the uncompress and extract processes.
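Spelled out, the two-stage approach would be something like this (untested;
the tape device and image filename are placeholders, and it assumes the image
is a software-compressed GNUTAR dump):

# 1. pull the raw (still compressed) image off the tape as fast as it will go
amrestore -c /dev/nst1 clienthost /home
# 2. then unpack at leisure from disk, skipping the 32k amanda header
dd if=clienthost._home.20040310.0 bs=32k skip=1 | gzip -dc | tar xvf -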

--jonathan


Re: *Slow* amrestore

2004-03-11 Thread Jonathan Dill
Hmm. Check also mt status to make sure the drive thinks that the 
blocksize is 0; if not, change it with mt blksize. The files will be 
perfectly fine with bs=16M. gzip and/or tar will probably give a warning 
bitching about the nulls at the end, but it won't have any effect on the 
restore, the files will still be intact.

In some desperate cases in the past, where some mung brain had data on 
Sony Video 8 tapes (!) I ended up mucking around with mt blksize and 
just using cat to try to grab the stuff off the tape without any 
blocking, crazy as that sounds, but that was also on SGI IRIX with 
Exabyte 820 Eagle drives. It was a pain, and there were loads of 
errors, but I got most of the data off the tapes.

--jonathan

Joshua Baker-LePain wrote:

On Thu, 11 Mar 2004 at 2:21pm, Jonathan Dill wrote

 

I would try amrestore -c to just dump the image off the tape, and then 
do the uncompress and extraction separately, but you will need enough 
disk space to do it.  Worst case, you could try amrestore -r and use 
dd bs=32k skip=1 if=dump-file to skip over the header, and then pipe 
that to the uncompress and extract processes.
   

The current slow-running process is 'amrestore -r'.  I'm guessing it's 
forcing the block size that's the culprit here -- the amrestore man page 
isn't quite clear, but it seems to say that if you don't specify -b, 
it'll default to 32k.  I tried just 'dd if=/dev/nst1 of=img.raw' but it 
said dd: reading `/dev/nst1': Cannot allocate memory.  tapeinfo says 
this about blocksizes for the drive:

MinBlock:2
MaxBlock:16777215
Maybe I should try dd with bs=16M?  But will that pad the output file with 
an unacceptable-to-tar chunk at the end since the tapefile is unlikely to 
be an exact multiple of 16M?

Thanks.

 




Re: *Slow* amrestore

2004-03-11 Thread Jonathan Dill
Have you taken a look around in /proc/scsi? /proc/scsi/scsi should give 
you some basic information, and the subdir for your driver should give 
more details, such as what transfer rate the drive is negotiated at, 
for example /proc/scsi/aic7xxx/0 for an Adaptec 2940 series.  Perhaps 
there was a bus problem and the device hooked up at 10 MHz async or 
something. Reboot and get into the SCSI bios config page and make sure 
everything in there is kosher.  I have even seen a few terminators go 
bad, so such things are not impossible, even though the drive was 
writing fine before. Since dd is having just as much problems, I would 
suspect that amanda is not the problem per se. Could be kernel, could be 
hardware, could be the tape cartridge, could be some stupid, proprietary 
Sony thing, dip switch settings on the drive, hardware compression settings.

In some cases, I have taken tape drives and hooked them up to a Windows 
box so I could run proprietary diagnostics, take a look at the tape 
drive's internal logs, update the drive firmware.  I think the weirdest 
stuff was using a 9600 bps null modem serial terminal connection to 
connect to the embedded console on an Exabyte 10h library, very strange 
set up. I have also pulled out SCSI cards and put them on a Windows box 
to check out the SCSI BIOS and load new firmware.

Run a cleaning tape through just in case--with some drives, I had the 
problem of drives actually needing cleaning more frequently than the 
drives thought they needed it. Exabyte Eliant drives were notorious for 
that--I would run into these problems where I had to run a cleaning tape 
through 5-6 times in a row (even though the drive said it didn't need 
cleaning) and then finally it was fine, but in those cases, I was also 
seeing read and/or write errors. The Eliants just based cleaning on 
hours of read/write time, which turned out not to be often enough, so 
crud would accumulate over time--no doubt because amanda was streaming 
so efficiently that the drive was actually processing more length of 
tape than ordinarily anticipated :-)

--jonathan


client with private address

2004-03-09 Thread Jonathan Dill
I want to backup a client on a private network 10.160.32, but amanda 
seems to be looking for a DNS to resolve the IP, and then do a reverse 
lookup on the IP to get the hostname.  Is there a way to do this without 
setting up a DNS for 10.160.32?  I wish amanda would just believe the 
address that I put in disklist instead of double-checking with a DNS.  
Does she not trust me?  Am I not trustworthy?

I guess I'll pick at the docs for gethostbyaddr and gethostbyname calls 
to see if there is any way to modify their behavior, or if there is a 
different routine that I could plug in that would check /etc/hosts 
first, or something like that.

--jonathan


Re: client with private address

2004-03-09 Thread Jonathan Dill
Nevermind--I just added the address of the server to /etc/hosts on the 
client and that fixed the problem.

There is some very useful information in man gethostbyname and man 
gethostbyaddr.  /etc/host.conf may be consulted for the resolution order for 
those two calls.  The default is to check bind first, but you can 
override that by putting

order hosts, bind

in /etc/host.conf.  At least that's how it works on some flavors of Linux.

Jonathan Dill wrote:

I want to backup a client on a private network 10.160.32, but amanda 
seems to be looking for a DNS to resolve the IP, and then do a reverse 
lookup on the IP to get the hostname.  Is there a way to do this 
without setting up a DNS for 10.160.32?  I wish amanda would just 
believe the address that I put in disklist instead of double-checking 
with a DNS.  Does she not trust me?  Am I not trustworthy?

I guess I'll pick at the docs for gethostbyaddr and gethostbyname 
calls to see if there is any way to modify their behavior, or if there 
is a different routine that I could plug in that would check 
/etc/hosts first, or something like that.

--jonathan




Re: client with private address

2004-03-09 Thread Jonathan Dill
A common shorthand for specifying a Class C subnet is to leave off the
4th number, basically the same thing as 10.160.32.0, 10.160.32.0/24, or
10.160.32.0/255.255.255.0 etc.

On Tue, 2004-03-09 at 15:26, Hans-Christian Armingeon wrote:
 On Tuesday, 9 March 2004 at 20:11, you wrote:
  I want to backup a client on a private network 10.160.32, but amanda 
 I think that an IPv4 address has four address parts, you have only three.

-- 
Jonathan Dill [EMAIL PROTECTED]
jfdill.com



Re: mixed full / incremental backup...

2004-03-09 Thread Jonathan Dill
Here's a very simple solution:

1. reserve 100 in amanda.conf (or comment out reserve line)
2. leave out the tape during the week (*or change dev to /no/such/tape)
3. run amflush before Friday backups

In this case, amanda should try to do degraded mode backups during the 
week while there is no tape, only incrementals.  On Friday, you put a 
tape in the drive before the backups, and I think it should try to catch 
up on some full backups.

It probably won't give you perfect incr/full separation, but it would be 
good enough for me.  Naturally, you will need sufficient holding disk 
space.  You will have to keep an eye out for messages warning about 
overwrite last full dump.

* do not change tapedev to /dev/null because that activates a test 
mode where amanda will happily dump all data into the bit bucket rather 
than decide there is no tape.

--jonathan

Jon LaBadie wrote:

What about a single config. Set it up with a long dumpcycle.

Use dumptypes that specify always incremental, or no-full,
or strategy incr or ???  I.e. something that reduces the
possibility of full dumps.
 



Re: client with private address

2004-03-09 Thread Jonathan Dill
Hi Frank,

The documentation for gethostbyaddr and gethostbyname explained how each 
call goes about looking up addresses.  At least under Linux, there were 
several opportunities to override the default behavior and make the 
routines consult /etc/hosts first.

In my particular case, there are only two private addresses that I 
need to handle due to the amanda server and client having a direct 
cross-over connection, for an unrelated purpose.  For two IP addresses, 
it really didn't seem worth it to set up a local DNS with forward and 
reverse domains.

As for address spoofing, there are basically 2 scenarios that I can 
think of:

1. idiot hacker causes some backup(s) to fail on one night, maybe a DoS, 
but that's about the extent of it

2. hacker who knows about amanda, and has the right ports open to 
intercept and capture the stream, possibly to steal sensitive data

#2 would probably be loads easier to do with just a run of the mill 
sniffer that can capture streams, and the activity would be much less 
likely to be detected.  I can't see the benefit of impersonating the 
amanda server, besides which it would cause loads of errors and send up 
red flags that something was going on.  Not to mention that if your data 
is all that sensitive, you should really be encrypting the data on the 
client and not sending it in the clear across the network, and the 
systems should be behind a tight firewall if not disconnected from the 
internet altogether.

I really can't imagine DNS spoofing being that big of a risk with 
respect to amanda.  Having the addresses hard coded in /etc/hosts and 
looking at that and not the DNS should be more secure than relying on 
DNS lookups crossing the network, which could be spoofed.

Frank Smith wrote:

I suspect that Amanda was designed to use hostnames in their disklists and

.amandahosts, and names are very easy to spoof, so the lookups are done
to verify that the correct host is connecting.  I'm sure the code could be
modified to not do lookups if given an IP, but having proper DNS has many
other benefits than just helping Amanda.
 



Re: Strange DNS lookup problems ... I think ...

2004-03-08 Thread Jonathan Dill
I read the last e-mail about this, but lost it, but I think I remember 
the basic details.

First, I would try setting up some sort of nameservice caching on the 
client and server as a work-around.  Some flavors of Linux have a 
caching-nameserver package that sets up the correct bind files for you, 
then you just put

nameserver 127.0.0.1

at the top of /etc/resolv.conf.  tmdns is supposed to be a more 
lightweight caching nameserver of some sort, but I haven't had good luck 
with it so far.

nscd is a more general-purpose nameservice caching mechanism that can 
also cache NIS and LDAP data, but I think there may be a kernel piece to 
it that you also need compiled into the kernel.  SGI IRIX has nsd 
which is similar to nscd.  If you use nscd or nsd, check 
/etc/nsswitch.conf for the order that name services will be checked for 
hosts.  In particular, you may need to delete nis+ or nisplus if you 
don't have NIS+ running on your network--it is often in there as one of 
the defaults, but can cause the host resolution process to crap out at that 
point if you don't have NIS+ available on your network.

Second, I would check interface statistics on the client, server, 
nameserver, and switches and routers if possible.  You want to check for 
collisions and/or errors, and keep an eye out for duplex mismatch or 
auto-negotiation problems related to certain hardware.  Watch out for 
misbehaving mini-hubs or mini-switches along the way.

I have had problems with interface hotplug on Linux and certain cards 
not detecting a link or auto-negotiating correctly, eg. 3c509B.  I had 
to put

MII_NOT_SUPPORTED=yes

in /etc/sysconfig/network-scripts/ifcfg-ethX where X is the number of 
the interface, to explicitly disable hotplug for that adapter.

--jonathan


hybrid theory

2004-03-08 Thread Jonathan Dill
Is anyone successfully using a mixed strategy of backups to both disk 
and tape?  In particular, I have a 1 TB Snap 4500 and an LTO tape 
drive.  I have thought about a few different ways to go about it, but 
would appreciate suggestions from someone who has tried this approach.  
The Snap also has built-in snapshot capability, and I wondered if there 
was a way to make use of that in combination with backups to disk.

1. normal amanda backup to holding disk, flush when disk is getting full
2. do full dumps to tape out of cycle and incrementals to disk with 
file-driver
3. #2 but also use snapshot technology to keep an even longer history of 
incrementals
4. use file-driver, and occasionally archive some of the tape-files to 
real tape

Any other ideas?

Thanks,
--jonathan


Re: Strange DNS lookup problems ... I think ...

2004-03-06 Thread Jonathan Dill
Resolving IP address to a hostname (reverse lookup) is the part that 
looks broken, check the reverse domain in the DNS i.e.

host 172.24.16.86
or
nslookup 172.24.16.86
The error says *hostname* lookup failed, not address lookup failed.  
Someone else reported a similar problem a few days ago, and he reported 
that there was a typo in the reverse domain file of the DNS, and that 
fixing that fixed the problem.

I wonder though, Why does amanda need to do a reverse lookup?  You give 
amanda a hostname in the DLE, and it looks up the IP address, which 
should be adequate for amanda to do what it needs to do.  But then it 
tries to do a reverse lookup for the hostname based on the IP address, 
and gives up if that fails.

It would be nice if the reverse lookup could be avoided.  In principle, 
yes, the reverse table in your DNS should be correct, but failing backup 
seems like an expensive DNS diagnostic.

 planner: ERROR intrap:  [addr 172.24.16.86: hostname lookup failed]
   

--jonathan


Re: server lookup failing

2004-03-03 Thread Jonathan Dill
I have run into this problem before with NIS when ypbind crashed on 
some of the clients--this has been a chronic problem for me with Linux 
talking to IRIX NIS servers.  Consequently, I put the amanda host IP, 
amanda user and group IDs in the local files so that ypbind crashing 
would not muck up the backups.

amanda doesn't cache any IPs AFAIK, but the OSes of the clients probably 
do.  Also, special case if you are using NIS--you have to make sure 
hosts is updated in NIS and you have re-run ypmake manually, plus in 
IRIX 6.X you may need to do the nsadmin commands on the NIS master.  
Also, check the clients to see what's in /etc/nsswitch.conf if it 
exists--that should tell you what data sources are checked and in what 
order.  It's interesting that host works and amanda does not--perhaps 
they use different host lookup mechanisms, and you could get clues from 
looking at the source code of both programs.

As for the clients, it's going to vary depending on the client OS, but 
here are a few I can think of off the top of my head:

# IRIX 6.X
nsadmin flush
nsadmin restart
# Linux with cacheing-nameserver
/etc/rc.d/init.d/named restart
# Linux with tmdns
/etc/rc.d/init.d/tmdns restart
George Kelbley wrote:

After changing the IP on my amanda server, many of my clients fail 
amcheck with host name lookup fail.  The server is resolved via dns 
(not in /etc/hosts), running host on the clients returns the proper 
IP.  Re-booting the client fixes the problem, but I'd rather not 
reboot all my clients.
So, does amanda cache the server IP somewhere that I can flush?  I've 
been looking around but haven't found anything.

thanks




Re: gnutar in configure

2004-03-02 Thread Jonathan Dill
If you're backing up more than one architecture, I find that it's nice 
to set things up so that you can have the same path to gnutar on all of 
the architectures.  That way, you can run amrecover on any machine and 
it will find a valid path to gnutar.  Normally, I just create a symbolic 
link to the real path to gnutar in /usr/local/etc/amanda/bin and use 
--with-gnutar=/usr/local/etc/amanda/bin/tar.  If you're only backing up 
one architecture, and this suggestion causes a brain hemorrhage, then 
forget about it and just stick to what you were doing.
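Concretely, it is just this on each machine (the real tar paths are only
examples, adjust per platform):

mkdir -p /usr/local/etc/amanda/bin
ln -s /bin/tar /usr/local/etc/amanda/bin/tar                 # Linux
# ln -s /usr/freeware/bin/tar /usr/local/etc/amanda/bin/tar  # IRIX, say
# then build amanda with:
# ./configure --with-gnutar=/usr/local/etc/amanda/bin/tar ...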

Maybe it seems a bit esoteric, but in practice I have found it to be 
very useful.  Very often, I have restored files for an IRIX workstation 
to the holding disk of the amanda server, which is Linux, and then pick 
and choose which files I really want to transfer to the workstation with 
rsync, rather than just restoring everything to the IRIX workstation 
directly.  That way, I can be careful not to overwrite files, or force 
overwriting corrupt files with newer timestamps, whatever it is that I 
need to do.  I think it gives me an extra level of control to help avoid 
making a mistake.

On another note, maybe things have changed, but I once found that gnutar 
incremental backups sucked performance-wise, would make machines pretty 
much unusable during estimates and dumps.  Normally, this would not 
matter, but you're talking University with eccentric grad students 
working at 3am and such who complain about these things.  I have 
migrated most things to XFS filesystem and use xfsdump on Linux and 
IRIX--a process that I started when XFS went Open Source (around Red Hat 
7.0) and I got tired of waiting for the problems with dump for ext2fs to 
get sorted out.  Machines are still very usable with xfsdump and 
software compression running in the background, and finish faster than 
gnutar dumps.  xfsdump estimates are very fast, comparatively speaking.

However, with faster CPUs, faster disk interfaces, and filesystems like 
Reiserfs, perhaps the performance of gnutar has improved.

--jonathan

Frank Smith wrote:

Then you can run configure --with-gnutar=/usr/local/bin/tar, and make
sure that that path exists on your clients, and is gnu tar of the
proper version on all of them as well.
 



fake install path

2004-03-01 Thread Jonathan Dill
I vaguely recall that there is variable that you can pass to 'make' to 
install to a different root, similar to what happens during building a 
binary RPM, for example:

make install VARIABLE=/tmp/amanda-2.4.4p2

The result is that you end up with all of the amanda files in 
/tmp/amanda-2.4.4p2/usr/local and so on.  Then you can do something like 
this:

cd /tmp/amanda-2.4.4p2
tar cvzf amanda-tarball.tgz .
Then you can ftp the tarball to a different machine, cd / and extract 
the tarball.  Does anybody remember what VARIABLE is supposed to be or 
how to do this?

My goal is to set up a Snap Appliance 4500 as an amanda server--Yeah, I 
have already seen Preston Smith's page about setting up amanda client on 
a Snap 4400, and I also know that Snap Appliance won't take 
responsibility if I somehow manage to screw up my Snap.

Thanks,
--jonathan


Re: fake install path

2004-03-01 Thread Jonathan Dill
Thanks, that worked:

make install prefix=/tmp/amanda-2.4.4p2/usr/local
cd /tmp/amanda-2.4.4p2
tar cvzf ../amanda-tarball.tgz .

Paul Bijnens wrote:

The prefix= can be specified to override the configure prefix
like:
  make install prefix=/tmp/amanda-2.4.4p2



Re: dds3 or dds4?

2001-06-27 Thread Jonathan Dill

Tom Strickland wrote:
 We're on the verge of ordering a DDS drive (this week). It'll probably
 be an HP Surestore - but the question is DDS3 or DDS4? There's the
 obvious difference in capacity, but beyond that are there any other
 differences? Speed is an obvious one - any others?

Keep in mind that amanda can't span individual backups across multiple
tapes and think about your requirements.  If you buy a new disk today,
18 GB is probably the smallest size that you will find easily available,
and that number keeps going up.  I work in an academic research lab
where now some people are getting 75 GB and 180 GB disk drives, and with
new instruments the data sets are increasing in size rapidly.  Even the
DDS4 isn't big enough anymore, so I'm having to look into getting a
tapedrive with higher capacity (or using GNUTAR to split up dumps, or
partitioning the drives into smaller chunks, both of which are kind of
messy solutions).  If things are different in whatever business you are
in, maybe the DDS3 would be adequate, but it's something to think about.

-- 
Jonathan F. Dill ([EMAIL PROTECTED])



Re: dds3 or dds4?

2001-06-27 Thread Jonathan Dill

Hi Tom,

OK, I see now that this is a charity.

If the backups aren't too big and you have a big enough holding disk,
you might consider a strategy where you do some dumps to the holding
disk.  For example, I have a config where on Wednesdays, I flush the
holding disk and then do a dump with the tape in the drive, and the rest
of the week I just dump to disk.
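
In cron terms it looks roughly like this (the config name and paths are 
made up, and I actually kick off the Wednesday amflush by hand):

  # Mon,Tue,Thu,Fri: no tape in the drive, dumps land on the holding disk
  0 1 * * 1,2,4,5  /usr/local/sbin/amdump weekly
  # Wed: tape is in the drive; I amflush by hand beforehand, then this
  # run writes to tape
  0 1 * * 3        /usr/local/sbin/amdump weekly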

If you are just backing up one or two servers, you might consider using
a tool like rsync to mirror the disks to another server off site that is
not web accessible, or ghosting to a removable hard drive.

Lastly, you might consider looking for a corporate sponsor who could put
the $ for the DDS4.

Tom Strickland wrote:
 I've just done some sums:
 total for HP SureStore DDS3i, SCSI card, 20 tapes, delivery, VAT: 998.52UKP
 total for HP SureStore DDS4i etc: 1408.71 UKP
 total for HP DLT1, SCSI card, 20 tapes, delivery, VAT: 2441.415
 
 Well, unless I'm being wildly ripped off somewhere, it looks as though
 DDS is the only affordable solution. Probably DDS3, I'm afraid. I may
 be able to work something out, but at the moment I don't have the
 funds to be able to chip in myself.
 
 Anyway, thanks to all for the advice. Very helpful.

-- 
Jonathan F. Dill ([EMAIL PROTECTED])
CARB Systems and Network Administrator
Home Page:  http://www.umbi.umd.edu/~dill



Re: Try2, Amanda Question

2001-06-18 Thread Jonathan Dill

Bryan S. Sampsel wrote:
 
 That's the bitch of it...it IS resolved: via nslookup, via ping--ANYTHING but Amanda.
 
 It's bizarre.  I'm getting ready to compile amanda from source to see if it's a 
problem with the rpm on the client.  Rpm installs are OK sometimes--other times, I'd 
rather not deal with 'em.

Do you have all of the redhat updates applied on the client?  nscd was
buggy in RH 7.0 and that could cause problems with hostname lookups if
you use nscd and have not upgraded to the updated rpm.  There were also
updated rpms for bind-utils, glibc, kernel, and possibly some other
packages that could affect hostname lookups.

You might want to run nscd on the client to do nameservice cacheing. 
Check out /etc/nsswitch.conf and possibly put files first for hosts,
but then be careful about keeping /etc/hosts up to date or only put the
minimum necessary entries in /etc/hosts.  If you use nscd, you might
have to do the following to flush the cache (on the amanda client,
because that is where the lookups are failing):

/etc/rc.d/init.d/nscd restart
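
The hosts line I mean in /etc/nsswitch.conf would look something like 
this:

  hosts:  files dns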

Stupid question, but did you add the address to /etc/hosts on both the
server and the client?

-- 
Jonathan F. Dill ([EMAIL PROTECTED])



Re: Raid and Amanda

2001-06-17 Thread Jonathan Dill

Olivier Nicole wrote:
 
 I am learning to use Amanda but it seems it has a problem, as everyone
 knows, backing up a filesystem larger than the tape.
 
 Am I wrong? I understood that Amanda is supposed to ask for as many
 tapes during a single run, that are needed to complete the back-up.
 
 So it would start and ask for a second, third, fourth... tape until it
 managed to back-up everything.

A backup of an individual filesystem cannot span multiple tapes.  For
example, if you have a 40 GB tapedrive, and you want to back up a 75 GB
disk drive that is pretty full of data that is not very compressible,
it's probably not going to work.  You will have to use gnutar to split
up the disk into smaller subdirectories.  Or, you might consider backing
up the disk and then repartitioning it into < 40 GB chunks.
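
For example, instead of one entry for the whole disk, the disklist can 
carve it up by subdirectory (the host name is made up; "comp-user-tar" 
is one of the GNUTAR dumptypes from the stock example amanda.conf):

  bigbox  /export/data/proj1  comp-user-tar
  bigbox  /export/data/proj2  comp-user-tar
  bigbox  /export/data/proj3  comp-user-tar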

I think that spanning multiple tapes may be a feature in some
development version of amanda, but I don't know the current status.

Secondly, runtapes determines the total number of tapes that amanda
will use during the run, period.  You could set runtapes equal to the
number of tapes in your changer and let amanda use all of the tapes
until it is done, but that may not be the ideal strategy in the long
term.  You may want to try to get amanda to use about 2-3 tapes a day,
for example, and balance the full and incremental dumps accordingly. 
You have to try to be realistic about how many tapes you will need to
use, but things will also balance out some as the dump cycle progresses.
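
The relevant amanda.conf knobs look something like this (the numbers are 
only an example, not a recommendation):

  runtapes  2         # tapes amanda may use in a single run
  tapecycle 20 tapes  # total tapes in the rotation
  dumpcycle 1 week    # every filesystem gets a full dump at least this often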

What I do is during the week, I only let amanda use 2-3 tapes for each
run.  Sometimes, dumps end up left on the holding disk, and I have to
flush them to tape, but at least it keeps the backup cycle from running
into the middle of the work day if there are some large dumps that need
to get done.

For the weekend, I let amanda use all of the tapes in the rack if it
wants to.  I will also deliberately force a few full dumps to try to get
them done on the weekend so they won't run during the week.  One thing I
haven't worked out yet is a good system for trying to guess what full
dumps should be coming due soon so I can specifically force those dumps
on the weekend.

-- 
Jonathan F. Dill ([EMAIL PROTECTED])
CARB Systems and Network Administrator
Home Page:  http://www.umbi.umd.edu/~dill



Re: Is amanda suitable for a single server backup?

2001-06-09 Thread Jonathan Dill

Hello Howard,

There are two solutions that I have used in this situation:

1) If you can handle a few minutes of down time, like on a home PC or
personal workstation, you could use ghosting to a removable disk
drive.  This is probably OK for a home PC or another situation where you
can handle losing some data, because if both disks fail, you would lose
everything, so that is why it's a good idea to remove the backup disk
and store it offline.  Or, if you are on a network, the backup disk
could be on a another computer.  (NB: In case you were thinking of it,
RAID1 or simple mirroring is NOT a backup solution and will not cover
you for eg. if a hacker trashes the filesystem).

2) You could use amanda to do backups to holding disk most of the time,
and flush the dumps to tape when the holding disk is getting full. 
Since you have 40/80 tapes, if you use software data compression, you
would want about 40 GB holding disk to get the most out of the tape.  If
you can use an IDE disk like ATA-100, in the US you can get a 40 GB disk
for about $120-150 USD (see www.megahaus.com for example).  If you look
for a close-out you can probably get it even cheaper, but may have
problems getting a warranty replacement if it fails.  If both disks
fail, you will lose everything up to the last time that you flushed the
holding disk to tape.  For this reason, it would be better to use
another computer as the backup server if possible.
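
The holding disk side of that is just a block like this in amanda.conf 
(the path and size are placeholders), plus running amflush when it 
starts to fill up:

  holdingdisk hd1 {
      directory "/dumps/amanda"
      use 40 Gb
  }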

Howard Zhou wrote:
 
 If I have a single server to backup with 4Gb of data backed up to a 40-80Gb
 tape without a tape changer, would amanda do the job? To my understanding,
 amanda does append to a tape. Without appending, you have to put in a new
 tape everyday for a small amount of incremental backup data on 40-80Gb tape
 (what a waste) plus manual process.
 
 Is there any plan to support appending to tape in a future release?
 
 Is there any other free backup system which supports this feature?
 
 Thanks in advance.
 
 Howard

-- 
Jonathan F. Dill ([EMAIL PROTECTED])



predicting full dumps

2001-06-07 Thread Jonathan Dill

Does anyone have any scripts, or any tricks that I could use to try to
predict what full dumps are coming due?

I would like to be able to force a bunch of full dumps on the weekend,
when the total time of the run is not an issue, so that the load should
be a little lighter during the week.

Thanks in advance,
-- 
Jonathan F. Dill ([EMAIL PROTECTED])



Re: Fwd: Re: predicting full dumps

2001-06-07 Thread Jonathan Dill

Hi guys,

Actually, as Joshua pointed out, amadmin config due does exactly
what I want--It gives you a nice list saying which dumps are overdue,
what dumps are due today, and how many days until other dumps are due. 
Forcing dumps that are overdue, due today, or that are due in a day or
two would be a good starting place.  You could do something like this to
consider candidates for forcing:

amadmin config due | fgrep -v Overdue | fgrep -v today \
  | awk '{print $3,$0}' | sort -rn
amadmin config due | fgrep today
amadmin config due | fgrep Overdue

You would start from the bottom of the list and work your way up.

Mitch Collinsworth wrote:
 
 You can figure out what's coming due by looking at
 'amadmin config balance'.  You can't always predict what might
 get promoted by planner, but if you do what you're suggesting and
 force enough filesystems to make the total data size for the run
 greater than average, planner won't promote anything.
 
 Some other things to consider if run-time is getting to be a problem
 are adding more holding disk space and possibly increasing maxdumps.
 These should allow for more parallelism in the dumping phase.
 Depending on your needs, getting the data off the clients quickly
 might be the real issue, and then amanda can continue spooling data
 from the holding disk to tape long after the dumps have finished.
 
 -Mitch
 
 On Thu, 7 Jun 2001, Francis 'Dexter' Gois wrote:
 
 
  Not sure it is possible : the dumps queue is created when you launch amdump,
  and Amanda looks at which dumps it can move (to optimize) or it has to move
  (because of space problems) just before starting the dumps when you launch
  amdump Config (see the logs).
 
  Dexter
 
  On Thursday 07 June 2001 06:48, you wrote:
   Does anyone have any scripts, or any tricks that I could use to try to
   predict what full dumps are coming due?
  
   I would like to be able to force a bunch of full dumps on the weekend,
   when the total time of the run is not an issue, so that the load should
   be a little lighter during the week.
  
   Thanks in advance,
 
  --
  Knowledge-sharing and open-source content : another way to gain eternity.
  Francis 'Dexter' Gois - [EMAIL PROTECTED]
  System Administrator  -  Tiscali Belgium NV/SA
  phone: +3224000839-  fax : +3224000899
 
 

-- 
Jonathan F. Dill ([EMAIL PROTECTED])
CARB Systems and Network Administrator
Home Page:  http://www.umbi.umd.edu/~dill



Re: Linux and dump

2001-05-17 Thread Jonathan Dill

Hi Eric,

You may want to take a look through the list archives at:

http://groups.yahoo.com/group/amanda-users/

This subject has already been hashed and rehashed to death on just about
every mailing list that I subscribe to including this one.

I'm planning to migrate to SGI XFS on Linux--SGI has released an
installer CD for Red Hat 7.1 which can make XFS filesystems.  XFS is a
journaled filesystem, and it can be run over RAID unlike ext3 which had
problems with RAID on 2.2 kernel.  You can download the installer for
free from ftp://linux-xfs.sgi.com but the server appears to be down
right now.

Eric Veldhuyzen wrote:
 I just saw that someone had problems with dump and Linux. This made
 me remember an posting from Linux Torvalds of a few weeks back which I
 think anyone still using dump with Linux should read:
 
http://www.lwn.net/2001/0503/a/lt-dump.php3
 
 Summary: Dump was a stupid program in the first place. Leave it behind.

-- 
Jonathan F. Dill ([EMAIL PROTECTED])



Re: Linux and dump

2001-05-17 Thread Jonathan Dill

I think I was the one who made the suggestion to remove ext2 dump, but I
was wrong, you don't have to do that.  ./configure will find both
xfsdump and dump, and amanda will choose whichever program is
appropriate for the type of filesystem i.e. if it is XFS filesystem and
you have not specified GNUTAR, xfsdump will be used automatically.

Alexandre Oliva wrote:
 On May 17, 2001, C. Chan [EMAIL PROTECTED] wrote:
 
  I followed suggestions to remove ext2 dump so
  Amanda would detect xfsdump and recompiled, but I find this rather inelegant.
 
 What is inelegant?  Removing ext2 dump?  You didn't have to do that.
 You only need xfsdump available at configure time to get the xfsdump
 supporting bits enabled; the existence of ext2 dump doesn't make a
 difference.

-- 
Jonathan F. Dill ([EMAIL PROTECTED])
CARB Systems and Network Administrator
Home Page:  http://www.umbi.umd.edu/~dill



Re: amanda

2001-05-13 Thread Jonathan Dill

You can read the tapes without amanda using just dd and a restore
program.  You can get some hints by looking at the first part of a dump
image.

For example I will do these commands with one of my backup tapes. 
First, I have to mt fsf to skip over the tape header, and then I can
use just dd command to read the image header which has the
instructions:

[root@masq1 ~]# mt fsf
[root@masq1 ~]# dd if=/dev/nht0 bs=32k count=1
AMANDA: FILE 20010503 shaman /mnt/raw lev 1 comp .gz program
/usr/local/etc/amanda/bin/tar
To restore, position tape at start of file and run:
dd if=<tape> bs=32k skip=1 | /bin/gzip -dc |
/usr/local/etc/amanda/bin/tar -f... -

Maybe your backups were made with ufsdump or dump or some other program
than tar, but the dump header should tell you what to do.  Also, if you
go to www.amanda.org and find your way to the amanda FAQ I think the
instructions are there, or if you download the source code and unpack
it, I think you can find them in some files in the docs directory.

Also, always make sure that your tape is write-protected when you are
dealing with old backup tapes.  That is just good practical advice for
any kind of backup, and you could be in big trouble if you forget and
someone else decides to use the tapedrive.

Yuval Ruttkai wrote:
 I am working in a company that their first backup was made before 4
 years by Amanda.
 
 There are old tapes that they want to see if they need the material.
 
 I need to install amanda just to watch the files that are in the tape.
 
 I have Sun Ultra 5 with Solaris 5.8.
 
 Do I need a special configuration to configure Amanda just to watch
 and read those tapes?
 
 Please try to help me as soon as you can.

-- 
Jonathan F. Dill ([EMAIL PROTECTED])



Re: file system problem - umount

2001-05-11 Thread Jonathan Dill

Marcelo G. Narciso wrote:

 |   DUMP: Warning - cannot read sector 600384 of `/dev/rdsk/c0t6d0s0'
 |   DUMP: Warning - cannot read sector 600400 of `/dev/rdsk/c0t6d0s0'

It looks to me like you have some bad sectors on your disk, or possibly
a disk drive that is on its way to failing, like the head is having
trouble seeking to some sectors.  The filesystem was probably
automatically unmounted to prevent corruption, or at least I have seen
something similar happen with an XFS filesystem on SGI IRIX.  Check the
system logs.

 After amdump, the file system /dev/rdsk/c0t6d0s0 is unmounted. Why it
 happens?
 Someone knows what is happens?
 
 Thanks a lot.

-- 
Jonathan F. Dill ([EMAIL PROTECTED])



Re: Dumping live filesystems on Linux 2.4 - Corruption

2001-05-11 Thread Jonathan Dill

C. Chan wrote:
 I'm experimenting with XFS now and if I run into any glitches
 I'll let the Amanda list know. I'd like know how to tell Amanda
 to use xfsdump rather than GNU tar on XFS partitions.

I think you would have to recompile amanda.  I would install
xfsdump/xfsrestore and rpm -e dump and then run ./configure script,
then it should find xfsdump and choose that as the OS dump program. 
Then you would assign a non-GNUTAR dumptype to the disk(s) and it should
run xfsdump for the backups.
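
Roughly like this in the disklist (the host name is made up; "comp-root" 
is one of the stock example dumptypes whose program is DUMP rather than 
GNUTAR):

  xfsbox.example.com  /data  comp-root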

Hopefully, the configure script does not check to see if the OS is IRIX
or not Linux and decide that xfsdump can't be installed on the
system--Then you would have to hack the configure script a little bit to
get it to work properly.

-- 
Jonathan F. Dill ([EMAIL PROTECTED])



Re: Linux kernel 2.4 and dump vs. SGI XFS

2001-05-11 Thread Jonathan Dill

Hi folks,

If you are concerned about dumping live filesystem with Linux kernel
2.4, you might be interested in this:  SGI has released an updated
anaconda installer CD for Red Hat 7.1 with XFS journaled filesystem
support.  You load the installer CD first, which includes only the RPMs
necessary for XFS support, and then you will need the regular Red Hat
7.1 CDs to complete the installation:

ftp://linux-xfs.sgi.com/projects/xfs/download/

With XFS you have the option of using xfsdump for your backups on
Linux--AFAIK this should work with amanda, but I have not tested it and
YMMV.  According to my discussions with Eric Sandeen at SGI, xfsdump
does not look at the on-disk metadata structures directly, so it would
not be subject to the problem which affects ext2fs and Linux dump.  I am
forwarding Eric's response to a couple questions that I had about it.

Now if only amrecover would work with xfsdump, I would be in heaven :-)

 Original Message 
Subject: Re: SGI's XFS: ready for production use?
Date: Fri, 11 May 2001 11:39:19 -0500
From: Eric Sandeen [EMAIL PROTECTED]
To: Jonathan Dill [EMAIL PROTECTED]
CC: [EMAIL PROTECTED] [EMAIL PROTECTED]
References: [EMAIL PROTECTED]

Jonathan Dill wrote:
 
 Hi Eric,
 
 Are there plans to keep up with new releases of Red Hat eg. release a
 new anaconda disk as new versions come out?  I have a mostly SGI IRIX
 shop with a growing Linux population, so it seems somehow attractive to
 be able to use XFS as a standard--It could mean good things for
 consistent, automated backups, and being able to toss filesystems around
 between IRIX and Linux.  I would consider adopting this as a standard,
 but I want to make sure that it will be updated and not a dead end.

In terms of Red Hat integration, we don't really have an official
guarantee that we'll keep up with the Red Hat releases.  However, I
expect that we will, unofficially if not officially.  There are enough
people interested in this inside and outside of SGI that I think this
will continue.  I have a personal interest in continuing.

Of course, if Red Hat and/or Linus picks up XFS, that would be a good
thing too, and we're trying to facilitate both of those things. 

As far as SGI's commitment to XFS on Linux, while I can't speak for the
company, I see no signs of that commitment waning.  We need XFS on Linux
for upcoming products, and for our customers.

And of course, the source is out there, so that's a certain level of
guarantee that XFS will be around.

See also the Long Term Commitment... thread at
 http://linux-xfs.sgi.com/projects/xfs/mail_archive/0105/threads.html
 
 Also, there  has been a lot of chatter lately about the problems with
 Linux dump and ext2fs corruption.  Is it true that XFS with the xfsdump
 tools ought to be free from these sorts of buffer cache issues?  Or is
 it safest on Linux to stick with tools like gnutar which access the FS
 at a higher level?

Here's the scoop from Nathan Scott, our userspace guru:

 Yes, I also read this thread - it was quite interesting.
 
 Though it wasn't explicitly spelt out, I think the problem
 that was being referred to with dump (not xfsdump) is that
 it looks at the on-disk metadata structures (via libext2fs)
 to figure out what needs dumping, etc.  tar, et al. use the
 standard system call interfaces for traversing metadata and
 obtaining file data, so aren't exposed to the problems
 associated with dump-on-a-mounted-filesystem.
 
 xfsdump uses an xfs-specific ioctl (bulkstat) to walk the
 filesystem inodes, and doesn't refer directly to the on-disk
 data structures either, so it should be as safe as tar.

HTH, 

-Eric

-- 
Eric Sandeen  XFS for Linux http://oss.sgi.com/projects/xfs
[EMAIL PROTECTED]   SGI, Inc.



Re: custom barcode labels

2001-04-23 Thread Jonathan Dill

Hi Ron,

I have a Brother P-touch 540 Extra that I use for making tape labels
which also does several types of barcodes.  However, someone else will
have to verify whether the P-touch barcodes work with a changer or not
because my changers don't read barcodes.

I find using the P-touch a whole lot easier than printing out a whole
sheet of labels from a printer, trying to find the right size labels or
cut them to size, and trying to get the layout right.  The only thing I
don't like about it is that 9mm is the narrowest label size which is a
little too wide for a 4mm tape.  Perhaps Brother has some other model
that supports a narrower width.  You can check it out on www.brother.com

Ron Snyder wrote:
 For those of you with tape changers and barcode readers: are any of you
 using any software to generate your own custom barcodes?  It seems like it
 might be a bit nicer (for locating/moving tapes around) if the barcode label
 could match what the tape label has.
 
 I've only tried two pieces of software so far (BarCode Maker 3.0) and
 something called barcode 97 for windows 3.1.  They both create barcodes
 OK, but for some reason my Qualstar tape library won't read them. (Maybe
 I'm doing something wrong, though-- I'm printing them out on regular paper,
 and then taping them to a test tape to see if the bar code reader can scan
 it.

-- 
Jonathan F. Dill ([EMAIL PROTECTED])
CARB Systems and Network Administrator
Home Page:  http://www.umbi.umd.edu/~dill



Linux SCSI utilities

2001-04-20 Thread Jonathan Dill

Hi folks,

I just found out about these sg_utils which may be helpful for folks
running amanda on Linux systems, especially for debugging tapedrive and
changer problems...

http://gear.torque.net/sg/#Utilities: sg_utils and sg3_utils
-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])
CARB Systems and Network Administrator
Home Page:  http://www.umbi.umd.edu/~dill

Re: Amanda with two tape devices

2001-04-06 Thread Jonathan Dill

Alexandre Oliva wrote:
  Can amanda use who tape devices to perform a single backup?
 
 You can't use them concurrently (yet), but you can set up chg-multi to
 switch between tape drives automatically.  That's what we do here.

Actually, there is a way that you can use them concurrently--You could
split up your dumps into 2 dump configs, one dump config for each drive,
which is what I have done.  However, all of the disks for one system
must be in the same dump config.  You cannot have some disks from a
system in one config, and other disks from that same system in the other
config, or at least it would be extremely difficult to do that.  The
reason for this is that the client will already be busy with the first
dump config, so when the second dump config runs, it will not be able to
connect.

If you run 2 dump configs concurrently, it is also a good idea to have
separate holding disks for each config, or to split the amount of disk
space that each config is allowed to use.  For example, if you have an
18 GB holding disk, only let each config use 9 GB, otherwise disk space
could run out unexpectedly and it might not be handled gracefully by
amanda.  I think technically it should not be a problem, but in my 
experience I know that it has caused my backups to fail when both dump
configs were writing to the same disk and used up all of the space.
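
Something like this in each config's amanda.conf keeps them out of each 
other's way (the paths and sizes are placeholders):

  # configA/amanda.conf
  holdingdisk hd1 {
      directory "/holding/configA"
      use 9 Gb
  }

  # configB/amanda.conf
  holdingdisk hd1 {
      directory "/holding/configB"
      use 9 Gb
  }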

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])



Re: Advice: NFS vs SMB - looking for the voice of experience

2001-04-06 Thread Jonathan Dill

Alexandre Oliva wrote:
 I'd much rather use NFS than SMB.  It's generally far more efficient.
 However, God only knows how much crap an NFS server running on
 MS-Windows would have to work against, so it might be that it actually
 takes longer to run.

I recommend running some I/O benchmarks eg. bonnie with a 100 MB or 256
MB file over NFS then over SMB.  My experience has been that Sun PCNFS
is incredibly slow, but some other NFS implementation on NT might be
faster.
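
For example, run the same test against each kind of mount and compare 
the numbers (flags from memory, and the directory is a placeholder):

  bonnie -d /mnt/ntserver -s 256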

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])



Re: Advice: NFS vs SMB - looking for the voice of experience

2001-04-06 Thread Jonathan Dill

This point is very important.  You will have to do the equivalent of
exporting to the server with "root" enabled.  In Unix this usually is an
option like "root=X" or on Linux "no_root_squash" otherwise you may not
have sufficient priviledges to read the files.  It may look like the
backups worked, but when you restore the files, you may find that the
files are the right size but only contain null characters (aka. ^@ or
ASCII 0).  It all depends how the MS NFS implementation handles UID
mapping and what happens when you have insufficient privileges to
access some file.  If you choose to use this NFS arrangement, you should
make sure to export the disk read-only, otherwise someone could use NFS
to trash your NT server(s).  You should also try restoring a backup to a
different location, eg. the holding disk, and make sure the file
contents are OK and not bogus ^@ files.
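
On a Linux NFS server the export would look something like this (the 
path and host name are made up):

  /export/ntdata   backuphost(ro,no_root_squash)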

"John R. Jackson" wrote:
 ...  I lean towards NFS, is there any reason I should not?
 
 I know very little about this, but the one thing that popped to mind is
 whether an MS NFS server would give a tar running as root on a client (to
 NFS) enough access to get to everything.  The normal action is to convert
 all root requests to "nobody", which will not work well for backups.

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])



Re: Error: gtar: uid_t value 4294967294 too large (max=16777215)

2001-03-23 Thread Jonathan Dill

Hi Terry,

I have seen this problem with Unix computers running either Samba,
Appletalk sharing, or PCNFS.  Something, possibly a misconfigured Samba,
is probably using that UID as the "nobody" UID.  If you're not using
Samba for anything and it's just installed and "turned on" I think it
would be a good idea to turn it off as it could be a security liability,
plus that may take care of your UID problem.  You also might try
upgrading to tar-1.13.19 if you haven't already--that seems to clear up
some problems where an exit code should be returned to indicate a
"warning" rather than a total "failure."

Terry Koyama wrote:
 I ran amanda last night and received the below report.  The client with
 the error (hendrix) is an AIX machine running amanda v2.4.2p1.  I tried
 checking out the UID's in /usr but didn't find anything out of the
 ordinary.
 
 I've also included  sendback.debug and sendsize.debug to see the actual
 command that was executed.  The syntax of runtar looks kinda funky (to
 me anyway) but seems to run fine on all the other clients.  Plus, I
 don't have a clear understanding of how runtar and gnutar are related.

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])
CARB Systems and Network Administrator
Home Page:  http://www.umbi.umd.edu/~dill



Re: Error: gtar: uid_t value 4294967294 too large (max=16777215)

2001-03-23 Thread Jonathan Dill

"John R. Jackson" wrote:
 
 I checked the /etc/passwd and /etc/group files which had user/group
 nobody with the id of 4294967294.  After I changed that ...
 
 Ummm, you changed the gid and uid of "nobody"?  That was probably kind
 of rash.  There are things floating around that know about that and
 expect a specific value.  Like NFS :-).

Yes, for example we have a piece of lab equipment from England which
requires an Acorn computer.  If you don't have other Acorn devices on
the network, usually the only way to print and share files is via PCNFS,
and to get that to work correctly I had to set UID nobody to some such
value as that, and changing the value would "break" printing and file
sharing from the Acorn.

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])



huge incrementals

2001-03-21 Thread Jonathan Dill

Hi all,

What values are people using for bumpsize, bumpdays, bumpmult other than
the example values?

I have a lot of > 10 GB disks to back up and it doesn't seem efficient 
to me to do eg. an 11 GB "level 5" backup of a disk that has 13 GB of
data on it--In that situation, I think it would probably make more sense
to just do a level 0 backup.  On the other hand, I don't want to create
a situation where disks  4 GB get a full dump every time.  It would be
nice if bumpsize could be expressed as a % of the disk size rather than
an absolute amount of space.
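
For reference, the knobs I'm talking about look roughly like this in 
amanda.conf (these are about what the stock example file ships with, 
which is exactly what I'm trying to improve on):

  bumpsize 20 Mb   # minimum savings before bumping to the next level
  bumpdays 1       # days to stay at a level before bumping
  bumpmult 4       # bumpsize is multiplied by this for each further level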

Any hints anybody can give me in tuning the bump parameters would be
appreciated.

Thanks,
-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])



Re: using a DDS-4 Autoloader under linux

2001-03-08 Thread Jonathan Dill

I'm using a Sony TSL-11000 DDS4 autoloader under Linux without
problems.  It works fine as an Ultra2 LVD device.  I think
I'm getting better throughput than 2.8 MB/s, but it's still well under
Fast/Narrow bandwidth due to the limitations of the drive.  I use mtx
and random access works as well as sequential.  I had to modify the
chg-mtx script slightly--I can send it to you if you get the TSL-11000.

The one problem that I have had is an occasional tape getting stuck
during the change process, and that's using Sony brand DDS4 tapes and
not some cheap imitation.  3 times so far I had to take the cover off
the drive, mark the stuck tape as "bad," and give it a little push back
into the magazine.  Then you have to shut down the system and power
cycle the drive to get the error to reset and magazine to eject.  I
don't know if any of the other autoloaders ever have this problem.

Martin Apel wrote:
 
 On Thu, 8 Mar 2001, Werner Behnke wrote:
 
   We are using a Seagate Scorpion DDS-4 Autoloader without problems.
   The only issue was, that for an unknown reason it doesn't like to be
   connected to the SCSI controller as a 'wide' device. If you set the DIP
   switches, such that it will register itself as a 'narrow' SCSI device
   everything is fine. Performance is about 2.8 MB/s so the narrow cable
   should not slow down the device noticeably.
 
  Does the autoloader support random access or
  only sequential mode?
 
 It supports random access.
 
  Did you set up chg-manual in amanda to change
  tapes manually?
 
  If yes: how do you control the media changer?
  With mtx (http://mtx.sourceforge.net/) or
  Autoloader Tape Library
  (http://www.ee.ryerson.ca/~sblack/autoloader/)
  or something else (mt, SCSI UNLOAD command,
  Amanda's chg-mtx...)?
 
 I took chg-mtx and adapted it at one place to use mtx 1.2. Using Linux
 for the tape server you have to configure the kernel with multiple
 LUNs per SCSI device.
 
 Greetings,
 
 Martin

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])



Re: Don't open attachment!!!

2001-03-01 Thread Jonathan Dill

"Anthony A. D. Talltree" wrote:
 IMHO, anyone who insists on using the software that's vulnerable to such
 attacks deserves to lose.

OTOH the amanda-users list doesn't deserve to lose if someone is dumb
enough to open the attachment, gets infected, and sends all kinds of
crap back to the list.  After all, the message probably got here in the
first place from someone on amanda-users who got infected.

Anyway, here is the official dirt from SARC:

http://www.sarc.com/avcenter/venc/data/w95.hybris.gen.html

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])



[Fwd: enp3.unam.mx spam relay]

2001-02-19 Thread Jonathan Dill

Hello all,

My experience trying to help some scientists at UNAM is that unam.mx is
a disaster area, therefore this e-mail is a shot in the dark at best. 
FYI I am not an admin of amanda-users or the server that hosts it, so
don't complain to me about spam or other problems.

 Original Message 
Subject: enp3.unam.mx spam relay
Date: Mon, 19 Feb 2001 14:35:34 -0500
From: Jonathan Dill [EMAIL PROTECTED]
Organization: CARB
To: [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED]

Dear administrators,

enp3.unam.mx has incorrect e-mail configuration which allows spammers
everywhere in the world to use it to send spam.  Please refer the admin
of the system to http://maps.vix.com/tsi for instructions on the correct
way to configure a SMTP server.  This particular spammer sends several
spam messages to the amanda-users-list which is very rude and annoying,
so I hope somebody can do something about it.

 Original Message 
Return-Path: [EMAIL PROTECTED]
Received: from ashd1-1.relay.mail.uu.net ([199.171.54.245])
byumbi3.umbi.umd.edu (Netscape Messaging Server 4.15) with ESMTPid
G90R4300.HKS for [EMAIL PROTECTED]; Mon, 19 Feb 2001 14:21:39 -0500
Received: from surly.omniscient.com by mr0.ash.ops.us.uu.net with ESMTP
(peer crosschecked as: surly.omniscient.com [208.213.83.10])id
QQkdbx04110;Mon, 19 Feb 2001 19:18:09 GMT
Received: (from root@localhost)by surly.omniscient.com (8.11.1/8.11.1)
id f1JJE9a169507for amanda.org.amanda-users-list; Mon, 19 Feb 2001
19:14:09 GMT
Received: from enp3 (enp3.unam.mx [132.248.93.100])by
surly.omniscient.com (8.11.1/8.11.1) with SMTP id f1JJCld169568for
[EMAIL PROTECTED]; Mon, 19 Feb 2001 14:12:47 -0500 (EST)
Received: from 132.248.93.100 by enp3 (SMI-8.6/SMI-SVR4)id UAA03664;
Thu, 15 Feb 2001 20:20:14 -0600
From: [EMAIL PROTECTED]
Message-Id: 200102160220.UAA03664@enp3
To: [EMAIL PROTECTED]
Date: Thu, 15 Feb 01 01:26:44 EST
Subject: toner supplies
Sender: [EMAIL PROTECTED]
Precedence: bulk


Re: spam messages in the amanda-user mailing list

2001-02-19 Thread Jonathan Dill

Ryan Williams wrote:
 There are now daily spam messages about toner supplies going to the amanda
 mailing list. This is a big annoyance. Please do something to prevent such a
 thing from happening again. If needed I can provide headers and the emails
 that I recieved.

I'm mad about it too, but what makes you think that John R. Jackson can
do anything about it, or anybody else subscribed to the list for that
matter?  Majordomo doesn't have much capabilities for spam filtering as
far as I know.  Personally, I'd like to see the list run on mailman
rather than majordomo.  The list appears to be hosted on
surly.omniscient.com.  I'm going to inquire about helping out with the
admin of the server and some other options.

Until then, I'd suggest getting a real e-mail client like Netscape
Messenger and using a message filter on "Sender contains" and "toner",
then "Move to Spam" or something like that.

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])
CARB Systems and Network Administrator
Home Page:  http://www.umbi.umd.edu/~dill



Re: amlabel + rack of tapes question

2001-02-15 Thread Jonathan Dill

Hi Joe,

Writing a script is a lot easier than you might think--I just did it to
label about 90 tapes using a DDS4 autoloader with an 8-tape magazine; it 
was just a 14-line tcsh script with almost no debugging involved.  The 
script is very site-specific since it was a one-off type of project, but
I attached it anyway so you can get some ideas.  My script labels one
magazine full of tapes, then you have to reload the magazine and run the
script again.  It wouldn't be too hard to change the script to use the
correct amanda API like using amtape and getting info from amanda.conf
file, but I haven't got the time or motivation to do it right now.

If you're in the middle of your dump cycle and want to add some tapes
onto the end of the dump cycle, the tricky part is fixing your tapelist
file--If you don't fix your tapelist, on the next run amanda will want
to use the tapes that you just labeled instead of continuing the current
cycle.  The easy way to fix it is to take the date stamp from the last
tape in the cycle and change the "0" for all of the new tapes to that
date--and of course don't forget to adjust tapecycle and other
parameters in amanda.conf--and then amanda should finish the current
cycle before starting to use the new tapes.
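
For what it's worth, tapelist lines look roughly like "datestamp label 
reuse", and freshly labeled tapes get a datestamp of 0, e.g. (labels 
made up):

  20010210 dds4.42 reuse
  0 dds4.90 reuse

so the fix is changing that leading 0 to the datestamp of the last tape 
in the current cycle.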

Joseph Del Corso wrote:
 Is it possible to amlabel an entire rack of tapes without
 doing it manually for each tape?
 
 Specifically I have roughly 35 tapes that I'd like to label
 in some kind of automated fashion all at once.  Besides writing
 my own script (which would take time and more than likely a HECK of
 a lot of debugging) is there an easier way to do this?
 
 Is there any benefit to doing this?

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])
CARB Systems and Network Administrator
Home Page:  http://www.umbi.umd.edu/~dill

#!/bin/tcsh -f
set i=2
while ($i < 9)
  set j=`awk '{split($2,a,"."); print a[2]}' dds4/tapelist | sort -n | tail -1`
  @ j++
  amlabel dds4 dds4.$j
  mtx -f /dev/sg3 unload
  mtx -f /dev/sg3 load $i
  @ i++
end
set j=`awk '{split($2,a,"."); print a[2]}' dds4/tapelist | sort -n | tail -1`
@ j++
amlabel dds4 dds4.$j
mtx -f /dev/sg3 eject



Re: Strange dump summary...

2001-02-02 Thread Jonathan Dill

Hi Suman,

Did you check on the client to see what's in the debug files of the
/tmp/amanda directory?

Check the permissions on /usr/local/libexec/runtar--it must be owned by
root and SUID like this:

-rwsr-x---   1 root     amanda      78334 Nov 13 15:32
/usr/local/libexec/runtar
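
If it has been clobbered, something like this (assuming your dump group 
is "amanda") puts it back:

  chown root:amanda /usr/local/libexec/runtar
  chmod 4750 /usr/local/libexec/runtar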

I had a problem recently where some genius decided to do something like:

chown -R foobar.silly /usr/local

and backups of his machines stopped working because of it.  If possible,
you may just want to run "make install" again to make sure that the
permissions of everything are set correctly.

I imagine that it's also possible that you could have some sort of
security-patched kernel that blocks SUID execution, which would stop
runtar from working--I vaguely remember hearing of some sort of kernel
patch like that, but it wouldn't be in some stock kernel eg. from Red
Hat or another major distribution.

Suman Malla wrote:
 
 Hi there,
 
 Could someone pls enlighten me? I am trying to backup a filesystem on redhat
 server (mogadon) but in vain. amcheck doesn't complain at all though. Amanda
 is backing up 8 other servers without any problem for
 the last 4-5 months but with this server somehow it fails.
 
 Messages from log files -
 
 Amanda report:
 =
 FAILURE AND STRANGE DUMP SUMMARY:
 
  mogadon/etc lev 0 FAILED [disk /etc offline on mogadon?]
 [snip]...
  mogadon   /etc   0   FAILED 
 [snip]...
 
 amdump.1:
 
 setting up estimates for mogadon:/etc
 mogadon:/etc overdue 11355 days for level 0
 setup_estimate: mogadon:/etc: command 0, options:
 last_level -1 next_level0 -11355 level_days 0
 getting estimates 0 (0) -1 (-1) -1 (-1)
 got result for host mogadon disk /etc: 0 - -1K, -1 - -1K, -1 - -1K
 FAILED QUEUE:
   0: mogadon/etc
 
 Any hint, point would be greatly appreciated.
 
 TIA.
 -
 smallA
 
 ___
 Send a cool gift with your E-Card
 http://www.bluemountain.com/giftcenter/

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])