Re: features: append, span tapes, compress?

2003-01-23 Thread Chris Karakas
Scott Mcdermott wrote:
 
- the largest filesystem backed up must be smaller than the size of
  the tapes used (possibly after compression is considered)
 

Correct.

- amanda can't do hardware compression without breaking easily
 

AMANDA *can* do hardware compression. You just shouldn't use both
hardware and software compression at the same time.

- amanda doesn't do parallel backups under any circumstances
 

If AMANDA does not write to tape, but to the holding disk, then she
*does* dump in parallel, taking into account the spindle numbers
in the disklist entries, so that disk thrashing is avoided.


- I have some filesystems that are LARGE and have no hope of fitting
  on a single tape, and no hope of fitting in a staging/holding area
  on a disk.  These are large logical volumes that span across
  multiple RAID arrays.  Amanda simply can't handle these because it
  can't span tapes, correct?
 

Yes, if you use dump; no, if you use tar with exclude lists to define
directories, instead of whole devices, to be dumped.


- I have a nice fast compression ASIC in my tape drives which can
  probably compress at the drive's write speed, while my backup host
  is slow and intended mainly for IO.  Do I have it right that Amanda
  can't just write until EOT (allowing the drive to compress), rewind
  to last EOF, and move on to the next tape? Instead I have to use
  CPU in my backup server to do compression?
 

I think you *will* be able to use your drive's compression. A file that
does not make it to the tape is not considered dumped, so it will
be delayed and tried again on the next run.

- my library has 4 drives in it, which can all write at once.  Do I
  need to go out and buy 3 more backup hosts, split up my changer's
  SCSI bus and partition the library into 4 virtual libraries in
  order to actually do concurrent backups? Maybe I can run separate
  amdump instances that don't know anything about each other? ugh :)
 

You can't run multiple AMANDAs concurrently, that's right. They will
complain that another copy is running. So I don't know how you can get
all 4 drives running concurrently :-( . But don't give up, there must be
a way... run in a chrooted environment perhaps?


-- 
Regards

Chris Karakas
http://www.karakas-online.de



Re: Replacing a partially bad tape

2003-01-22 Thread Chris Karakas
Anthony Valentine wrote:
 
 I would like to replace this tape with a new one, however I don't want
 Amanda to forget about the old one yet.  

This is also interesting in this context, although probably not exactly
what you want:

http://www.storagemountain.com/amanda-18.html

-- 
Regards

Chris Karakas
http://www.karakas-online.de



Re: Dump size calculator

2003-01-20 Thread Chris Karakas
Barry Callahan wrote:
 
 Last week, I stumbled across a webpage that had a discussion on how big
 to make your holding area, and I can't seem to find it anymore.  I don't
 remember where I saw it.
 

The holding area should be at least big enough to accommodate the
largest amount of backup data that would fit on a real tape (not the
estimated tape length). In the limit, a minimum holding disk area
should have the capacity of exactly one tape (assuming runtapes to be 1,
otherwise multiply by runtapes). Since you might have a tape library,
the backups might fail on many consecutive runs, and/or you might be
arbitrarily slow in flushing the contents of the holding disk to tape
while failing images keep coming into the holding area at the steady
pace set by runspercycle, I postulate that there is no upper limit to
the capacity of the holding disk. A truly mathematical proof of this is
left to the inclined reader ;-)
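As a back-of-the-envelope sketch of that lower bound (the tape length and runtapes below are made-up numbers; substitute the values from your own tapetype and amanda.conf):

```shell
#!/bin/sh
# Minimum holding area = tape capacity x runtapes.
# tape_len_mb and runtapes are illustrative values only.
tape_len_mb=20000     # e.g. a 20 GB tape
runtapes=2
min_holding_mb=$((tape_len_mb * runtapes))
echo "minimum holding area: ${min_holding_mb} MB"
```

There is, as argued above, no comparable formula for the upper bound.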

 The example used on the page assumed 100GB in use, with 5% change per
 day, and a dump every day in a 7 day cycle.  The example came up with a
 dump size of ~17GB.
 
 Does anyone know the page of which I speak, or of a similar discussion?
 If so, would you be kind enough to post a link?
 

Try

http://www.storagemountain.com/amanda-12.html
http://www.storagemountain.com/amanda-17.html
http://www.storagemountain.com/amanda-8.html


-- 
Regards

Chris Karakas
http://www.karakas-online.de



Re: Full Backup Configuration

2003-01-18 Thread Chris Karakas
DK Smith wrote:
 Do most amanda configs (with changers) run amdump every weekday (M-F) and skip 
running amdump on weekends? I see this sort of idiom stated as the way for Amanda, 
however I am not so sure this well-documented idiom is actually used in practice. Or 
is it?
 

Well, I use AMANDA mostly during the week, but *I reserve the right*
(and I do use it occasionally) to use it on weekends too. I think
trying to get AMANDA to ignore Saturdays and Sundays in its time
calculations is outside the way it currently functions and would require
extra programming and configuration parameters (perhaps a whole calendar
function to tick off the days that should not count as such...)
which, in the end, would make life more complicated instead of
easier (as it is now, as far as backups are concerned ;-)).

My experience is that there is absolutely no point in trying to mess
with AMANDA and interfere with the way she does the backups - she does
it so optimally that every other effort will be suboptimal! So let her
do the work and go home! Next day, or Monday, you'll have this or that
on your tapes - level 0s, level 1s and so on. So what? You wanted that
level 0 to be on Tuesday? Why? This thinking will only bring you
trouble. The AMANDA philosophy is: you press the button, AMANDA decides
for you.

-- 
Regards

Chris Karakas
http://www.karakas-online.de



Re: Full Backup Configuration

2003-01-18 Thread Chris Karakas
Jon LaBadie wrote:
 
 While I agree basically with you there can be cases where the scheduler
 fights a specific situation.  Suppose my Level 0's are 9-10 GB total
 and I'm using a DDS2 tape with 4GB capacity with a dumpcycle of 3 days.
 Works well for a single tape/dump usage with the Level 0's spread out.
 Further suppose I'm a small business with activity and operators available
 M-F only.  Thus I want to do 5 dumps a week.  However, every Monday, after
 missing dumps on 2 days, amanda will believe it is time for Level 0's for
 everything.  

Indeed, it is! :-)

 No spreading of Level 0's at all.  

Why do you say this? The 0s are *due* on Monday, in your example above.
Some of them will not make it to the tape, because the tape is smaller
than all the full backups together, that's clear. AMANDA will find the
ones that fit on the tape, under the restriction that there will be
enough space to also copy the incrementals (at least those that are very
important). So, ideally, some of the full backups will be on the tape,
as well as all incrementals. The rest of the fulls will be delayed; the
planner will tell me this and I will know why it is so, but I will not
worry. The next day, some of the delayed fulls will make it to the
tape, along with the incrementals of that day, and so on. If my
dumpcycle is long enough, I will have all full backups by the end of it
on *some* tape - just not all on the tape of any specific day, but
evenly scattered over all tapes. If not, either my dumpcycle is too
small, or runspercycle is too small, or both, or my tapes are hopelessly
underdimensioned for the task. It's as simple as that.

-- 
Regards

Chris Karakas
http://www.karakas-online.de



Re: amrestore problems after replacing tape drive

2003-01-17 Thread Chris Karakas
Toomas Aas wrote:
 
 Hi again.
 amrestore: WARNING: not at start of tape, file numbers will be offset
 amrestore:   0: reached end of information
 ** No header
 0+0 in
 0+0 out

Whenever (very seldom) I get the "not at start of tape" error, I run
the vtblc program, which lists the contents of the tape header (the
volume table), and then it's O.K. But this advice may be QIC-specific,
so YMMV (I use the ftape driver and tools).

-- 
Regards

Chris Karakas
http://www.karakas-online.de



Re: question

2003-01-17 Thread Chris Karakas
[EMAIL PROTECTED] wrote:
 
 I installed and configured Amanda in our system (client/server).
 How do a back up?
 Could you please show me an example?
 

http://www.backupcentral.com/amanda.html

-- 
Regards

Chris Karakas
http://www.karakas-online.de



GNU tar estimates for vfat filesystems (solved: Description of solution)

2003-01-14 Thread Chris Karakas
Dear AMANDA users,

I am referring to the "vfat estimates" problem that I had almost two
years ago. It has to do with getting the estimates right on a vfat
filesystem. I had quite a few discussions on this at the time, see

http://groups.yahoo.com/group/amanda-users/message/26231
http://groups.yahoo.com/group/amanda-users/message/25658

to mention just two. I tried a suggestion from Alexandre, it worked,
and I reported it to the list. Still, I didn't find the time to write
down the details, so that others could benefit from the solution.

It was not until recently that I was able to fulfill my promise. The
document describing all the details of the solution to the above vfat
estimates problem can be found on my homepage at

http://www.karakas-online.de/myAMANDA/t1.html

I will be very glad to receive your feedback on this. I have made every
possible effort to follow the style and spirit of the Linux
Documentation Project in producing this document, which can be found in
various formats at the address given. It refers to an older AMANDA
version though (2.4.1p1). Enjoy!

-- 
Regards

Chris Karakas
http://www.karakas-online.de



amverify and grep

2002-01-03 Thread Chris Karakas

Hello,

after upgrading to SuSE 7.3, I found that my amverify script
(amanda-2.4.1p1) did not work properly - it told me that VOLUME is
"matches" and Date is "file":

Volume matches, Date file

(actually, the first time I saw this I really thought the volume matches
a "date file" - this may indeed "mean" something, especially if the
first time you happen to use amverify in the past few months is after
the refreshing experience of an upgrade of 400 packages...;-).

This comes from the write statement

report "Volume $VOLUME, Date $DWRITTEN"

so clearly VOLUME and DWRITTEN were computed wrongly. But how, since I
did not change the script at all?

The answer is that, probably due to the upgrade from SuSE 6.2 to 7.3,
the behaviour of grep has changed. Indeed, for binary files, grep now
says:

Binary file xxx matches

instead of showing the line where the match happens. This may be good
on other occasions, but for amverify it has the effect that when grep is
used in the computation of TAPENDATE:

TAPENDATE=`grep AMANDA: $TEMP/header | sed 's/^AMANDA: TAPESTART //'`

grep will say

Binary file /tmp/header matches

so that when, a few lines further, we do

set X $TAPENDATE
shift
VOLUME=$4
DWRITTEN=$2


VOLUME will be "matches" and DWRITTEN will be "file".

The remedy is to use the -a option to grep in the TAPENDATE line:


---
TAPENDATE=`grep -a AMANDA: $TEMP/header | sed 's/^AMANDA: TAPESTART //'`
[ X"$TAPENDATE" = X"" ] \
	&& report "** No amanda tape in slot" \
	&& continue
set X $TAPENDATE
shift
VOLUME=$4
DWRITTEN=$2
-

I decided to post this curiosity, since I did not find any similar
message.
Please CC me - I am not on the list anymore.

-- 
Regards
Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: gtar returns 2

2001-03-02 Thread Chris Karakas

Jeff Silverman wrote:
 
 1) write a wrapper for gnu-tar, which what Paul Bijnens did ( I think).
 Where might I get a copy of the wrapper?

I did it too (for a different problem). Works fine. I used the following
as a starting template (thank you JJ):

ftp://gandalf.cc.purdue.edu/pub/amanda/gtar-wrapper.*


 Is my understanding correct?  Do other people have this problem and if
 so, how do they deal with it?

I have seen a similar discussion on the list with respect to doing a
backup of an active database. Search the list archives with the string
"DB backup", for example.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Can I use only one tape for backing up all the week?

2001-02-12 Thread Chris Karakas

Simon Mayr wrote:
 
 And hopefully the holding disk is not crashing at the same time as the
 backup client crashes.

Maybe this is not exactly what you meant, but I will report it anyway,
just to make clear what can happen in this world...

I *never* put a tape (or the right tape, for that matter ;-)) in the
drive when AMANDA starts. This way I force all images to be copied to
the holding disk and remain there till the next day. After AMANDA
finishes, a trivial script copies the files from the holding disk to an
MO disk. I use as many MO disks as tapes for this purpose. They have
roughly the same capacity too, which comes in handy. When I run amflush
the next day, I will have the images on both the tape and the MO disk :-)

Some days ago I started the usual daily amflush operation. In the middle
of it, a power outage occurred :-(((. What happens in this case is that
some of the files on the holding disk have already been transferred to
tape, so they are not on the holding disk any more. The tape, on the
other hand, did not have its headers updated (not the AMANDA headers,
but its low-level headers - on my system this is done at the end of the
flushing, after the rewind command is issued), so the drive will not
find the transferred files on it. The result was that all the
transferred images were lost :-(

My double-net strategy saved me: I copied the images from the MO disk
back to the holding disk and started amflush again :-)


-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Linux setup problems

2001-02-06 Thread Chris Karakas

Stan Brown wrote:
 
 ERROR: debian: [can not access /dev/hdc1 (hdc1): Permission denied]
 ERROR: debian: [can not access /dev/hda1 (hda1): Permission denied]
 ERROR: debian: [can not access /dev/hda3 (hda3): Permission denied]
...
 brw-rw1 root disk   3,   1 Nov 30 10:22

Assuming that this is set so on all three disks, you should make
"amanda" (your AMANDA user) a member of the group "disk".

 ERROR: debian: [can not read/write /etc/dumpdates: No such file or
 directory]

touch /etc/dumpdates
Then give the user "amanda" read/write permission on this file.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: It's Impossible!! Is the tape full?

2001-02-06 Thread Chris Karakas

Adolfo Pachn wrote:
 
   planner: Last full dump of server:/home on tape Diaria-000 overwritten on
 this run.
   planner: Last full dump of server://javierpuech/Documentos on tape
 Diaria-000 overwritten on this run.
 

This tells you that you have two full dumps on this tape and they are
the only ones (the last ones) you have for /home and
//javierpuech/Documentos. (A "full dump" is a dump of all the files in
the filesystem you specified, it does not mean that the tape is full).
The message is normal, since you use only one tape: AMANDA overwrites
all the files on the tape each time you use this same tape again and
warns you if she has to delete the last full dumps available for a
filesystem. You probably thought that the files would be *appended* to
the tape, but this is not possible with AMANDA (and there are good
reasons for this).

Remedy: Use more than one tape. Read the docs and
http://www.backupcentral.com/amanda.html.


-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: self check request timed out

2001-01-25 Thread Chris Karakas

Ben Hyatt wrote:
 
  If it shows a value of 2 at this point, which is before any Amanda code
  is run, then it almost has to be a compiler or loader error.
 
 H, gcc version 2.8.1 is what I am using...
 
  John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
 

Ben and John,

I read this thread like a detective's story! I saw in the process that
your AMANDA is compiled using libc.so.1:

Program received signal SIGSEGV, Segmentation fault.
0xff0b6e94 in strlen () from /usr/lib/libc.so.1
 ^^

I am not an expert in all these versions, but I think libc.so.6 is the
state-of-the-art and everything older is asking for trouble... Could you
please double-check your symlinks and libraries in the paths listed in
your /etc/ld.so.conf ?


 thanks,
 -Ben

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Weird Tape drive behavior

2001-01-25 Thread Chris Karakas

Johannes Niess wrote:
 
 server:~ # rm /tmp/dattest;mtst -f /dev/nst0 rewind;mtst -f /dev/nst0
 status; echo;dd if=/dev/nst0 of=/tmp/dattest;echo;cat /tmp/dattest

Try specifying the block size in the dd command:

dd if=/dev/nst0 of=/tmp/dattest ibs=32k obs=32k
^^^
(some implementations of dd may need an explicit setting of input and
output block sizes, so don't just use bs=32k for the moment).


 SCSI 2 tape drive: File number=0, block number=0, partition=0.  Tape
 block size 0 bytes. Density code 0x25 (DDS-3).  Soft error count since
  ^^

You didn't set the tape's block size. Try 

mt -f /dev/nst0 setblk 32768

to set it to 32k, which is AMANDA's fixed block size.

If this does not help, then I suspect a hardware failure, probably
cabling (LVD?).

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Ejecting tapes

2001-01-19 Thread Chris Karakas

Ben Elliston wrote:
 
 Is there a way for Amanda to eject the tape at the end of a run so that I
 can simply remove it each day (and to indicate that the tape run has
 actually completed)?
 
 Or should I just run `mt offline' myself?

Don't run "amdump ..." directly. Instead, write a script that calls
amdump and after that does a "mt rewoffl" and whatever other stuff you
like. Let cron, then, run this script.
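A minimal sketch of such a wrapper (the config name and device are assumptions; AMDUMP and MT are overridable so the skeleton can be exercised without a real drive):

```shell
#!/bin/sh
# Run the backup, then rewind and eject so the operator can swap tapes.
AMDUMP=${AMDUMP:-amdump}
MT=${MT:-mt}

backup_and_eject() {
    config=$1; tape=$2
    "$AMDUMP" "$config" || return 1
    "$MT" -f "$tape" rewoffl      # take the tape offline / eject it
}

# crontab entry (illustrative):
#   45 0 * * 1-5  /usr/local/sbin/backup-and-eject Daily /dev/nst0
```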

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Only one dumper running

2001-01-18 Thread Chris Karakas

Ben Elliston wrote:
 
 I'm doing a test run with about 8 entries in my disklist and Amanda is only
 running one dumper. 

If you are trying to increase the number of dumps that go on at once on
a single client, you need to increase "maxdumps".  If you need more
dumpers because clients are "starved" for service, then increase
"inparallel".

(stolen from the list archives)


 What does `no-diskspace' mean here?  I seem to have free network capacity,
 so why aren't up to 4 dumpers (as I've configured) running?
 

For the dumpers, see above. `no-diskspace' means you don't have enough
holding disk space. Perhaps you have enough space, but you didn't free
it for full backups, so that the extra dumpers are not allowed to do
anything -- set the "reserve" parameter to a value lower than 100, see
the comments in amanda.conf for this.
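The relevant amanda.conf knobs might look like this (the values are illustrative, not recommendations; check the comments in your own amanda.conf):

```
inparallel 4    # number of dumpers that may run at once, across all clients
maxdumps 2      # simultaneous dumps allowed on a single client
reserve 50      # percent of the holding disk kept for incrementals in
                # degraded mode; below 100 lets full dumps use the rest
```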

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Weird illustration of peculiar interactions :-}

2001-01-18 Thread Chris Karakas

Martin Apel wrote:
 
 
  So?  I was trying to point out that simply selecting the biggest
  dump may not give you the best packing.  Often, the few tapes contain
  four or five smaller dumps and can obtain a 99.8% usage rate.
 
...

 Yes, you are right. You might achieve a better packing by a more intelligent
 algorithm. 

I don't know if you noticed it, but you are talking about the famous
"bin packing problem" in combinatorics. Just search the web for "bin
packing" and you will find quite a few algorithms and further literature
on this vast subject (even AMANDA uses one, according to some old
papers).
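As a toy illustration, here is the classic first-fit-decreasing heuristic, one of the simplest bin-packing algorithms (the dump sizes and tape capacity are made-up numbers, and this is not claimed to be the algorithm AMANDA itself uses):

```shell
#!/bin/sh
# First-fit-decreasing: take sizes in descending order, drop each dump
# into the first tape that still has room, open a new tape when none does.
capacity=20                 # GB per tape (illustrative)
sizes="9 8 7 4 3 2"         # dump sizes in GB, already sorted descending
bins=""                     # remaining free space on each tape
for s in $sizes; do
    newbins=""; placed=0
    for b in $bins; do
        if [ "$placed" -eq 0 ] && [ "$b" -ge "$s" ]; then
            newbins="$newbins $((b - s))"; placed=1
        else
            newbins="$newbins $b"
        fi
    done
    bins=$newbins
    [ "$placed" -eq 1 ] || bins="$bins $((capacity - s))"
done
set -- $bins
ntapes=$#
echo "tapes needed: $ntapes"
```

FFD is a heuristic, not optimal, but it is guaranteed to stay within a constant factor of the optimal packing.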

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: amanda hacks

2001-01-18 Thread Chris Karakas

Andrew BOGECHO wrote:
 
 My main worry is that small change done to one small file, hence a
 very tiny level 1, that gets dumped, written to tape, and removed from
 the holding disk, before it ever shows up in the amdump log.

Why don't you keep it simple? Here's what I do here and it works
perfectly: I just don't put any tape in the drive. The backup images
thus all go to the holding disk. In my backup script, immediately after
the "amdump ..." command, I have a "cp -av ..." ;-). So after AMANDA is
finished, the backup images are copied from the holding disk to my MO
disk, which is patiently waiting there. The next day I check that
everything went well and I do an "amflush ...". I thus get AMANDA
backups on removable MO disks with the added feature of tape copies ;-)
(you see, it all depends on how you see the world).

In place of my MO disks, you would use a large part of your hard disks,
large enough to accommodate seven days' worth of data. Your holding disk
should be separate from this and also large enough to hold one day's
backup images. Try it, "It works with AMANDA (TM)" :-)
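A sketch of the trick (the paths and config name are hypothetical; only the copy helper is the interesting part):

```shell
#!/bin/sh
# "No tape in the drive" trick: amdump runs with an empty drive, images
# land on the holding disk, and this helper mirrors them to a removable
# disk, preserving timestamps and modes.
mirror_holding() {
    src=$1; dst=$2
    cp -a "$src/." "$dst/" || return 1
}

# Intended use (not run here):
#   amdump Daily                                   # no tape loaded
#   mirror_holding /var/amanda/holding /mo/backup  # second copy
#   ... next day, after checking the report: amflush Daily
```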

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: amcheck with gnutar

2001-01-17 Thread Chris Karakas

Takayuki Murai wrote:
 

 my client's /etc/group:
 ---
 operator:*:5:root,amanda
 amanda:*:1000:amanda
 
...

   The files of permissions are:
  
   -rwsr-x---  1 rootamanda   52300 Jan 15 17:06 runtar
   drw-rw-rw-  2 amanda  amanda  512 Jan 16 15:58 gnutar-lists
  

Since amanda is in the operator group on your client, runtar, at least,
should be owned by group operator, not group amanda. I see that you have
a user amanda (on the client and the server), a group amanda (on the
client, as seen in the file permissions above), a group operator (on
the client) and a group disk (on the server). Of course, you can make it
that complicated, but then you must know what you are doing ;-)

I suggest you stick to *one* AMANDA user and *one* group with disk
access rights for both server and client. I have decided for "amanda" as
the AMANDA user and "disk" for the disk access group where amanda
belongs. It is then enough to change the *group* ownership to the disk
group. So I have:

-rwsr-xr--   1 root disk    21344 Dec 10 16:42 /usr/lib/amanda/runtar




 Please give me some advice!!
 
 taka
 
 Takayuki Murai
 [EMAIL PROTECTED]
 
  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED]]On Behalf Of
  [EMAIL PROTECTED]
  Sent: Tuesday, January 16, 2001 5:45 PM
  To: [EMAIL PROTECTED]
  Subject: Re: amcheck with gnutar
 
 
  taka murai hath declared on Tuesday the 16 day of January 2001  :-:
  
   ERROR: dirac: [can not execute /usr/local/libexec/runtar:
  Permission denied]
   ERROR: dirac: [can not read/write /usr/local/var/amanda/gnutar-lists/.:
   Permission denied]
  

   How can I do to get Permission allowed?
 
  First off, what user are you running amandad as on the client?
  In /etc/inetd.conf:
 
  amanda  dgram   udp waitamanda /usr/local/libexec/amandad amandad
  ^^
 
  Next, is this user in the amanda group to be able to run runtar,
  it it possibly operator (Freebsd) or disk (linux), you could just
  add amanda to the amanda group in /etc/group as you want amanda to
  stay in the group that has read access to the disks (operator/disk)
 
  /etc/group:
  amanda:*:6:amanda  (or something similar...)
 
  As for gnutar-lists... *shrug*
  check the permission to the directories under it, /usr/local/var/,
  /usr/local/var/amanda. Also it is a bad idea to have write permission
  to all other groups on the system, someone could be nasty!
 
  --
  Robert "bobb" Crosbie.
  System Administrator, Internet Ireland.
 

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Will amanda do what I need?

2001-01-17 Thread Chris Karakas

Ed Troy wrote:
 
 I have a small peer to peer network with several windows 95 machines and a
 windows 2000 machine and a linux machine. Ideally, what I would like to be
 able to do is to backup everything, on a regular, to a very large (and
 prehaps removable) ata hard drive on the linux box. 

...

 Is Amanda what I am
 looking for? 

AMANDA can do what you want, just leave the files that AMANDA creates on
your "very large" drive. Depending on the version you will use, she
might not "trust" those files (trust, that is, that they really have
made it to some tape ultimately), so you might have to use some trick. 

As far as Windows is concerned, you will not be able to back up the
registry, swap file, or other active or system files, so a separate
measure has to be taken for them. You will experience problems in the
estimates of incrementals for the vfat filesystems (not an AMANDA
problem per se, rather a tar/kernel issue): sometimes they will be as
large as full ones. To get around this, you will need to hack AMANDA a
little (but really only just a bit) and recompile (I will post a report
on this soon, whatever soon means...). You will need SAMBA on your Linux
box to back up the Windows boxes with AMANDA. And you will need to be
fluent in commands like dd, tar, gzip and basic Linux administration.

Now the good news: After having gone around all the pitfalls above,
backups will never be an issue for you again! AMANDA will take care of
everything (Linux _and_ Windows hosts) and you will be free for more
creative tasks! Never again having to think "Hmm... shall I do a full
backup on that host today, and an incremental level 2 for that one, or
did I already do it yesterday - and which backups are due for the other
one?". All backups with one method, with a stable system :-), with
classic, standard tools, with a sophisticated strategy (one that really
deserves this name). Configure and forget. Period.

PS: You don't need (portable) backup media that are so large. The
important thing is that each medium has enough space for a full backup
of some (configurable in the disklist) portion of your filesystem plus
the incrementals for the rest (the increment level is decided by AMANDA,
so what really matters is the daily rate of change of your systems).
Read http://www.backupcentral.com/amanda.html for the details. 

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Question

2001-01-13 Thread Chris Karakas

Daren Eason wrote:
 
 ...The problem I am running in
 to is that amcheck is looking for the configuration file in
 /PATH_TO_AMANDA/etc/amanda/amanda/amanda.conf/amanda.conf ...

Recompile AMANDA: remove config.cache and run ./configure, setting
--with-configdir=/etc/amanda. Thus all your configurations will be saved
under /etc/amanda/config. To see all the parameters recognized by
"configure", run "./configure --help". Here's how I compiled mine
(2.4.1p1 with ftape, YMMV):
 

   ./configure --prefix=/usr \
   --libexecdir=/usr/lib/amanda \
   --sbindir=/usr/bin \
   --with-config=Set1 \
   --with-configdir=/etc/amanda \
   --with-gnutar-listdir=/var/lib/amanda/gnutar-lists \
   --localstatedir=/var/spool --with-bsd-security \
   --with-amandahosts --with-portrange=5,50100 \
   --with-tape-device=/dev/nqft0 --with-ftape-rawdevice=/dev/rawft0 \
   --with-debugging=/var/lib/amanda/debug \
   --with-user=amanda --with-group=disk \
   --with-gnutar=/usr/local/bin/gtar-wrapper --with-smbclient=/usr/bin/smbclient \
   --with-samba-user=chris \
   --sysconfdir=/etc --with-gnutar-exclude \
   --with-buffered-dump --disable-libtool \
   --disable-shared --disable-static

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Problem with amcheck

2001-01-13 Thread Chris Karakas

"Bernhard R. Erdmann" wrote:
 
 It'a typical problem when DNS is not set up properly.

Let's be clear here: I don't want to setup a DNS server for a network
that is not supposed to have _direct_ access to the internet, one that
uses the 192.168.xxx.xxx addresses and is small enough to fit on a few
/etc/hosts lines - period.

Sendmail calls this a "simpler" world, just because about anyone has set
up a DNS server in their network today. This makes the job "simpler" for
sendmail, but it does not make the world any simpler, because this
"simplicity" is bought with the extra headaches people like me are
confronted with when they don't want to use the "simpler" solution.

Name resolution works fine here with simple, small, manageable
/etc/hosts files. In this sense, DNS _is_ set up properly. You just have
to do all the other tricks for programs that insist on having a domain
name to work with. Of course, there is the possibility of using
/etc/nsswitch.conf, by just not putting "dns" there. Sendmail uses the
nsswitch.conf file. But the moment the gateway goes online through a
dialup connection to the Internet, you will have not only to update
/etc/resolv.conf with the nameservers given to you by the peer PPP
server, but also to change /etc/nsswitch.conf to reflect the fact that
now you _can_ use dns. Is _this_ what you mean by a "properly" set up
DNS? No thanks.

I found the "." solution a working one. It saved me from having to
ponder scripts that call scripts that change this file and that file...
(consider for a moment that the "gateway" is a laptop, the "modem" is
a PCMCIA card, and the pcmcia scripts call the ip-up script, which calls
the firewall scripts, changes /etc/resolv.conf and should change
/etc/nsswitch.conf, probably not only on the gateway but on all
internal machines too, just to have DNS set up properly...).

That said, let me say that I will be happy to hear about a simpler
solution than the above that also sets up DNS properly.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: unable to amcheck, amdump

2001-01-11 Thread Chris Karakas

Takayuki Murai wrote:
 
 ERROR: running as user "root" instead of "amanda"

Don't run AMANDA as root. Run it as the AMANDA user (which I hope you
have specified). If your AMANDA user is, say, amanda, then do

su amanda -c "amcheck test1"

 WARNING: program /usr/local/libexec/planner: not setuid-root
 WARNING: program /usr/local/libexec/dumper: not setuid-root
 WARNING: program /usr/local/sbin/amcheck: not setuid-root

Make sure the following programs have permissions as shown:

  -rwsr-x---   1 root backup244716 Nov 22 20:04 libexec/calcsize
  -rwsr-x---   1 root backup804744 Nov 22 20:08 libexec/dumper
  -rwsr-x---   1 root backup233312 Nov 22 20:04 libexec/killpgrp
  -rwsr-x---   1 root backup945028 Nov 22 20:08 libexec/planner
  -rwsr-x---   1 root backup231016 Nov 22 20:04 libexec/rundump
  -rwsr-x---   1 root backup231852 Nov 22 20:04 libexec/runtar
  -rwsr-x---   1 root backup953900 Nov 22 20:08 sbin/amcheck

Your group ownership may be different, but it should be consistent with
the above (and the AMANDA user "amanda" should then be a member of group
"backup" in this example). These Amanda programs (no more, no less)
**must** be owned by root and have the setuid bit turned on (rws). This
has to be done only on the server.
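The fix boils down to something like the following (shown on a scratch file, since chown to root needs root privileges; the real targets are the binaries listed above, and the group name is whatever your build used):

```shell
#!/bin/sh
# Set owner root, the backup group, and mode 4750 (setuid root,
# group-executable, no access for others: -rwsr-x---).
fix_perms() {
    # chown root:backup "$1"   # needs root; uncomment on the real binaries
    chmod 4750 "$1"
}
f=$(mktemp)
fix_perms "$f"
mode=$(ls -l "$f" | cut -c1-10)
rm -f "$f"
```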

 ERROR: /dev/nst0: reading label: Input/output error
(expecting tape test11 or a new tape)

You put in a tape that either was not labelled by AMANDA (you must use
amlabel on each tape before AMANDA uses it), or was defective and could
not be read, or simply was not the tape with label test11. In any of
these cases, the tape was _not_ tape test11, and that is what matters,
because that is what AMANDA expects.
 WARNING: localhost: selfcheck request timed out.  Host down?

Two problems here: first, you use "localhost" as the name of your
client. Use the hostname instead (best is the fully qualified name,
"best" meaning "you avoid most trouble this way" ;-) ). And second, the
inetd daemon on your client (your localhost) seems to be misconfigured
as far as AMANDA is concerned. Check the FAQ for this (and also the rest
of the documentation in the docs directory, as well as
http://www.backupcentral.com/amanda.html).

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Help replacing tapes

2001-01-11 Thread Chris Karakas

Andrew Robinson wrote:
 
 ... I
 figured it was time to replace them. The question is exactly how do I do
 that? 

Suppose you want to replace DAILY0. Then do

amrmtape config DAILY0
throw away the old DAILY0 tape
amlabel config DAILY0 (with the new tape in the drive)
do the scheduled backup for today: AMANDA will request a "new" tape; put
the new DAILY0 in and you are done for today.

For the next 19 days you repeat this procedure with tapes DAILY1, ..., DAILY19.
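The two commands of the daily swap can be wrapped in a small helper; a sketch only (the function is mine, "config" and the DAILY labels are the ones from this thread):

```shell
# Make AMANDA forget a worn-out tape, then label its physical
# replacement (which must already be in the drive).
replace_tape() {
    config=$1
    label=$2
    amrmtape "$config" "$label" && amlabel "$config" "$label"
}
```

Run e.g. `replace_tape config DAILY0` today, `replace_tape config DAILY1` tomorrow, and so on up to DAILY19.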

 
 I intend to recycle the old tapes for monthly archives. I figure after all
 20 of the new tapes have been used once, I can safely relabel the old
 tapes. 

You can actually relabel the tape just after you have amrmtaped it. For
AMANDA, after "amrmtape" the tape is "new", even if for you it is "old"
;-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Disaster Recovery Recipe

2001-01-11 Thread Chris Karakas

"Bort, Paul" wrote:
 
 If you can't append, you could write your indexes to a separate tape, as a
 separate backup set. 

Or to a MO/Zip/ORB disk. You just have to

cp -auv /var/lib/amanda/Set1 MO/Zip-directory

It may not be "vital" for the backups to save the index, but I insist
on having it copied to MO after each AMANDA run. You can read horror
stories about lost backup databases in "Unix Backup and Recovery" ;-)


-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: taper: FATAL shmctl: Invalid argument

2001-01-11 Thread Chris Karakas

"Shane T. Ferguson" wrote:
 
 I checked ipcs and it doesn't list anything in the shared memory segments (i
 am running RH6.1 with 2.2.18 kernel).
 

I read somewhere on this list that

ipcs -l   
ipcrm

might help. I also found the following on this list:

Shared memory is a kernel feature that you may have to enable or turn
on in some way.  Try "ipcs -a", which should report all current shared
memory segments, semaphores and message queues.  With luck, it will
either tell you there are a bunch of old shared memory segments for the
Amanda user caused by your testing, in which case you can use ipcrm to
clear them, or it will tell you shared memory is not enabled, in which
case you'll have to find out from someone who knows about your OS (or
from the system documentation) how to turn that on.

(If it works don't thank me, thank the list ;-))
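A sketch of that cleanup step (the `ipcs -m` column layout and the owner name are assumptions; check your system's ipcs output before trusting the awk field number):

```shell
# Remove all shared-memory segments owned by a given user, e.g. stale
# segments the AMANDA user left behind after interrupted test runs.
clean_user_shm() {
    owner=$1
    # on Linux, "ipcs -m" prints: key shmid owner perms bytes nattch ...
    ipcs -m | awk -v u="$owner" '$3 == u { print $2 }' |
    while read -r id; do
        ipcrm -m "$id"
    done
}
```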
-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: help

2001-01-11 Thread Chris Karakas

Sandra Panesso wrote:
 
 ? gtar:
 
./etc/amanda/miro_daily/index/ernst.tomandandy.com/_Local_Library_CVSRoot/20010111_1.gz.tmp:
 Warning: Cannot stat: No such file or directory

Clear: No such file or directory! But why? Well, have a look at the
name: 20010111_1.gz.tmp. The .tmp suffix means it is a temporary file.
It was probably there when tar started, but gone by the time tar came
to archive it. No problem, quite normal.

 ? gtar: ./mailman/logs/qrunner: file changed as we read it

Clear again: file changed as we read it! But why? Have a look at the
name again: ./mailman/logs/qrunner. This is a log file for some mail
program. Is it that strange that it changed while tar was running? I
don't think so. So, don't panic ;-)

But why does AMANDA mark these info lines "STRANGE"? Because she does
not understand them and thinks it is better to flag them "STRANGE" and
forward them to you. If you want to eliminate them, you have to ensure
that no file changes during AMANDA's run. Here is how AMANDA works in
this respect (stolen from the list):

Amanda watches the stderr lines from the backup program and pattern
matches them against things it expects (normal).  Every other line is
considered "strange".  If the backup failed or there were any strange
lines, it reports them in this section of the E-mail.

The character at the front of the line is a code for the class of line:

  |   a normal (expected) line
  ?   a strange (unclassified) line


-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



tar: 200MB vfat system allegedly changed mtime in one second!

2001-01-06 Thread Chris Karakas

Hello,

I admit that the headline looks like it came from the tabloid press ;-)

I think I have bumped onto a tar bug. I did this while doing AMANDA
backups on a vfat filesystem mounted as /dos/d. I have hacked AMANDA a
very little bit to let her pass the parameters --incremental
--newer-mtime to tar, since --listed-incremental does not work on vfat
due to non-constant inodes since kernel version 2.2.5 (I think). This is
SuSE Linux 6.4, kernel 2.2.14, tar 1.13.18, AMANDA 2.4.1p1-something. I
am sending this to both amanda-users and bug-tar lists. 

Usually, passing --incremental --newer-mtime to tar seems to work well
for my case and I was about to send a report to amanda-users. But look
at this, that I discovered today:

Suddenly, on Jan 6, 2001, between 06:28:14 GMT and 06:28:15 GMT
(i.e. in only one second!), 236MB of /dos/d (which is 253MB in total,
i.e. almost all of /dos/d!) changed its modification time!
Look at the following output:

bacchus:~ # /bin/gtar --create --directory /dos/d --incremental \
  --newer-mtime "2001-01-06 06:28:14 GMT" --sparse --one-file-system \
  --ignore-failed-read --totals --file /dev/null .
Total bytes written: 247214080 (236MB, 39MB/s)

bacchus:~ # /bin/gtar --create --directory /dos/d --incremental \
  --newer-mtime "2001-01-06 06:28:15 GMT" --sparse --one-file-system \
  --ignore-failed-read --totals --file /dev/null .
Total bytes written: 286720 (280kB, 93kB/s)

Of course, AMANDA has correctly backed up a level 1 of 236+ MB...
But why should this one filesystem change mtime almost as a whole
in just one second? I am clueless. 

I have checked the mtimes with "ls -l" for some of the files that tar
would archive in the 236MB run. They had _not_ changed. I checked the
ctimes with "ls -lc" and found that _a_lot_ of files had a ctime in the
future! The 236MB archive contained files with ctimes in the past as
well as in the future, which is consistent anyway, since I gave tar the
--newer-mtime option.

I checked the logs. The only logged operation within 3 minutes before
06:28:14 GMT is a mke2fs on /dev/sdd (an MO disk with 2048-byte
sectors), with the necessary "syncing disks". But we are talking about
/dos/d, which is on /dev/hda...

As I said, I consider this to be a bug.

Any ideas?

-- 
Regards

Chris Karakas

don't waste your cpu time, search for an OGR: http://www.distributed.net



Re: RV: RV: amdump

2001-01-05 Thread Chris Karakas

Monserrat Seisdedos Nunez wrote:

 now i'm sure it is a harward problem because i backup it directly using tar
 and it hangs too.

It seems that there are more problems here: 

1) You say you run tar 3 consecutive times and then it lists the
archives. What exactly does it list? The contents of the archives? I
find this hard to believe, unless the two previous unsuccessful
invocations of tar manage to position the drive so that the third time
it just gets the tar part of the image. This is a very unorthodox way
of treating the tape, if it works at all! You should really try the
"recommended way".

2) Hardware problems with your drive.

 the tape is a internal scsi one, so i supose it is well terminatted.

3) Too much faith in the guy who assembled your computer ;-). I suggest
you check the internals too, in fact every detail.

 Maybe it is the version of tar: 1.13.17-8!!

4) tar version 1.13.17 is problematic too, but for other reasons.
You should upgrade to 1.13.18.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: will tapes in rotation be reused???

2001-01-04 Thread Chris Karakas

Mark Abene wrote:
 
 Does amanda know that it should reuse tapes
 that have plenty of space left on them, as opposed to clobbering a tape that
 is full, but still technically marked as reuseable?
 

AMANDA _will_ reuse the tape when its turn comes. AMANDA rotates the
tapes one after the other and keeps track of them. Once a tape has been
written, it has to wait its turn to be used again. Even if very little
data was written to that tape, AMANDA will not ask for it before its
turn comes again. To put it another way, AMANDA does not append the
images of a *different* run to a tape already written in a previous
run; only when the tape's turn comes again is it overwritten. This is
intended behaviour. Trying to append to an already used tape is very
problematic, due to all the different implementations in the hardware.
You risk losing data. So AMANDA will just require the next tape, not a
previous one, each time she runs anew.

AMANDA tries to balance the load over the tapes. If she has already run
for some dumpcycles and your system does not change very rapidly, you
should see that all the tapes are used to more or less the same extent
(if this is not the case, then either your system changes too
erratically, or there are entries in your disklist that are
significantly larger than the majority of the rest). This is what
"balancing the backup load over the dumpcycle and over the tapes" means.
If, under these circumstances, you see that there remains plenty of
space unused on your tapes, then you can safely reduce the dumpcycle,
leaving all other parameters untouched. This will increase the mean load
on your tapes and result in a better average usage of them. For this to
work well, you have to define your disklist entries so that they will
fit on the tape, leaving enough room for the incrementals even when
there will be more of them on the tape. It is good to define entries of
various sizes (ideally, the sizes should be uniformly distributed
between their min and max), so that AMANDA can combine them well to
utilize the tape better. You see, it's all about combinatorics and
statistics (and I am happy that AMANDA does this for me) :-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: RPM Weirdness with amreport

2001-01-03 Thread Chris Karakas

Josh Kuperman wrote:
 
 I still have a problem with amreport. Amreport, would complain that
 there was no print command defined and the dump would fail to
 complete, if and only if I include the path to the PostScript tape
 label in my tapetype definition. Any ideas what that is all about.
 

Did you install a new lpd/lpr lately? I ran into a similar problem and
found out that the lpd RPM had changed the permissions. Here's the note
I made about it:

If you install a new lpd/lpr, don't forget to set the right
permissions! If you forget, either lpd will have too many of them,
or lpr too few, or both (so that amreport complains

/usr/bin/lpr: cannot create /var/spool/lpd/midas-lp/.seq

and the AMANDA reports do not get printed!)

chmod 700 /usr/sbin/lpd
chown root.lp /usr/bin/lpr
chmod 6555 /usr/bin/lpr


-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Small tapes

2001-01-03 Thread Chris Karakas

Gregory Propf wrote:
 
 Looks like the compression command is only for SCSI tapes. 

Yes it seems so. Are you sure your floppy drive _uses_ hardware
compression? The ftape manual does not provide an MTIOCTL for hardware
compression, from which I deduce that there is no need for it.

 The scheme
 for using floppy tapes is a little goofy under Linux. 

Claus-Justus Heine, the ftape maintainer, has put a great deal of work
in making floppy tapes have an mt interface that looks as close as
possible to the SCSI one. You load the modules and use mt to access the
tape as usual. What is "goofy" about this?

 The best advice
 is "Don't use floppy tapes".  They truly are crap.  Mine is slow, runs
 hot and makes all sorts of wheezing sounds while it runs. 

Well, again it depends: mine is fast, not hot at all, and not loud at
all. It is quieter than under Windows, because it _streams_. I never
really knew why streamers are called that, until I switched from
Windows to Linux. Then I realized that what I had taken for "normal
operation" of my drive was close to "shoe-shining", compared to its
operation under Linux: when the drive really streams, it is silent.
When it cannot be fed data fast enough, it "shoe-shines", producing
those annoying sounds.

I use an FC-20 floppy controller. There is also a floppy controller on
Adaptec's 1542CF ISA SCSI card (I think). Try to get an FC-20, it
should be a bargain nowadays. It is supported by ftape (although not on
DMA channels 5, 6 and 7, due to the lack of documentation from the
manufacturer). This will make your drive fast enough; you don't need a
Ferrari for backups, they can take as long as they like, as long as
they are done by the next day. And I back up 8GB of a network on a
14-day dumpcycle using 3M's 650MB floppy tapes (MC3000XL) and "best"
software compression in AMANDA with *no* problems.

 I guess I
 should break down and buy a SCSI.  Anyone have any advice on a good low
 cost SCSI tape?

And I thought you would stay in the AMANDA floppy tape club... ;-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Colorado Tape

2001-01-03 Thread Chris Karakas

Richard Grace wrote:
 
  Gregory Propf [EMAIL PROTECTED] 01/01/01 04:54am 
 
  I have an old Travan floppy tape (400mb uncompressed).
 
 This makes me wonder if anyone is using Colorado QIC floppy tapes...?
 

I have a Colorado 1400 floppy tape drive and an FC-20 floppy controller.
I use QIC-3020 650MB tapes from 3M (Imation) (MC3000XL). It all works
fine under Linux and AMANDA. I have posted a detailed message to this
list about compilation of ftape for use with AMANDA on Oct. 25th 2000. 

 Has anyone got (or heard of) one running with amanda?  Perhaps there is
 better support in Linux or Solaris x86?

ftape is actively maintained on linux by Claus-Justus Heine. There is a
mailing list (linux-tape) which will certainly answer your questions.
Check the Linux Ftape Homepage at
http://www.instmath.rwth-aachen.de/~heine/ftape. AMANDA works fine with
ftape under Linux. I will be glad to help where I can to get this
working for you too.

 I really only need to back up /etc on the local machine and a few others, so
 the size of the tape would be perfect if I can make it work.
 

Go on and give it a try!

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Backup of soft-raid device?

2001-01-02 Thread Chris Karakas

Rainer Hofmann wrote:
 
 Is it possible to backup a device /dev/md0/, which is a software raid
 level 0, at all?

Yes, I use tar (1.13.18) for that without any problems. In the disklist,
I use directory names, instead of devices.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: amanda

2000-12-21 Thread Chris Karakas

Jan Van den Abeele wrote:
 
 For obvious reasons (inetd is no more ... xinetd.conf) this doesn't work
 for Redhat 7.0...

Check the list archives using "xinetd" as keyword. This has been handled
already some times in the past. I found the following posting (but there
might be more out there):

snip-

1) the rpm files don't create a file amandad in /etc/xinetd.d, so you
have to create one :

#xinetd.d file = amandad#
# default: off
#
# description: Part of the Amanda server package

service amanda
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = operator
        group           = disk
        server          = /usr/lib/amanda/amandad
        disable         = no
}

after this you have to create an entry in hosts.allow as well ... I
added the following line on the server PC:

ALL: localhost 127.0.0.1 128.197.61.90

...

4) I used the following amandaidx amidxtape files :
#xinetd.d file = amandaidx#
# default: off
#
# description: Part of the Amanda server package

service amandaidx
{
        socket_type     = stream
        protocol        = tcp
        wait            = yes
        user            = operator
        group           = disk
        server          = /usr/lib/amanda/amindexd
        disable         = no
}

#xinetd.d file = amidxtape#
# default: off
#
# description: Part of the amanda server package
#

service amidxtape
{
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = operator
        group           = disk
        server          = /usr/lib/amanda/amidxtaped
        disable         = no
}

snip-



-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: dump scheduler algorithmn

2000-12-20 Thread Chris Karakas

Amanda Backup wrote:
 
 My configuration (2.4.2) has these settings:
 
 dumpcycle   1 weeks
 runspercycle6
 tapecycle   18 tapes
 
 Does this mean that given a level 0 today, the
 next level 0 will be at the latest 1 week from
 today?  

Exactly. Of course, if the balance of load over the week (i.e. over the
dumpcycle) permits it, you might see level 0s more often.

 Or could it mean that within each dump cycle
 there will be at least 1 level 0 but that
 they could be up to 12 dumps apart (begin
 of one cycle, end of next)?  That is nearly
 what one file system is doing.
 

Normally, this shouldn't happen. If it does, it means that the load is
too heavy for this dumpcycle. Now, what does this mean, you will
ask... ;-) It means that the dumpcycle is too short, or that the tapes
have too small a capacity and cannot absorb more data per run. The
easiest solution is to increase dumpcycle to, say, 10 (you can verify
that it is indeed a balancing problem by running "amadmin config
balance" and looking at the projected load and the number of overdue
disks).

What AMANDA always tries is to get the load as evenly distributed as
possible over the dumpcycle period, taking into account constraints like
"at least one level 0 every dumpcycle days per disk", priorities,
capacities, previous backup sizes etc. This is a "bin packing problem",
the "bins" being the tapes :-). There is no efficient exact solution;
it is a computationally very hard problem and the subject of
ongoing academic research. But if you choose your parameters
"reasonably", you will see that AMANDA solves it very well (I am always
amazed by the combinations she uses to "pack" all those levels in one
tape efficiently). 

I suggest you start with a somewhat larger dumpcycle than you think is
necessary, say 15 days. Let AMANDA do some dumpcycles this way. You will
get a feeling of the "balanced load" on your tapes, i.e. you will see
that your tapes get *evenly* filled (as long as this is possible of
course), up to a certain percentage. AMANDA will *not* try to fill the
tapes as much as possible because she tries to spread the backups evenly
on the tapes. As soon as you see that there is consistently room left
unused on your tapes, you can start decreasing your dumpcycle
carefully. This way you will eventually reach the optimum :-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: about setting up the disklist.conf

2000-12-16 Thread Chris Karakas

richard wrote:
 
 Dear all,
   Thanks. You are right. After I have rewinded the tape by:
 mt -f /dev/nst0 rewind
   It does not show the following error any longer.

Wonderful. I like success reports :-)

 /sbin/restore: Tape is not a dump tape
 

That's why I asked you what "DUMP" program you have specified in the
dumptype for /usr, dump or tar. It seems that you use tar, not dump.

 have also tried:
 tar -zxvf 202.85.165.88._operator
 tar (child): 202.85.165.88._operator: Cannot open: No such file or directory

Of course ;-)

First, you typed the wrong file name: 202.85.165.88._operator, instead
of 202.85.165.88._operator_usr.20001215.0.

Second, you must skip the first 32K, which is the file header, then pipe
the rest to gzip (if you used compression) and then to tar. Try

dd if=202.85.165.88._operator_usr.20001215.0 ibs=32k obs=32k skip=1 \
  | gzip -dc | tar -xvf -

(don't forget the "-" after "-xvf")

This should work fine. 
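The whole pipe can be tried end to end without a real dump file. This self-contained sketch fabricates a file with a 32 KiB block standing in for the AMANDA header in front of a gzipped tar archive, then recovers a file with the same dd | gzip | tar pipe (all paths are scratch paths, not AMANDA's):

```shell
set -e
work=$(mktemp -d)
mkdir "$work/data" "$work/restore"
echo hello > "$work/data/greeting"

# build "32K header + gzipped tar", the on-disk layout of an AMANDA image
( cd "$work" && tar -cf - data ) | gzip -c > "$work/image.gz"
{ dd if=/dev/zero bs=32k count=1 2>/dev/null
  cat "$work/image.gz"; } > "$work/dumpfile"

# the restore pipe from above: skip the 32K header, gunzip, untar
dd if="$work/dumpfile" ibs=32k obs=32k skip=1 2>/dev/null \
    | gzip -dc | ( cd "$work/restore" && tar -xf - )
cat "$work/restore/data/greeting"   # prints: hello
```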

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: about setting up the disklist.conf

2000-12-15 Thread Chris Karakas

richard wrote:

 labelstr "^DailySet1[0-9][0-9]*$"
 
   I have seven tapes to have a cycle in one week. Is it correct to label
 them as DailySet100, DailySet101, DailySet102,...?
 

Given the label string you have chosen in your amanda.conf, it is
perfectly "legal" to use the names you suggest. Bear in mind that you
are *not* obliged to use a labelstr declaration, i.e. you are
thoroughly free to choose the names you like. labelstr is there just to
help you organize your tapes "better", if you feel like it; I use names
from ancient Greek mythology (Nausika, Tethys, Alkmene, Io...) :-)

 Should I run the command
 amlabel DailySet1 DailySet100
 amlable DailySet1 DailySet101 
 as root or as operator?
 

It is always good practice to do

su amanda-user -c "amanda command"

rather than just

amanda command

no matter what the amanda command is. This means you should prefer
operator over root, if operator is your AMANDA user. The reason is that
these commands *may* change some file (configuration, database, index,
whatever AMANDA can change), and if you run them as root, that file
will end up owned by root, so the next time you run AMANDA as the
AMANDA user you will get in trouble; see the point ;-) ?

   For the amrestore,
 /usr/sbin/amrestore /dev/nst0 202.85.165.88 '/home$'
 amrestore: missing file header block
 amrestore: WARNING: not at start of tape, file numbers will be offset
 amrestore:   0: reached end of tape: date $?
 [—@31 /—
 —¥ƒ#
 

I see garbage here, and my first guess is that nothing reasonable
really got onto your tape. What is your "DUMP" program, dump or tar?
Perhaps you should position the tape ("mt -f /dev/nst0 asf 5", for the
5th file on your tape) before you try amrestore. But read the RESTORE
file in the docs first. AFAIK, using numeric addresses instead of
hostnames for hosts is *not recommended* ;-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Reading a file on the holding disk (solved)

2000-12-13 Thread Chris Karakas

"John R. Jackson" wrote:
 
 So are you saying you just did a cp of the holding disk file to the
 MO disk?  If so, then the MO must have a normal Unix file system on it
 and the dd should have worked, assuming the cp worked.  I'd start by
 comparing the original and the copy, first by length (ls -l) and then
 by bytes (cmp).
 

I have checked again. The bug was something simpler - but not less
frustrating:

I use my newly created gtar-wrapper script (with your help for passing
parameters to tar :-) ). There, at one place I had:

print "$PN: running $GTAR $@"

instead of

print "$PN: running $GTAR $@" >> $log

:-(((

The poor script could do nothing but print the line to stdout, i.e.
into the middle of the pipe that builds the files! A simple grep on the
files verified this. After correcting it, I can read the files on the
holding disk with

dd if=file bs=32k skip=1 | gzip -dc | tar -tf -

without problems. Thanks for your patience :-)

PS: Once I was scared by all this dd stuff. By now I have typed the
above pipe so many times that it seems the simplest thing to do when
disaster strikes; and in fact it is, isn't it? Just for fun, I plan to
insert sed into the pipe, to get rid of the extra "running..." line in
the corrupted files :-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: about setting up the disklist.conf

2000-12-13 Thread Chris Karakas

richard wrote:
 
 In my amanda.conf file, I use
 dumpcycle 1 days# the number of days in the normal dump cycle

dumpcycle is the period within which AMANDA guarantees that every disk
in your disklist gets at least one full backup. Setting it to 1 day
means that on any single day you pick, there must be a full backup of
every disk done that day; you would (probably) need multiple tapes to
achieve this, with runtapes set to 7. But I doubt that this is exactly
what you want ;-). Spreading the full backups over a period of, say, 7
days is much more reasonable, so the second configuration is what you
want:

 dumpcycle 7 days
 runspercycle 1 days
 tapecycle 7 tapes
 
That's fine :-)

PS. Read http://www.backupcentral.com/amanda.html for a detailed
description of AMANDA.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: amdump inconsistancy.

2000-12-13 Thread Chris Karakas

"John R. Jackson" wrote:
 
 Thanks John, I think you are absolutely right to question the dump
 program.  ...
 It's 0.4b19 version ...
 
 Well, I'd be a lot happier if it was ancient so I could blame it :-).
 
I've read in this list that there is a Linux dump 0.4b20 out there, so
there might be hope for you, John :-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Reading a file on the holding disk

2000-12-12 Thread Chris Karakas

"John R. Jackson" wrote:
 
 How do I read a file on the holding disk?
 
 Amrestore knows how to read from the holding disk directly.
 

I have tried

amrestore -h -p /scsi/DynaMo/linux/20001212/bacchus._usr_share.1 \
  | dd bs=32k skip=1 of=test.6

but test.6 seems to be neither a tar file nor a gzipped one. Although
it contains something, I cannot read it with tar or tar -tzf.

 But when I copy the file to a MO-disk (2048 bytes per sector) ...
 
 Huh?  Did you copy it with the Amanda header or not?
 

I did just a simple cp. I suppose the header (the first 32K) was copied
too.

 So are you saying you just did a cp of the holding disk file to the
 MO disk? 

Exactly.

 If so, then the MO must have a normal Unix file system on it
 and the dd should have worked, assuming the cp worked.  I'd start by
 comparing the original and the copy, first by length (ls -l) and then
 by bytes (cmp).
 

I will check this. The MO disk does have an ext2 filesystem. But I
flushed the files to tape, so I will have to wait until the end of the
next AMANDA run in order to find some files on the holding disk to do
the test.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Reading a file on the holding disk

2000-12-12 Thread Chris Karakas

David Lloyd wrote:
 
 Chris!
 
  Why does if=some device work and if=some file does not? Is this
  because I have an ext2 filesystem on the disk and copied the file with
  cp?
 
 If it's a file on an ext2 system, just use tar directly:
 
 tar -xzvf the_file
 

This does not work, because the file has the 32k header at the start.
What I am trying is

dd if=/scsi/DynaMo/linux/20001212/bacchus._usr_src.1 bs=32k skip=1 \
  | zcat | tar -tf -

but I get 

tar: This does not look like a tar archive
tar: Skipping to next header

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



gtar-wrapper returned 2 - backup FAILED

2000-12-11 Thread Chris Karakas

Hello,

I'm almost finished with solving my "incrementals on vfat" problem.
Almost, because now I get an error and I need your advice ;-)

The situation is the following: I have hacked AMANDA's sendsize.c and
sendbackup.c a little (practically, it took no more than commenting out
a preprocessor "#else" :-) ) so that they now pass all parameters to
tar. By "all" I mean "--listed-incremental file" as well as
"--incremental --newer-mtime time". Of course, tar cannot process
both, so I set GNUTAR to point to a wrapper script
(/usr/local/bin/gtar-wrapper) and modified the gtar-wrapper script found
on ftp://gandalf... to check the directory argument, decide whether it
is a vfat one or not, and then chop the corresponding unneeded options
before passing them to tar:

$DEBUG $GTAR "$1" "$2" "$3" "$6" "$7" "$8" "$9" "${10}" "${11}" "${12}"
"${13}" "${14}" "${15}" "${16}"

(note that options no. 4 and 5 are missing; they have been "chopped" ;-)

The idea is: vfat filesystems get the job done with "--incremental
--newer-mtime time", all others use "--listed-incremental file".
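A variant of the chopping step that keys on the option name instead of the fixed positions 4 and 5 (hypothetical, not the wrapper from the FTP site; it only handles the two-word "--listed-incremental FILE" form):

```shell
# For vfat entries: print the argument list with any
# "--listed-incremental FILE" pair removed, one argument per line.
strip_listed_incremental() {
    while [ $# -gt 0 ]; do
        if [ "$1" = "--listed-incremental" ]; then
            shift 2     # drop the option together with its file argument
            continue
        fi
        printf '%s\n' "$1"
        shift
    done
}
```

One would then rebuild the list (e.g. with `set --`) and hand it to $GTAR.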

Everything works fine, tar gets the right options each time:

--- output of gtar-wrapper for /usr/local

gtar-wrapper: start: Mon Dec 11 03:58:52 CET 2000
gtar-wrapper: args: --create --directory /usr/local --listed-incremental
/var/lib/amanda/gnutar-lists/bacchus_usr_local_1.new --incremental
--newer-mtime 2000-12-08  1:37:28 GMT --sparse --one-file-system
--ignore-failed-read --totals --file - .
gtar-wrapper: directory: /usr/local
gtar-wrapper: file: -
gtar-wrapper: pre_real__usr_local not found so nothing special run
gtar-wrapper: running /bin/gtar --create --directory /usr/local
--listed-incremental
/var/lib/amanda/gnutar-lists/bacchus_usr_local_1.new --sparse
--one-file-system --ignore-failed-read --totals --file - .
gtar-wrapper: looking for post_real__usr_local
gtar-wrapper: post_real__usr_local not found so nothing special run
gtar-wrapper: end: Mon Dec 11 04:43:22 CET 2000

---
(notice the difference between the arguments for gtar-wrapper and
/bin/tar)

But then, tar spits an error:

--- AMANDA's mail output for /usr/local

/-- bacchus/usr/local lev 1 FAILED [/usr/local/bin/gtar-wrapper
returned 2]
sendbackup: start [bacchus:/usr/local level 1]
sendbackup: info BACKUP=/usr/local/bin/gtar-wrapper
sendbackup: info RECOVER_CMD=/usr/bin/gzip -dc
|/usr/local/bin/gtar-wrapper -f... -
sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
? /bin/gtar: : Cannot stat: No such file or directory
? /bin/gtar: : Warning: Cannot stat: No such file or directory
| Total bytes written: 53729280 (51MB, 20kB/s)
? /bin/gtar: Error exit delayed from previous errors
sendbackup: error [/usr/local/bin/gtar-wrapper returned 2]
\

---

The problem is that tar says "cannot stat" and exits with code 2, which
is then returned by my gtar-wrapper (exit_code=$?), and (here the real
problem begins!) *nothing is written* to the holding disk (I
deliberately had no tape in the drive). tar worked to the end and
produced a 51MB level 1 image, and it is discarded just because
gtar-wrapper returned 2! :-(

So the question is: how do I tell my gtar-wrapper to ignore "Cannot
stat" messages? Shall I do what AMANDA does, i.e. parse the output of
tar? Should I manipulate tar's return code (e.g. always set it to 0),
just to give AMANDA the impression that everything was O.K., so that
she accepts the output and writes it to disk/tape? Shall I patch tar
(this is version 1.13.18)? And generally, how should I handle this,
given that I can see everything is actually being computed correctly?
You see, I'm really near the end of this odyssey and eager to reach
Ithaka... can you help with the last miles?
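One possible shape for the "parse the output" option, as a sketch (the message patterns and the idea of downgrading exit status 2 are my assumptions, not documented tar or AMANDA behaviour):

```shell
# Return 0 (true) if a captured gtar stderr log contains nothing worse
# than "Cannot stat" warnings plus tar's delayed-exit summary line.
only_harmless_errors() {
    grep 'gtar:' "$1" | grep -v 'Cannot stat' \
        | grep -v 'Error exit delayed' | grep -q . && return 1
    return 0
}

# In the wrapper, after running "$GTAR ... 2>$errlog" with rc=$?,
# one could do:
#   cat "$errlog" >&2                   # let AMANDA still see the lines
#   [ "$rc" -eq 2 ] && only_harmless_errors "$errlog" && rc=0
#   exit $rc
```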

This is AMANDA 2.4.1p1 with the samba patches.

Thanks in advance (and of course: as soon as I get this working
reliably, I will post a summary and the script!).

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: remote backup pb...

2000-12-11 Thread Chris Karakas

Yann PURSON wrote:
 
 Ok...Now that's rigth I'm passing thru a Firewall with IP masquarading,
 but I'm not sure that it's the pb, because when I try to backup only
 argon it works fine...
 

Firewalls with masquerading bring an extra level of complication. I am
quite sure it has to do with this. As a first measure, check the list
archives using the keywords "NAT", "masquerading", "firewall",
"timeout", separately or in combinations. You will then see what I mean.

As a second measure, check the masquerading timeouts with ipchains on
your firewall:

ipchains -L -M

should list the timeouts used (among other things), and

ipchains -M -S timeout1 timeout2 timeout3

with appropriate values for timeout1..timeout3 should set new timeouts
(in seconds) for masquerading of TCP sessions, TCP sessions after
receiving a FIN packet, and UDP packets, respectively.

Since the defaults as listed in
`/usr/src/linux/include/net/ip_masq.h', are currently 15 minutes, 2
minutes and 5 minutes respectively, you might want to increase them with
the above command and see what happens. 

I see a difference of almost exactly 2 minutes between trying and
giving up in your log, so could this be the 2 minutes for "TCP sessions
after receiving a FIN packet", as above?

A third measure could be to investigate where *exactly* the client's
ports are mapped to by your firewall. It might even be that they are
blocked there by a packet filter, by some security software etc. Ask the
firewall's administrator and ... good luck :-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Using Removable Hard Drives for Backup

2000-12-11 Thread Chris Karakas

Rod Roberts wrote:
 
 I have seen previous articles about using hard drives for backup by
 tweaking the "reserve" percentage for the holding disk.

I also plan to use a character device (magneto-optical drive) for dumps
with AMANDA. Of course, writing a driver that makes a block device out
of a character one (as suggested by David), or waiting for the
DUMPER-API to come, might be solutions, but they do not work *now*. 

Here's what I came up with (and plan to implement): 

1) Don't change anything: no multiple configurations, nothing of the sort.
2) You have to have a tape drive (sorry Rod ;-) )
3) The tapes' capacity should be as large as that of your
MO-disks/CD-ROMs/whatever random-access medium you choose.
4) The number of disk media should be exactly the number of your tapes.
For best convenience, you should manually label them with the same names
as your tapes (you can also do this electronically, by using the
-L option to mke2fs when you first create the filesystem on them ;-) ).

Now, when you run AMANDA, you simply leave *no tape* in the drive. This
forces all the output (remember to tweak "reserve"!) to the holding
disk. After the dumps are finished, you copy them from the holding disk
to the disk medium. Then you run amflush. Oh, I nearly forgot: the
tape should have the same label as the disk you just used, so you'd
better choose the disk according to the tape's name ;-)

This solution has the advantage that the index database is correct for
both the tapes and the disks. Also, you don't change anything in your
configuration and just run AMANDA as usual. You don't have to "cheat"
AMANDA either. You get the added result of tape backups, just in case
your MO-disks/CD-ROMs/CD-RWs/DVDs got corrupted ;-) (you see here? the
point of view has changed!).

You are limited to using the same capacity for both types of media
though (which leads to just using the minimum of both).

This should get you running, until one of the other solutions becomes
reality :-)
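The manual copy step in this workflow can be sketched as a tiny shell
script. This is a dry-run demonstration with temporary directories
standing in for the real paths; in actual use HOLDING would be your
AMANDA holding disk and MEDIUM the mount point of the disk medium
labelled like the tape:

```shell
#!/bin/sh
# Dry-run sketch of the copy step described above.
HOLDING=$(mktemp -d)      # stand-in for e.g. /var/amanda/holding/daily
MEDIUM=$(mktemp -d)       # stand-in for e.g. /mnt/mo

# pretend a dump file landed on the holding disk
echo "dump image" > "$HOLDING/bacchus._dos_c.0"

# the actual step: copy every dump file, preserving modes and times
cp -p "$HOLDING"/* "$MEDIUM"/

ls "$MEDIUM"              # the copied dump file is now on the medium
# afterwards: run amflush (with the matching tape loaded) to update
# the index and get the tape copy as well
```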

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: gtar-wrapper returned 2 - backup FAILED

2000-12-11 Thread Chris Karakas

Paul Bijnens wrote:
 
 Chris Karakas wrote:
 
  ? /bin/gtar: : Cannot stat: No such file or directory
  ? /bin/gtar: : Warning: Cannot stat: No such file or directory
  | Total bytes written: 53729280 (51MB, 20kB/s)
  ? /bin/gtar: Error exit delayed from previous errors
  sendbackup: error [/usr/local/bin/gtar-wrapper returned 2]
  \
 
 2. It seems somehow that an empty element or something with an invisible
character (space? backspace? etc.) is passed to gtar, and it says "No
such file or directory".
Could this be caused by your gtar wrapper script?
 

Bingo! That indeed seems to be the case! I invoke tar in the
gtar.wrapper script as follows:

$DEBUG $GTAR "$1" "$2" "$3" "$6" "$7" "$8" "$9" "${10}" "${11}" "${12}"
"${13}" "${14}" "${15}" "${16}"

Now, the problem is that parameter no. 16 may or may not be filled,
depending on whether the "--exclude-from=" option is passed, which in
turn depends on the dumptype. ("--exclude-from=" is not parameter no.
16, but rather parameter no. 15; but if it is not passed, then ".",
which is always the last parameter passed to tar/gtar-wrapper, becomes
no. 15, and no. 16 is empty, which causes it to be interpreted as a
filename...).

Thanks Paul! :-)

So the solution will be to pass *exactly* the non-empty parameters to
GTAR. This is Korn shell (ksh). Any elegant, compact suggestions from
the scripting pros?
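One compact way to do this in POSIX/Korn shell (a sketch, not tested
against the actual wrapper) is to rebuild the positional parameters,
dropping the empty ones, before expanding "$@":

```shell
#!/bin/sh
# Sketch: drop empty positional parameters before calling GTAR.
# The for-loop list is expanded once up front, so we can safely
# shift each parameter off the front and re-append it only if it
# is non-empty.
filter_empty() {
    for p in "$@"; do
        shift
        [ -n "$p" ] && set -- "$@" "$p"
    done
    printf '%s\n' "$@"        # in the real wrapper: exec $GTAR "$@"
}

filter_empty one "" two ""    # prints only "one" and "two"
```

In the wrapper itself you would run this rebuild directly on the
script's own "$@" and then call $GTAR "$@" instead of printing.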

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: patch

2000-12-11 Thread Chris Karakas

Yann PURSON wrote:
 
 I need to apply the samba patch but I don't know how to do it...
 

cd /usr/src/packages/BUILD (or, wherever the amanda directory is located
under, e.g. for me it is in /usr/src/packages/BUILD/amanda-2.4.1p1)

patch -p0 < /home/chris/amanda/samba2-2418.diff (or whatever path
you have for the diff file)

(the samba2-2418.diff patch is against samba-2.06,
while samba2.diff is against samba-2.05)

check for reject files...;-)

I think you will not need to patch if you use 2.4.2, which has just been
released, so check your version first (and its change log file,
CHANGES), as well as that of your SAMBA.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: trouble shooting amanda -

2000-12-11 Thread Chris Karakas

Denise Ives wrote:
 
 My holding disk was flushed to tape (daily000) on Friday the 8th and I
 force a full dump to tape (daily001) on the 9th. No Amanda Mail Report was
  generated. Sda10 failed on Friday's full dump. Finally, my Sunday AM and
  Monday AM dumps did not run and a message to 'run amflush' was generated.
 
  Does anyone know why that is? I am going to run amflush now.
 

For some reason, sda10 failed to send the dumps:

 admin1.cor sda10 RESULTS MISSING

Then, for some also unknown reason, AMANDA did not send you a report,
meaning that she didn't finish (properly). This left some files (e.g.
the file "log") in the logdir directory. 

 amdump: amdump or amflush is already running, or you must run amcleanup

The next time amdump ran, it found the "log" file and refused to run.

Solution: Run amcleanup _and_ investigate why sda10 failed to send the
results.

I consider this a rather normal situation: clients go down and
connections are lost from time to time, and this happens mostly on
weekends, when backups run unattended and we need some rest (Murphy's
law ;-) ).

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Using Removable Hard Drives for Backup

2000-12-11 Thread Chris Karakas

"John R. Jackson" wrote:
 
 Unless I misunderstand some recent contributions from Marc Mengel, it
 seems that this is now quite easy to do.  See docs/VTAPE-API.
 
 Correct.  What you want is my "file:" driver that sits on top of Marc's
 work.  I should have it ready in a day or two.
 

Please announce it loudly as soon as you get that far, 'cause I'm eager
to mistreat my MO-disks as tapes too :-)

Do I need to upgrade to 2.4.2 for this to work?

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: format of amandahosts file

2000-12-11 Thread Chris Karakas

"John R. Jackson" wrote:
 
 FYI, I always recommend using fully qualified host names everyplace.
 

Sigh... that's what Sendmail wants from me too...

But what if you don't have a publicly accessible network, so that  a
domain name would be pure fantasy? Shall I choose a domain arbitrarily?

Note that I _love_ calling my computers by first name: "This is Bacchus,
the God of Wine, and this is Midas, the King whose touch turned
everything to gold, and here we have Pan, the flute-playing Satyr, and
there you see Nymphe sleeping..." ;-) (taken from Poussin's "Bacchus and
Midas", which hangs here on the wall...).

How can one be so cruel to say "This is [EMAIL PROTECTED]"? :-) :-)

I ended up using a "." ("bacchus.", instead of "bacchus"). This has kept
AMANDA, Sendmail and me quiet for some time now.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: format of amandahosts file

2000-12-11 Thread Chris Karakas

"Bort, Paul" wrote:
 Having your own DNS server inside the network makes this a lot easier.

Paul,

thanks for the hint. But I refuse to set up a DNS server just for 5-10
computers that should never serve the Internet. That's the point of
/etc/hosts, isn't it? I still remember reading the DNS HOWTO some years
ago, where the author said that one must be crazy to want a DNS server
in his small, private network (or something like that, AFAIR). And he
was right.

Now, Sendmail talks about a "simpler" world, due to everybody having a
DNS server, be it for private or public use... It was Sendmail that
forced me to introduce ".". Then I had to change the .amandahosts file.
And I still don't know if this is the reason why netscape sometimes
takes a long time to "Connect to bacchus" (or is it just that I still
don't understand mod_perl ;-)). You see, netscape has to contact apache
here, because apache is a proxy, used (through mod_rewrite) to ban
banners... but apache is happier when I have "bacchus" instead of
"bacchus.". Go figure. And then you have /etc/nsswitch.conf... and
masquerading...

If you ask me, I find this an awful mess...

Of course, a DNS server would "simplify" things. But besides the
theoretical point of "Occam's razor" (what is simple?), we have all
these security announcements for bind, that make the blood chill, even
with the most nuclearly hardened packet filter...

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: sendsize.debug, sendbackup.debug, killpgrp.debug

2000-12-11 Thread Chris Karakas

Denise Ives wrote:
 
 Do you really think we ran out of tape?
 
 Subject: daily AMANDA VERIFY REPORT FOR daily001
 
 Tapes: daily001
 No errors found!

No. Otherwise, the tape you just verified would give you some error
("not at start of tape", 0+0 records in, etc.) when amverify reached
its end and there was still further data to read.

My experience is that it is mostly some hardware error that prevented
AMANDA from finishing. My experience is also that it is not worth
chewing over the logs in such a case (unless you want to debug
AMANDA). It just happens from time to time that AMANDA finishes without
a report. Of course, there is a cause for every effect (tell this to a
physicist). And we are talking about deterministic automata (don't
we?...). But if it doesn't happen very often, why bother? Run amcleanup,
then amflush, run AMANDA next time - and be happy :-) 

PS. Get a 9th and a 10th tape for these rare, but exciting Murphy cases
;-) And let the real backup on tape take place in the middle of the
week, when you can at least check it the _next_ day, not after 2 days,
on Monday...

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: daily AMANDA MAIL REPORT FOR December 11, 2000 (fwd)

2000-12-11 Thread Chris Karakas

Denise Ives wrote:
 
 Ok here we go again. I ran amcleanup and then re-ran today's amdump.
 I got another fail of dump for on sda10.
 
 p.s. -this was the quickest dump I've seen run so far. It took less than
 10 minutes - is that possible or is there another bug?
 

Again, the debug files should give more details. Did you check for
hardware problems on the disk that failed (sda10)? 

I see that these are just level 1 backups, which are usually small, so
the fact that they did not take very long could be normal.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Using Removable Hard Drives for Backup

2000-12-11 Thread Chris Karakas

"John R. Jackson" wrote:
 
 When available, you'll need to build from the latest 2.4.2 (or 2.5) CVS
 source tree, which will have things beyond the 2.4.2 release tar image.
 

Sigh...another O'Reilly book I'll have to buy. CVS - no matter how often
kind members of this list post the cryptic incantations for this temple,
I _will_ want to know it all when I enter it. I just ask myself why I
didn't buy all those fine manuals (BTFM) at once... ;-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: tapetype SEAGATE-SCORPION-40

2000-12-11 Thread Chris Karakas

Denise Ives wrote:
 
 driver: state time 3104.913 free kps: 15400 space: 4837972 taper: writing
 idle-dumpers: 4 qlen tapeq: 0 runq: 1 stoppedq: 0 wakeup: 86400
 driver-idle: no-diskspace
   

It just caught my attention: "no-diskspace" - the driver is idle
because the holding disk has no free space left.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Seeking Recommendations on Tape Backup configs

2000-12-11 Thread Chris Karakas

"John R. Jackson" wrote:
 
 However, we've now outgrown even these devices (sigh) and I'm madly
 working on "multi-tape" (tape overflow).  I hope to have it functional
 in a couple of weeks (before my end of semester backups have to be done).

Does this mean AMANDA being able to span multiple tapes?

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: How do I change the ctime of a vfat file?

2000-12-08 Thread Chris Karakas

Martin Brown wrote:
  How do I change the ctime of a vfat file?
 try moving it

Sorry, I've tried it, but this does not change the *ctime* of a vfat
file. Check with ls -lc to see yourself. Any other ideas?
 
-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Rebuilt Tape Server

2000-12-08 Thread Chris Karakas

Harri Haataja wrote:
 
 I don't know if there is a "redhat manual" somewhere but I might like to
 see it if there is.  It's much easier to use mechanisms in place than to
 try to sew things together with your own scripts.

Use SuSE :-)
They have a 500+ page manual that comes in hardcopy as well as online
on CD. You don't need "your own scripts" (well, most of the time); it
all fits together very well.

And here is a distribution-independent solution: Why not use the source
rpms of AMANDA, the ones that your own distribution uses? Then you could
see, in the dif file, all the parameters they used to compile AMANDA and
make changes and/or additions. You generally proceed as follows (paths
valid for SuSE):

0) Install the source rpm of AMANDA. Check the dif file (e.g.
/usr/src/packages/SOURCES/amanda-2.4.1p1.dif) and make the appropriate
changes there (regarding the AMANDA user and that sort of thing). Since
this is in diff format, when you add a line (e.g. when you want an extra
library to be linked and you declare it on an extra line in the dif
file), you will have to adjust the numbers:

@@ -0,0 +1,57 @@
   ^^
(You don't need this if you don't add extra lines)

1) rpm -bp /usr/src/packages/SPECS/amanda.spec

2) patch, if necessary (the samba2-2418.diff patch is against
samba-2.06, while samba2.diff is against samba-2.05):

cd /usr/src/packages/BUILD
patch -p0 < /home/chris/amanda/samba2-2418.diff

With 2.4.2, you probably don't need this (and the next) step.

3) check for reject files...

4) configure and compile (don't forget the --short-circuit option!):

rpm -bc --short-circuit /usr/src/packages/SPECS/amanda.spec

5) install:

rpm -bi --short-circuit /usr/src/packages/SPECS/amanda.spec

This method is a little cumbersome, but it gives you some control over
what your distribution did and what it did not (see step 0). 

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Verify Tape contents

2000-12-08 Thread Chris Karakas

Daniel Schwager wrote:
 
 Hi together,
 
 is it possible to make a verify run with amanda
 (checking the contents of the tape against the contents of the HDD)?
 

Use amverify. See the man page. If you use tar to do the backups,
amverify tests very thoroughly:

---snip from a previous posting

amverify will only check a DUMP image if it was created on a
similar system as that on which you run amverify.  Even then, restore
will only check the dump header.  tar, OTOH, will go over the whole
tar image.

---snip

But this tests whether the files really made it to tape and are readable
and not damaged, not whether they are exact copies of their counterparts
on the disk. If your motive is to make sure that the backups are O.K.,
then amverify is your friend. Comparing CRCs, on the other hand, may not
check the whole image, but only the length of its files.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: format of amandahosts file

2000-12-08 Thread Chris Karakas

mnk wrote:
 1 - pondering keemiya:/dev/dsk/c1t3d0s6... next_level0 -11299 last_level -1 (due for 
level 0) (new disk, can't switch to degraded mode)

This is normal. It means: "I have to switch to degraded mode, but I
can't, because this is a new disk and, since I haven't done any full
backups of it yet, I cannot compute any incrementals. But in degraded
mode, I _have_ to do only incrementals (this is my default behaviour,
which you can change by setting the reserve parameter to less than 100
in amanda.conf)."

This will disappear, as soon as AMANDA has done a full backup of
/dev/dsk/c1t3d0s6.

 2 - How the heck do I start the amanda daemon on the client!!
 

You must have configured inetd correctly on the client - RTFM ;-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: ERROR: 202.85.165.88: [addr 202.85.164.38: hostname lookup failed]

2000-12-08 Thread Chris Karakas

 richard wrote:
 
 *** Default servers are not available
 

Do you have a DNS server? It seems that you don't. If you really don't
have one, then you should update your /etc/hosts file (or, more
precisely, whichever alternative to DNS is listed in your
/etc/nsswitch.conf file in the entry "hosts:") with entries for
202.85.164.38 and all the other addresses that AMANDA will need. If you
have a DNS server, then you will have to check /etc/resolv.conf: the DNS
server should be listed in the entry "nameserver". If nothing works,
it's a deeper DNS problem at your site.
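For the no-DNS case, the /etc/hosts entries might look like this. Only
the addresses come from the error message; the host names here are made
up, so use your real ones:

```
# /etc/hosts -- map the addresses AMANDA needs to names
# (host names below are examples)
202.85.164.38   client1.example.com     client1
202.85.165.88   backuphost.example.com  backuphost
```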

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: remote backup pb...

2000-12-08 Thread Chris Karakas

Yann PURSON wrote:
 
 amandad: waiting for ack: timeout, retrying
 amandad: waiting for ack: timeout, giving up!
 amandad: pid 23636 finish time Fri Dec  8 03:13:27 2000
 
 What I don't understand is that if I lauch a backup only for this
 server (argon) it works fine...
 

It could be that, if you start more backups, argon has to wait longer,
and then for some reason the connection times out. Any firewalls and/or
masquerading/NAT in between? 

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: amanda to blame for NT crashes?

2000-12-07 Thread Chris Karakas

David Woodhouse wrote:
 
 [EMAIL PROTECTED] said:
   Any more ideas now? How can this "ATTR_ARCH" flag be reasonably used
  here?
 
 The ATTR_ARCH flag doesn't get mapped to any standard Unix flag. Mapping it
 to the executable flag would be very strange.
 

I don't think so: this is exactly the approach taken by the SAMBA
people. There you have files on ext2 (usually) which have to behave like
Windows files (to put it very simply). They decided to map the archive,
hidden and system bits to the user, group and world execute bits
respectively (see the SAMBA directives map archive, map hidden, map
system). If this is possible to do when we have ext2 files and we want
them to "behave" like vfat, then it should be possible also when we have
vfat files (i.e. with "real" archive, hidden and system bits) and want
them to be mounted under some Linux directory (which SAMBA could then
use, read the mapped bits, find out the "real" bits and use the archive
bit to do correct incrementals on vfat). I think the mount command and
SAMBA should be consistent on this topic. The executable flags are
unused when you mount a vfat partition. They currently just get filled
according to the umask, but they are meaningless. Using the above method
they at least can be used for something (very) useful.

In one sentence: I miss the "map archive/hidden/system" options in the
vfat driver of the mount command in Linux.
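For reference, on the SAMBA side the mapping is just three share-level
directives. A minimal smb.conf share section might look like this (the
share name and path are examples):

```
[dosdisk]
   ; hypothetical share exporting a mounted vfat partition
   path = /dos/c
   ; DOS attribute <-> Unix execute-bit mappings:
   ;   archive -> user execute, system -> group execute,
   ;   hidden  -> world execute
   map archive = yes
   map system = yes
   map hidden = yes
```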

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: amanda cron job doesn't start?

2000-12-06 Thread Chris Karakas

Rainer Hofmann wrote:
 
 Hi,
 
 any explanations why that cron job for user amanda doesn't even start:
 
 PATH=/sbin:/bin:/usr/bin:/home/amanda/bin:/usr/local/sbin
 
  0 18 * * 1-5 amcheck merten && amdump merten
 

My first guess is that "amcheck merten" gives an error (the command
after && will be executed only if the previous one gave no error). I
suggest you run it by hand first and see what happens. Did cron send
mail with the errors?

Further: Is this the general crontab, or the crontab of the amanda user?
Who is allowed to run amdump? I mean, the way you have set things up, is
amdump run as root? If so, you should change it, doing 'su amanda -c
"amdump merten"' (if you use the general crontab, or root's). For more
details on cron: man cron, man crontab.
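If the entry lives in root's crontab rather than amanda's, the same
chain can be run via su. A sketch ("merten" is the configuration name
from the entry above):

```
# root's crontab: run the check, then dump only if the check
# succeeded, all as the amanda user
0 18 * * 1-5 su amanda -c "amcheck merten && amdump merten"
```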

 I'm using joe as editor instead of vi.
 

This is totally irrelevant - are you superstitious? :-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: problem with amdump

2000-12-05 Thread Chris Karakas

Vinche wrote:
 
   fatboy /lib lev 0 FAILED [Request to fatboy timed out.]

I suspect the estimates took too long. Increase etimeout in amanda.conf. 

   planner: Last full dump of fatboy:/usr/apache/archive on tape DailySet107
 overwritten on this run.

Either you demand full backups too often, or use too few tapes (which is
actually an equivalent way of saying the same thing). Increase dumpcycle
in amanda.conf and/or use more tapes.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: GNU tar estimates for vfat filesystems (Was: How do I check level 1 sizes?)

2000-12-05 Thread Chris Karakas

Alexandre Oliva wrote:
 
 On Dec  1, 2000, Chris Karakas [EMAIL PROTECTED] wrote:
 
  Do you mean that just by undefining GNUTAR_LISTED_INCREMENTAL_DIR,
  AMANDA is going to call tar using --incremental, instead of
  --listed-incremental?
 
 That's right.
...
 However, you'll still be missing the
 `--newer' flag, that is passed when GNUTAR_LISTED_INCREMENTAL_DIR is
 used.  You may have to work around this problem by reading/storing
 timestamps in the `filename' argument.

I am in the process of undefining GNUTAR_LISTED_INCREMENTAL_DIR and
recompiling. I have had a look at the sources and, as I understand it,
the --newer flag is passed *only* when I *don't* use
GNUTAR_LISTED_INCREMENTAL_DIR. From client-src/sendsize.c, around line
1220:

#ifdef GNUTAR_LISTED_INCREMENTAL_DIR
 " --listed-incremental ", incrname,
#else
 " --incremental",
 " --newer ", dumptimestr,
#endif

So, actually, the --newer flag *will* be passed if I undef
GNUTAR_LISTED_INCREMENTAL_DIR and I don't miss anything, do I?

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: amanda to blame for NT crashes?

2000-12-05 Thread Chris Karakas

(Joi and David, please just read on)

Alexandre Oliva wrote:
 
 On Dec  4, 2000, Chris Karakas [EMAIL PROTECTED] wrote:
 
  when you use SAMBA for the vfat partitions of your dual boot system,
  it does not work.
 
 What do you mean?  It works for me.  Or are you talking about backing
 up vfat partitions while running GNU/Linux, in which case Samba isn't
 used at all; all you need is GNU tar?
 

I mean, I have tried both, without success. Here's what I tried:

Common part in both situations: The vfat partitions are mounted under
/dos as /dos/c, /dos/d etc. This is done in boot time from fstab:

/dev/hda3   /dos/c   vfat   defaults,uid=500,gid=101,umask=002   0   0
/dev/hda5   /dos/d   vfat   defaults,uid=500,gid=101,umask=002   0   0

etc.

uid is user chris, gid is group windows. AMANDA is a member of the
group. This way *all* files get the permissions -rwxrwxr-x, i.e. they
become executables even if they are not (probably due to the umask I
use). (That's the first thing that annoys me about mount: I can't map
the archive bit to the fictitious executable bits, as SAMBA is able to
do when the files are on ext2 partitions.)

Now, for the two cases I have tested:

1. The Linux/AMANDA server becomes a SAMBA server too. The /dos/c,
/dos/d directories are exported as C and D respectively. In disklist I
would have //bacchus/c and //bacchus/d as the directories to be backed
up with tar, where bacchus is the Linux/AMANDA/SAMBA server.

2. No SAMBA in this case. I just tell AMANDA to use tar to backup the
directories /dos/c, /dos/d etc.

In both cases, incrementals are almost as large as full backups. It was
not always so: I had to boot Windows a few times (which I do not do very
often) to get this behaviour, so it seems to be the problem of inodes
on vfat not being constant between mounts (since some kernel version, I
think 2.2.5). But this is also curious: even if I do not boot between
two successive AMANDA runs (i.e. the vfat filesystems stay mounted with
the same inode numbers), a level 1 incremental done the day after a
level 0 is almost as big as the level 0. I can't figure out what's
going on. From now on, I have set "dumpcycle 0" for the vfat
filesystems.

 Which has just given me yet another idea for you to try: set up your
 GNU/Linux box as a Samba server, and run your backups pretending the
 GNU/Linux box is actually running MS-Windows, i.e., arrange for it to
 be backed up through Samba.  This will use Samba's mechanism of
 creating incrementals, which are quite different from those of GNU
 tar, and might get you around the problems you're facing with vfat,
 GNU tar, the Linux kernel or whatever :-)
 

This is exactly my case 1. Joi Ellis may remember that I contacted him
off-list about this problem 6 months ago. I came to the conclusion that
I could not map the archive bit to some (even fictitious) execute bit,
which SAMBA would then use as an archive bit to compute incrementals,
and that this was the reason the incrementals were so big. So the
problem seems to be the vfat driver in mount, which cannot map the
archive bit to anything meaningful, although it sets it correctly,
according to David Woodhouse [EMAIL PROTECTED] on the
linux-kernel mailing list:

---snip

We do, however, correctly set the archive bit whenever we modify or
create 
a file on a FAT filesystem, so it should be usable for its intended
purpose.

You just need to make sure your backup program uses the ATTR_ARCH flags
to 
decide whether to back up each file, and resets the flag after/before
doing the backup.

---snip

Any more ideas now? How can this "ATTR_ARCH" flag be reasonably used
here?

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: amanda to blame for NT crashes?

2000-12-04 Thread Chris Karakas

Alexandre Oliva wrote:
 
  I don't care if the mechanisms are out of AMANDA's influence - this
  is a design issue.
 
 Indeed.  Amanda is designed to be a backup manager, that knows how to
 use a couple of existing backup tools.  If none of them fit your
 needs, well, then you just have to go find some backup program capable
 of backing up stuff you need and plug it into Amanda.  See?  It's not
 Amanda's fault.
 

I see. I will have to devise something myself to help me out of this. I
admit I had not expected this.

  If you say that AMANDA can backup SAMBA shares, people will believe it
  and be happy.
 
 And they can believe it.  I do it every day.  Many others do it.  Just
 because it fails for some unlucky souls, it doesn't mean it doesn't
 work at all.  Most likely, it's some detail in these souls' setups
 that can be easily fixed.


That "detail" might be writing a wrapper script, computing datestamps,
reinventing the wheel... :-)
 
  The case that does not work is when you have a dual boot system
  (Win/Linux) and the SAMBA shares you are trying to backup with
  AMANDA (when running Linux) are the vfat partitions of Windows.
 
 Ok, so how about this idea: get Plex86 or VMWare, boot Windows atop
 GNU/Linux, share the disks you want to back up and tell Amanda to back
 them up using Samba.  Then, wait for the blue screens :-) :-)
 

I am glad I had the same idea before I read your answer (see my other
posting) ;-) Believe me, I would try it, but for the moment it is out of
the question because of the computing power needed by VMware (the backup
server is a 486DX-133...).

 Except for this
 minor detail people keep forgetting, which is that Amanda just
 *manages* backups, it doesn't *create* them.
 

I will have to make this statement my daily mantra, until I get it.
Thanks for all the input.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: amanda to blame for NT crashes?

2000-12-04 Thread Chris Karakas

"John R. Jackson" wrote:
 
 You understand a chance as a curse: AMANDA _has_ the qualities to become
 a real, integrative, superlative backup tool.  ...
 
 So explain to me exactly what you think Amanda should do to solve this.
 And keep in mind that the design philosophy is that it uses other tools
 and that it does **not** do the work itself.

Separate "estimator programs" from "backup programs". As it is now, if I
decide to use tar for my backups, it will also be used for the
estimate phase (which is where my problem is: estimation of
incrementals). But tar may not estimate correctly in my case,
although it may be perfect at backing up.

So the idea is to give us the chance to write our own "estimator
function": a script that gets a filename, a date and/or a backup level
(and perhaps other parameters) as input and gives a boolean as
output ("yes, it should be backed up" or "no, it shouldn't"). This
script would have "default" code in it: if tar is used, then tar is
invoked to determine whether the file should be backed up; if dump is
used, then dump is invoked for the estimate; otherwise... custom code
would go here. If the user decides to put custom code in there, he
should comment out the default code. He can then check the file's
inodes, ctimes, the phase of the moon or whatever else he deems
appropriate to decide whether it should be backed up or not, at the
given incremental level and/or date.

The way it is now, I have to change tar itself, writing a wrapper
around it that takes into account who started it and whether it was for
estimation, backup, or restore, etc. This makes the task even more
formidable than it already is (for unskilled programmers like me).

By the way: why shouldn't I be able to use dump for the estimates and
tar for the backups (and vice versa)? This solution (assuming it is one,
i.e. assuming that dump estimates correctly where tar does not) is less
general than the above, but should take only another parameter in
amanda.conf and some case checking to implement with the existing code.
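A minimal sketch of such an "estimator function" in shell. The name and
interface are hypothetical (this is not an existing AMANDA hook): given
a file and a reference timestamp file, it prints a boolean:

```shell
#!/bin/sh
# Hypothetical "estimator function" sketch -- NOT an existing AMANDA
# interface.  Decide whether FILE belongs in an incremental by
# comparing its mtime against a reference timestamp file SINCE
# (e.g. one touched at the time of the last lower-level backup).
# Usage: should_backup FILE SINCE  -> prints "yes" or "no"
should_backup() {
    file=$1 since=$2
    if [ "$file" -nt "$since" ]; then
        echo yes
    else
        echo no
    fi
}
```

The real hook would also receive the backup level and could consult
inodes, ctimes, or anything else, as suggested above.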

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: amanda to blame for NT crashes?

2000-12-04 Thread Chris Karakas

Jon LaBadie wrote:
 
 On Sat, Dec 02, 2000 at 03:28:49AM +0100, Chris Karakas wrote:
 
  If you say that AMANDA can backup SAMBA shares, people will believe it
  and be happy. People like me (a mathematician...) will believe it so
  much, that they will eventually construct a case that does not work -
  and will get a problem. The case that does not work is when you have a
  dual boot system (Win/Linux) and the SAMBA shares you are trying to
  backup with AMANDA (when running Linux) are the vfat partitions of
  Windows. Just because exceptions confirm the rules, does not mean that
  we should not point them out.
 
 I'm missing something here.  If I have it correct you have some
 client systems that run windows (maybe some linux clients too).
 Your amanda backup host runs linux, but is also bootable into
 windows.  Further, it is the windows partitions on the backup
 host you are having difficulties with.  Am I correct?
 

Yes.

 If so, I don't understand where samba comes into play.  Window's
 is not running to share the vfat partitions so you must be
 simply mounting them as vfat file systems under linux.  In that
 case I would think normal tar would work.  It does for me on my
 dual-boot Solaris box with windows partitions mounted under that os.
 No need for samba or smbtar or ...
 

I've been trying for 6 months now to get the Windows partitions on the
backup server to be backed up correctly while running Linux and AMANDA.
I first tried it as follows: I told AMANDA to back up the SAMBA shares
that a SAMBA server made available on the backup server. The SAMBA
server got those shares from mounted vfat partitions. This is where
SAMBA came into play. It did not work - incrementals were almost as big
as full backups. Then I said to myself, "why have this SAMBA stuff at
all? Just mount the vfat partitions and tell AMANDA to use tar and back
up the mount point directory". This did not work either, for the same
reason. I am still looking for a solution.

 
  You understand a chance as a curse: AMANDA _has_ the qualities to become
  a real, integrative, superlative backup tool.
 
 Some would say it already is :))
 

Well, maybe for many people, but still not for me, because it cannot
handle the above situation sufficiently. And, now that all this
discussion is going on, I know I am not alone - another participant said
he gave up on backing up Windows partitions just because of this.

 
 Be aware that amanda is NOT a backup program.  It is a backup manager.
 There are no backup programs nor recovery programs supplied with it.
 Only interfaces to other, non-supplied programs.
 
 It was also created as a unix backup manager, not for windows, samba,
 oracle, mvs, or anything else.  That it can and has been used for
 these purposes is a compliment to its design.
 
 To suggest that the people who maintain amanda should be responsible
 for the programs that it schedules and manages is analogous to saying
 the people who wrote the cron deamon on unix are responsible for
 debugging the programs cron kicks off.
 

O.K, I have understood that. Now I want you to understand that I have a
problem (the above one) and I cannot find a solution with AMANDA. When I
read the "AMANDA chapter" on backupcentral.com, stating that "Recent
versions can also use SAMBA to back up Microsoft Windows
(95/98/NT/2000)-based hosts", I thought "Fine! That's the _ultimate_
solution!". It turned out that the above statement may be technically
correct, but somewhat misleading to the unaware: when you use SAMBA for
the vfat partitions of your dual boot system, it does not work. It seems
that either you have to run Windows on those partitions at the same time
that you run Linux and AMANDA to back them up (which is impossible on a
dual boot system... eh, OK, maybe VMware could solve this "detail") or
keep the SMB shares that you want to back up on ext2, not vfat,
filesystems (also impossible for a dual boot system, as long as Windows
does not run on ext2 ;-)). It does not work even if you abandon SAMBA
and just try to back up a vfat directory. The problem is estimating the
incrementals and has more to do with tar, vfat, the kernel, or whatever,
than AMANDA itself but I insist on refining the above statement by
pointing out this - admittedly very special - case, where it is just not
true. Believe me, I will be very happy to find out that this refinement
is not necessary and that I was wrong after all ;-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: GNU tar estimates for vfat filesystems (Was: How do I check level 1 sizes?)

2000-12-04 Thread Chris Karakas

Alexandre Oliva wrote:
 
 I don't know how different it would be, but, if you eventually find
 out it's too much, tell Amanda to use a wrapper script instead of GNU
 tar in which you check whether the directory being backed up is in a
 vfat filesystem, then replace the `--listed-incremental filename'
 arguments with `--incremental'.  However, you'll still be missing the
 `--newer' flag, that is passed when GNUTAR_LISTED_INCREMENTAL_DIR is
 used.  You may have to work around this problem by reading/storing
 timestamps in the `filename' argument.
 

Thank you, I will have to try all this out. 
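
For what it's worth, the flag rewriting Alexandre describes could be
sketched roughly like this (a hypothetical, untested wrapper: the
function name, the tar path and the vfat check are my assumptions, not
anything from the thread):

```shell
#!/bin/sh
# Hypothetical sketch of a tar wrapper. rewrite_args replaces the pair
# "--listed-incremental FILE" with a single "--incremental", since
# vfat's unstable inode numbers defeat the snapshot mechanism behind
# --listed-incremental.
rewrite_args() {
    new=
    skip_next=0
    for arg in "$@"; do
        if [ "$skip_next" -eq 1 ]; then
            skip_next=0          # drop the snapshot FILE argument
            continue
        fi
        if [ "$arg" = "--listed-incremental" ]; then
            arg=--incremental
            skip_next=1
        fi
        new="$new $arg"
    done
    printf '%s\n' "${new# }"
}

# A real wrapper would first test whether the --directory argument
# lives on a vfat filesystem (e.g. via "df --output=fstype" on Linux)
# and only then rewrite, finally handing off with something like:
#   exec /bin/tar $(rewrite_args "$@")
rewrite_args --create --directory /dos/c \
    --listed-incremental /var/lib/amanda/gnutar-lists/bacchus_dos_c_1 \
    --sparse .
```

The demo call prints the rewritten argument list, with --incremental in
place of the listed-incremental pair. Note that the naive word-splitting
hand-off shown in the comment would break arguments containing spaces.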

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Firewalls and other joyous things..

2000-12-01 Thread Chris Karakas

Dan Wilder wrote:
 
 Isn't this a bit off-topic for amanda-users?
 

Not at all. It deals with the problem of how to use AMANDA to back up a
client outside the firewall when using masquerading, NAT etc.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: access as USERNAME not allowed!!

2000-12-01 Thread Chris Karakas

Casile Antonino wrote:
 
 Thanks to everybody who replied to my E-mail  unfortunately all the
 advices had no effect .. I keep on getting the same error.
 To make things a little bit clearer I installed amanda using the rpms
 given with Linux RedHat7.0 for i386. I think that the rpms are compiled
 with the option  --with-amandahosts on. In any case they create a file
 /root/.amandahosts upon installing.
 

The .amandahosts file must be in the home directory of the AMANDA backup
user. If your AMANDA backup user is operator and its home directory is
/home/operator, then your .amandahosts file should be
/home/operator/.amandahosts.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: GNU tar estimates for vfat filesystems (Was: How do I check level 1 sizes?)

2000-11-30 Thread Chris Karakas

Andreas Herren wrote:
 
 I had the same problem, since I reboot my machine at least once a day,
 i.e. between backup runs. So my solution was to avoid the use of tar
 with the "--listed-incremental=FILE" option and to use "--incremental"
 instead.
 
 To achieve this I had to change the file config/config.h near line 628,
 undefining
 GNUTAR_LISTED_INCREMENTAL_DIR as shown below and recompile.
 

Thanks for the answer! Unfortunately, mails didn't come the last two
days, then they came all at once (~150!). Although I checked for answers
to my problem first, I missed yours, so I wrote a first reply without
taking it into account.

It is too late to try your solution tonight, so I will rather ask you
some questions ;-)

Do you mean that just by undefining GNUTAR_LISTED_INCREMENTAL_DIR,
AMANDA is going to call tar using --incremental, instead of
--listed-incremental? And, by the way, what's the difference between the
two?
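
For reference, the two modes can be contrasted with a small experiment
(a sketch assuming GNU tar; the paths are scratch ones, not AMANDA's):

```shell
#!/bin/sh
# Contrast GNU tar's two incremental modes (assumes GNU tar).
set -e
work=$(mktemp -d)
mkdir "$work/data"
echo one > "$work/data/file1"

# --listed-incremental keeps device/inode/mtime state in a snapshot
# file between runs; files whose metadata changed get re-dumped. On
# vfat, inode numbers are synthetic and unstable, so everything looks
# changed on every run.
tar --create --listed-incremental="$work/snap" \
    --file "$work/level0.tar" -C "$work" data

# --incremental (the old GNU format) stores per-directory member lists
# inside the archive itself and takes the cutoff time from --newer.
tar --create --incremental --newer "1970-01-01" \
    --file "$work/oldstyle.tar" -C "$work" data
```

The first run above also creates the snapshot file, which is exactly
the state that goes stale when inode numbers change between mounts.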

What do you mean by saying that "Linux-Directories with changes may use
more backup-space"? Does this mean "a little more backup-space", or "so
much more backup-space, as to render incrementals on Linux directories
almost as large as full backups"? In the latter case you are "exorcizing
the devil with Beelzebub"...

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: GNU tar estimates for vfat filesystems (Was: How do I check level 1 sizes?)

2000-11-30 Thread Chris Karakas

Conrad and David,

thank you very much for your replies. I upgraded to tar 1.13.18, but
even this newest version does not remedy the problem of incorrect
computation of incrementals on mounted vfat filesystems on my AMANDA
server.  

I think the time of truth has come: AMANDA *cannot* back up Windows
filesystems correctly, despite the many claims to the contrary! It
will *not* compute incrementals correctly. This is something that those
who evaluate it as a possible candidate for Linux _and_ Windows backups
should seriously take into account. I have been struggling with this for
the past six months without success :-(

Consider the following situation: you try to minimize the Windows
footprint on your network. You migrate to Linux and SAMBA. Windows is
run only on the clients: all your data and applications are on the vfat
filesystems of the SAMBA server which runs Linux. Now you want to back
all this up. Your backup server is the SAMBA server itself, running
AMANDA. There are Linux filesystems on the server, Linux filesystems on
the Linux clients, vfat filesystems on the server and vfat filesystems
of the Windows clients. The clients with the vfat filesystems run on
Windows, the server runs on Linux. You don't care that much about the
vfat files on the Windows machines, because these are easily reproduced.
You *do* care about your data and apps on your Linux server, be it ext2
or vfat.

In this situation, AMANDA will *fail* to do its job for the vfat files
on the server! The incrementals on these files may (or will) be computed
wrongly. They may (or will) be almost as large as full backups, taking
up so much space on your tapes that almost all of the other full
backups will be delayed, or not done at all. Of course, you can increase
your dumpcycle, use larger tapes etc., but this is almost equivalent to
doing full backups of all vfat filesystems every day! No matter how you
declare the files in the disklist - the SAMBA way (i.e. as shares
exported from the SAMBA server) or the "usual" way (i.e. as mounted
filesystems on the server) - the result is the same: the vfat files
on the boxes running Windows may be backed up correctly, but the vfat
files on the box running Linux (your SAMBA and AMANDA server) will have
their incrementals computed incorrectly. And those are your most
important files! At least this is the case with the latest (1.13.18) and
some older (1.12) tar versions.

Now, JJ will say "this is tar's problem" and will be correct in
principle. tar's developers will say "this is vfat's problem, due to its
lack of inodes" and will also be correct in principle. I, as an AMANDA
user who likes it because of its integrative capabilities, will say "I
don't care whose problem it is; I just see that something so simple
(every backup program can compute incrementals correctly) still cannot
be solved with AMANDA". I understand that it is a design issue: AMANDA
wants to be independent of the way the estimates are done. But in this
case it becomes dependent on tar, or the Linux kernel. And even if I had
written *the* correct estimator, I would not be able to tell AMANDA to
use it. So I will have to rely on native Windows backups for the vfat
filesystems on the AMANDA server - saying goodbye to simplicity :-(

Comments and workarounds are very welcome.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net


Conrad Hughes wrote:
 
 My suspicion is that GNU tar, which I use in the version 1.12, somehow
 cannot compute incrementals right, for filesystems of vfat type that
 are mounted the way I described above...(?)
 
 Even RedHat have a later version of GNU tar than 1.12 and (honestly)
 they're not known to release the latest versions of programs.  Upgrade
 tar to at least (GNU tar) 1.13.17; the ones below this are likely to
 cause all sorts of weird problems.
 
 I'm using 1.13.17 and have the same problem: incrementals on vfat just
 don't work, they're effectively the same as full backups.  tar seems to
 think that every file has been modified since the last backup.
 
 Searches elsewhere suggest that it stems from a rewrite of the kernel
 vfat support between kernels 2.10 and 2.11, but I couldn't find a
 solution mentioned anywhere (*).  It's very frustrating.  If anyone else
 has an idea what to do I'd be ecstatic: this added hours to my daily
 incrementals until I just gave up on backing up Windows.
 
 Conrad
 
 * If I understand correctly (and there's no guarantee that I do), the
   vfat change was to ensure that a file on a vfat FS would have the same
   inode number for the duration of a single mount; inodes need to be
   constructed in some manner on vfat because it doesn't actually have
   real inodes, and the previous mechanism meant that a file's inode
   wouldn't be constant (for example a rename would change it; this
   caused much gnashing of teeth among one crowd of people).  This new
   mechanism means inod

GNU tar estimates for vfat filesystems (Was: How do I check level 1 sizes?)

2000-11-23 Thread Chris Karakas

"John R. Jackson" wrote:
 
 I want to check the level 1 sizes that AMANDA reports.  ...
 
 First of all, let's be clear here.  It's not Amanda.  It's your dump
 program that is reporting sizes.  Amanda just uses them.
 

as I wrote in a previous posting, I wanted to wait a little and observe
what is going on. I think I can tell you a little more about my problem
now: It seems that *all* estimates for *vfat* filesystems are wrong, not
just the ones for /dos/f, /dos/n and /dos/o. The vfat filesystems are
mounted at boot time according to the fstab file, which looks like this:

/dev/hda3   /dos/c   vfat   defaults,uid=500,gid=101,umask=002   0   0
/dev/hda5   /dos/d   vfat   defaults,uid=500,gid=101,umask=002   0   0
/dev/hda6   /dos/e   vfat   defaults,uid=500,gid=101,umask=002   0   0

etc.

The uid 500 is the user chris and the gid 101 is the group windows. The
AMANDA user, amanda, belongs to the group windows. When I do ls -l for
the /dos directory, I get

drwxrwxr-x  51 chris    windows  16384 Ιαν  1  1970 c
drwxrwxr-x   8 chris    windows  16384 Ιαν  1  1970 d
drwxrwxr-x   8 chris    windows  16384 Ιαν  1  1970 e

etc.

(Ιαν is Jan in Greek). The files inside the directories c, d, e etc.
have their normal datestamps.

My suspicion is that GNU tar, which I use in the version 1.12, somehow
cannot compute incrementals right, for filesystems of vfat type that are
mounted the way I described above...(?)

 The first place I would look is /tmp/amanda/sendsize*debug on the client.
 That will show the command used to generate the estimate and everything
 else that went on.
 
 I assume you do not have "record no" floating around?
 

No "record no" in the relevant dumptypes. But also no
/tmp/amanda/sendsize*debug here. There is a /var/lib/amanda/debug
directory, though (correctly, since I specified it as the debug
directory in amanda.conf). I have attached the sendsize.debug file from
a somewhat older AMANDA run.
There is no /var/lib/amanda/gnutar-lists/bacchus_dos_f_1.new, but a
/var/lib/amanda/gnutar-lists/bacchus_dos_f_1. There are lines like

775 6935813 ./original/allied/floppy
775 6953074 ./original/allied/floppy/dosodi
775 6953075 ./original/allied/floppy/ibmlan.os2
775 6953076 ./original/allied/floppy/info


but I cannot see anything suspicious in them. The very first line
contains the epoch, 973836145, meaning Fri Nov 10  7:02:25 2000.

The only thing that catches my attention is the order in which the
estimates are printed. For example, it says

calculating for amname '/dos/f', dirname '/dos/f'

and immediately after:

sendsize: getting size via gnutar for /dos/e level 0
sendsize: running "/usr/lib/amanda/runtar --create --directory /dos/e
--listed-incremental /var/lib/amanda/gnutar-lists/bacchus_dos_e_0.new
--sparse --one-file-system --ignore-failed-read --totals --file
/dev/null ."
Total bytes written: 125317120

On that AMANDA run, in amdump.1, I saw:

got result for host bacchus disk /dos/f: 0 - 213170K, 1 - 191710K, -1 - -1K
got result for host bacchus disk /dos/n: 0 - 779500K, 1 - 322920K, 2 - 322840K
got result for host bacchus disk /dos/o: 0 - 170520K, 1 - 114750K, -1 - -1K

These 3 filesystems were backed up on level 0 one or two days before and
have not been touched since. 

Now, today, in amdump.2, I see:

got result for host bacchus disk /dos/f: 0 - 213530K, 1 - 189890K, 2 - 189540K

AMANDA wanted to bump to level 2, but even that was almost as large as
the level 1! (I now see it also for /dos/n in the example above...) It
_has_ to have something to do with vfat, GNU tar and mount, or not?

 Are you using dump or GNU tar?  If dump, is /etc/dumpdates accessible to
 the Amanda user and being updated?  If GNU tar, the same questions apply
 to the listed incremental file (and all the subdirectories down to it).
 

GNU tar. I suppose the listed incremental files are:

-rw-rw   1 amanda   disk 5072 Nov 11 07:17 bacchus_dos_c_0
-rw-rw   1 amanda   disk 5072 Nov 22 04:22 bacchus_dos_c_1
-rw-rw   1 amanda   disk11291 Nov 14 12:03 bacchus_dos_d_0
-rw-rw   1 amanda   disk11291 Nov 22 04:25 bacchus_dos_d_1

etc.

I see no problem here.


 Does amcheck have anything interesting to say?
 

No, everything seems to be OK.

I am very grateful for any hint. 


-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net

sendsize: debug 1 pid 18533 ruid 37 euid 37 start time Fri Nov 10 04:26:26 2000
/usr/lib/amanda/sendsize: version 2.4.1p1
calculating for amname '/', dirname '/'
sendsize: getting size via gnutar for / level 0
calculating for amname '/usr/doc', dirname '/usr/doc'
calculating for amname '/usr/local', dirname '/usr/local'
calculating for amname '/usr/man', dirname '/usr/man'
calculating for amname '/usr/src', dirname '/usr/src'
sendsize: getting size via gnutar for /usr/man level 0
sendsize: gettin

How do I check level 1 sizes?

2000-11-10 Thread Chris Karakas

Hello,

I want to check the level 1 sizes that AMANDA reports. For 3 filesystems
I continue to get the right size for level 0, but quite a large one for
level 1 (50-90% of level 0). I am quite sure that they did not change
in the last few days. I even forced a full dump, without any effect on
this behaviour. A more detailed post of mine about this went unanswered.

For the moment, I have taken them out of the disklist, 'cause they take
up almost half of the tape with their huge level 1s, not leaving the
other filesystems enough space.

How shall I proceed to check if AMANDA calculates the right level 1
sizes?

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: backup order and starttime

2000-11-10 Thread Chris Karakas

Frank Smith wrote:
 
 The starttime option works for making sure a backup occurs AFTER a
 certain time, but how can you make sure one occurs BEFORE a time?

I suppose AMANDA polls the current time and starts dumping as soon as
the "starttime" is in the past. In this case, if the polling interval is
small enough, then approaching the exact time from a time _AFTER_ comes
arbitrarily close to approaching it from a time _BEFORE_ that. In the
limit (AMANDA polling each and every time quantum...eh, jiffie ;-)) they
are equal. Actually, this is how real numbers (we think of time as
such...) are defined in mathematics :-)

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: explanation of the following dump reports -

2000-11-10 Thread Chris Karakas

Denise Ives wrote:
 
 Can anyone please explain to me what Amanda did here after the full dump
 to tape daily118 on Tuesday 7 Nov 2000 ?  To me it looks like Amanda did a
 level 1 dump on Wednesday am to the holding disk, then another level 1
 dump on Thursday am, then another level 1 on Friday am. 

Yes, exactly. Your AMANDA works fine, where is the problem?
You don't have any tape in the drive, so it dumps to the holding disk.
You just have to use amflush to flush the images onto tape.

 Bump level is set
 to 0.
 

In my amanda.conf I see

bumpsize 50 Mb  # minimum savings (threshold) to bump level 1 -
2
bumpdays 2  # minimum days at each level 
bumpmult 1.5# threshold = bumpsize * bumpmult^(level-1) 

but no "bump level" parameter which one could set. The bump level is
determined indirectly by AMANDA, taking the above parameters into
account. Again, I don't see anything wrong here.
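
The comment line in that excerpt encodes the rule. Working it out with
those values (plain arithmetic, not AMANDA output):

```shell
# Worked example of the bump threshold formula quoted above:
#   threshold = bumpsize * bumpmult^(level-1)
# with bumpsize 50 MB and bumpmult 1.5 as in the amanda.conf excerpt.
awk 'BEGIN {
    bumpsize = 50; bumpmult = 1.5
    for (level = 1; level <= 3; level++)
        printf "bump from level %d: threshold %.1f MB\n",
               level, bumpsize * bumpmult ^ (level - 1)
}'
```

So the threshold grows from 50 MB at level 1 to 75 MB at level 2 and
112.5 MB at level 3, which matches the "thresh 51200K" (50 MB) figures
one sees in the planner logs.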

 Also what promoted from x days ahead?
 (planner: Full dump of admin1.corp.walid.com:sda9 promoted from 4 days
 ahead.)
 

Typical computer program output in English - the verb "was" was
deliberately eaten up. Examples:

"Done", meaning "We are done", not "We have done something".
"File saved", meaning "File was saved", not "File saved something".
"Dump  promoted", meaning "Dump  was promoted", not "Dump 
promoted something". If you ask "promoted what?" here, then you
understand it in the last meaning, which was not the intended one.

In brief, it's the passive voice, not the active - just the auxiliary
verb is omitted for brevity.

What really disgusts me is something similar that has crept into the
English language used by computer professionals:

"The program does not compile" - compile _what_?
"The software ships with a CD" - ships _what_?

In this situation, to ask "what" is justified, because here we don't
just omit something for brevity, we rape the language - but that's
another story.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Getting version 2.4.2-beta2

2000-11-08 Thread Chris Karakas

Eric Wadsworth wrote:
 
 My queries about two of my problems (backing up partial shares in Windows
 boxes, and preventing monstrous report emails which include the list of
 files of windows boxes backed up) have resulted in the same solution:
 install 2.4.2-beta2.
 

Not exactly. Alexandre Oliva pointed out that you could also stay with
2.4.1p1, but then you should apply the patches that you will find at
www.amanda.org. I went this way myself and I can tell you that it works
very well. Just keep an eye on the estimates AMANDA gets from the
Windows clients - if there is something that the patches somehow did not
correct, it will show up in this area (and in the error messages, of
course!). That's for the second problem.

For the first problem, it was also suggested that you create an
additional share comprising just the directories that you need to
backup. 

It is just unfair to say that "upgrade to a beta version" was the only
solution offered. There is always some risk either way. Either you go
for the current, official version and accept that you will have to patch
it (but all you need is just one patch, why bother so much?), or you go
for the "beta" one and accept its beta status. You'll have to take one
risk or the other, but the consensus is that the risk is acceptable
either way.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Windows client

2000-11-06 Thread Chris Karakas

 Alessandro Chiauzzi wrote:
 
 I need to back up a Windows pc with Amanda.
 I didn't find appropriate documentation about "how configure amanda to
 backup Windows client".
 Could someone help me?

Download http://www.backupcentral.com/amanda.html and read it all. For
information on configuring AMANDA for Windows clients, just search this
document for "Windows" and/or "SAMBA".

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: filling a tape before going to the next ...

2000-10-31 Thread Chris Karakas

The Hermit Hacker wrote:
 
 any way of getting it to continue on with that same tape until its full?

No. This is considered a Bad Thing, because it may cause a number of
problems due to different implementations of the tape commands and it
may even lead to loss of data, so AMANDA deliberately does not do it. It
is a feature.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: amanda-user access problem

2000-10-31 Thread Chris Karakas

Pierre Volcke wrote:

  I meet a problem while configuring my backup server
  (lets call it 'theserver')
  note : 'theserver' is also an amanda client.
 
  * amanda is compiled with the "--with-amandahosts" flag
 
  * my amanda user is 'bin' ('disk' group),
 
  * the ~bin/.amandahosts file contains
a line with "theserver"
 

This line must be:

theserver bin

and not just

theserver
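
A quick way to catch the missing-user mistake is to check that every
line has exactly two fields (a throwaway sketch; the temp file stands
in for the real ~bin/.amandahosts):

```shell
# Each .amandahosts line must contain two fields: a hostname followed
# by the user name allowed to connect as.
f=$(mktemp)
printf '%s\n' 'theserver bin' > "$f"
awk 'NF != 2 { print "bad line " NR ": " $0; bad = 1 }
     NF == 2 { print "ok: host=" $1 " user=" $2 }
     END { exit bad }' "$f"
```

A line containing only "theserver" would be flagged as bad, which is
exactly the situation described above.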


-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Wrong estimates for SAMBA 2.0.6 shares (repost)

2000-10-31 Thread Chris Karakas

Hello again,

I did not receive any answers to my posting, so I repost it below, just
in case something went wrong.

The main point is that AMANDA interprets the number of blocks from the
SAMBA du command as the wanted estimate, which is wrong, and that this
is not the old "-d 3" (debug level in smb.conf must be at least 3)
problem discussed in the docs.

Since nobody else complains about this, I assume that I miss something -
but what?

--- Original posting follows ---

Hello,

today AMANDA made me angry... She tried to write 748.4MB to a tape that
is declared as having a capacity of only 66KB! The result was "end
of tape" and I had to use another medium to back up just 153.4MB.

I decided to find out why. I checked amdump.1. What happened? AMANDA
gets the wrong estimate for the SAMBA share //nymphe/e (also for the
other SAMBA shares):

got result for host bacchus disk //nymphe/e: 0 - 38377K, 1 - 38377K, 2
- 38377K

But the e share on nymphe has 216MB!

Then:

pondering bacchus://nymphe/e... next_level0 2 last_level 1 (not due for
a full dump, picking an incr level)
   pick: size 38377 level 1 days 5 (thresh 51200K, 2 days)

...and then AMANDA decides to promote //nymphe/e for a full backup. The
estimated size jumps only from 625017 to 651016, which is within the
limit of 66:

promote: moving bacchus://nymphe/e up, total_lev0 375713, total_size
651016

This is fatal, because I get an "end of tape error", since the level 0
image of //nymphe/e is at least 100MB (taking into account that I use
maximum client compression).

Hmm...so I get the wrong estimate. I checked sendsize.debug:

--- start snip --
sendsize: getting size via smbclient for //nymphe/e level 0
sendsize: running "/usr/bin/smbclient '\\nymphe\e' X -d 0 -U chris
-E -c 'archive 0;recurse;du'"
added interface ip=192.168.0.1 bcast=192.168.0.255 nmask=255.255.255.0
Total bytes written: 276480
.

38377 blocks of size 8192. 10637 blocks available
Total number of bytes: 205021041.
sendsize: getting size via smbclient for //nymphe/e level 1
sendsize: running "/usr/bin/smbclient '\\nymphe\e' X -d 0 -U chris
-E -c 'archive 1;recurse;du'"
added interface ip=192.168.0.1 bcast=192.168.0.255 nmask=255.255.255.0

38377 blocks of size 8192. 10637 blocks available
Total number of bytes: 183909873.
sendsize: getting size via smbclient for //nymphe/e level 2
sendsize: running "/usr/bin/smbclient '\\nymphe\e' X -d 0 -U chris
-E -c 'archive 1;recurse;du'"
added interface ip=192.168.0.1 bcast=192.168.0.255 nmask=255.255.255.0

38377 blocks of size 8192. 10637 blocks available
Total number of bytes: 183909873.
Total bytes written: 788316160
.
sendsize: getting size via gnutar for /dos/l level 1
--- end snip --

Clearly AMANDA interprets the number of blocks (38377) as the wanted
estimate, which is wrong. Clearly also, it is not the old "-d 3" (debug
level in smb.conf must be at least 3) problem, since AMANDA recognized
that I use SAMBA 2.x and used the "du" command. 
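
The arithmetic makes the scale of the misreading obvious (just the
numbers from the level 0 debug output above, nothing more):

```shell
# The du output reports "38377 blocks of size 8192" and a level 0
# "Total number of bytes: 205021041". Misreading the block count as a
# size in KB (which is what the estimate looks like) versus what du
# actually summed up:
awk 'BEGIN {
    blocks = 38377
    bytes  = 205021041
    printf "estimate as reported  : %d KB (~%.0f MB)\n", blocks, blocks / 1024
    printf "what du actually found: %.0f KB (~%.0f MB)\n",
           bytes / 1024, bytes / 1024 / 1024
}'
```

That is roughly 37 MB reported where du actually saw about 196 MB, in
line with the ~216 MB share described above.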

I use: amanda 2.4.1p1, samba 2.0.6, tar 1.12, samba2-2418.diff
patch, debug level = 2 in smb.conf.

Do I miss yap (yet another patch)? Don't tell me the answer is yup...

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net



Re: Changing a tape's density

2000-10-25 Thread Chris Karakas

Frederic Woodbridge wrote:
 
 Do I have to "format" a tape before I can use it, especially if it
 was on an NT box and now no longer is?  Am I making sense?
 
Since the various medium formats are generally specified in
international standards and the drivers have to abide by them, I would
say no, you don't need to format a medium just because you change the
platform. But don't expect the data to be readable on both platforms -
this may or may not be the case, even though the format remains the
same.

This is the concrete situation with the QIC standards and the ftape
driver, which implements them in Linux. I don't know whether this is the
case with DLT too, but I would expect so.

-- 
Regards

Chris Karakas
Dont waste your cpu time - crack rc5: http://www.distributed.net