Re: amandad

2001-05-11 Thread Annette Bitz


 I'm having trouble with Amanda 2.4.1p1 running on a Red Hat Linux server.
 Everything is working fine, except that sometimes amandad gets stopped.
 Can anyone tell me why?

 This is the message in /var/log/messages:

 May  4 14:59:56 earth xinetd[12802]: amanda service was deactivated because
 of looping
 May  4 14:59:56 earth xinetd[12802]: recv: Ungueltiger Dateideskriptor
 (errno=9)   [i.e. "Invalid file descriptor"]

There was an error in my /etc/xinetd.d/amanda: I forgot to set wait = yes.
Amanda has now been working without further problems for three or four days.
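
For reference, a working stanza looks roughly like this (the server path and
the amanda user are assumptions; adjust to your installation):

  # /etc/xinetd.d/amanda
  service amanda
  {
      socket_type = dgram
      protocol    = udp
      wait        = yes    # amandad is a UDP service; wait must be yes
      user        = amanda
      server      = /usr/local/libexec/amandad
  }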




RE: amanda failed

2001-05-11 Thread Monserrat Seisdedos Nuñez



Hello, and thanks for your answers.
If my tar is broken or my Amanda version is wrong, why is this the first time
it has happened since I started using Amanda 4 months ago?

I will follow your advice.


I received this mail. What is the meaning of "no backup size line"?
Why did the dump fail?

That message means Amanda could not parse the last line of output from
tar that says how big the backups were.  It reports the line:

 ? Total bytes written: nan (0B, ?B/s)

That's clearly wrong.  My guess is your version of GNU tar is broken.
What version are you using?  If it's 1.13 based, you need at least
1.13.17, or better yet, 1.13.19.  If it's 1.12, you need the patch
from www.amanda.org (although I don't think it affects this particular
problem).
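
A quick way to check, assuming GNU tar is installed as gtar somewhere on the
PATH, is:

  $ gtar --version | head -1
  tar (GNU tar) 1.13.19     (example output)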

FYI, these errors:

 | gtar:
 ./desar/desar11/LineaBase/lotoserver/var/socks/sock.bbox_server:
 socket ignored

are fixed in the latest Amanda (2.4.2p2).



file system problem - umount

2001-05-11 Thread Marcelo G. Narciso

Hi people,

I have a problem. When amdump sends the backup report,
it contains this message:

  canopus c0t6d0s0 lev 0 FAILED [/usr/sbin/ufsdump returned 3]
FAILED AND STRANGE DUMP DETAILS:

/-- canopus c0t6d0s0 lev 0 FAILED [/usr/sbin/ufsdump returned 3]
sendbackup: start [canopus:c0t6d0s0 level 0]
sendbackup: info BACKUP=/usr/sbin/ufsdump
sendbackup: info RECOVER_CMD=/bin/gzip -dc |/usr/sbin/ufsrestore -f... -

sendbackup: info COMPRESS_SUFFIX=.gz
sendbackup: info end
|   DUMP: Writing 32 Kilobyte records
|   DUMP: Date of this level 0 dump: Thu May 10 22:21:14 2001
|   DUMP: Date of last level 0 dump: the epoch
|   DUMP: Dumping /dev/rdsk/c0t6d0s0 (canopus:/export/home2) to standard
output.
|   DUMP: Mapping (Pass I) [regular files]
|   DUMP: Mapping (Pass II) [directories]
|   DUMP: Estimated 2937362 blocks (1434.26MB) on 0.02 tapes.
|   DUMP: Dumping (Pass III) [directories]
|   DUMP: Dumping (Pass IV) [regular files]
|   DUMP: Warning - cannot read sector 600384 of `/dev/rdsk/c0t6d0s0'
|   DUMP: Warning - cannot read sector 600400 of `/dev/rdsk/c0t6d0s0'

After amdump, the file system /dev/rdsk/c0t6d0s0 is unmounted. Why does this
happen?
Does anyone know what is happening?

Thanks a lot.




Re: file system problem - umount

2001-05-11 Thread Jonathan Dill

Marcelo G. Narciso wrote:

 |   DUMP: Warning - cannot read sector 600384 of `/dev/rdsk/c0t6d0s0'
 |   DUMP: Warning - cannot read sector 600400 of `/dev/rdsk/c0t6d0s0'

It looks to me like you have some bad sectors on your disk, or possibly
a disk drive that is on its way to failing, like the head is having
trouble seeking to some sectors.  The filesystem was probably
automatically unmounted to prevent corruption, or at least I have seen
something similar happen with an XFS filesystem on SGI IRIX.  Check the
system logs.

 After amdump, the file system /dev/rdsk/c0t6d0s0 is unmounted. Why does this
 happen?
 Does anyone know what is happening?
 
 Thanks a lot.

-- 
Jonathan F. Dill ([EMAIL PROTECTED])



compatible changers list for Solaris 2.0

2001-05-11 Thread mohren

Hi,

Did anybody compile a list of compatible changers for Solaris 2.0? :-)

E.g. does the HP DAT4 Six-Slot work (probably yes), or
even the new HP LTO Ultrium 1/9?
 
Werner Mohren
System- and Networkmanager
Research Center Ornithology of the Max Planck Society
Max Planck Institute for Behavioral Physiology




Re: tar question

2001-05-11 Thread Ray Shaw


On Fri, May 11, 2001 at 12:51:43AM -0400, Jon LaBadie wrote:
 On Thu, May 10, 2001 at 08:41:24PM -0400, Ray Shaw wrote:
  On that note, it is annoying that if I have an exclude
  file, say /etc/amanda/exclude.home, running amanda with that file
  produces different results than:
  
  tar cvf /dev/null /home -X /etc/amanda/exclude.home
 
 I don't think amanda tar's /home, I think it cd's to /home and
 tar's . (current dir).  Thus your exclude file should be
 relative to ., not /home.

Ah, yes, that's correct.  Thanks for pointing that out; it was the
first step in creating the Tower of Globbing in my last post :)

For testing, I now use:

cd /home
tar cvf /dev/null . -X /root/exclude > moo

and then look at the contents of moo.


-- 
--Ray

-
Sotto la panca la capra crepa
sopra la panca la capra campa



Re: tar question

2001-05-11 Thread Ray Shaw

On Thu, May 10, 2001 at 08:36:07PM -0500, John R. Jackson wrote:
 ... I'd like to back up at least
 their mail, and probably web directories.
 ...
 I've been trying to do this with exclude lists, but I haven't hit the
 solution yet.  ...
 
 Wow!  I wouldn't have had the guts to try this with exclude lists.
 They give me (real :-) headaches just trying to do normal things :-).

Brave or stupid, I've gotten it working with some help from a LUG
member on my sh globbing.  If you have a line like so in your
disklist:

myhost  /home   myhost-home

and the myhost-home dumptype uses the exclude list
/etc/amanda/exclude.myhost-home, which contains this:

./*/?
./*/??
./*/???
./*/?????*
./*/[!mM]???
./*/[Mm][!a]??
./*/[Mm]a[!i]?
./*/[Mm]ai[!l]

Then /home/*/Mail and /home/*/mail will be backed up.

This creation of evil could easily be extended to back up
/home/*/public_html, too, or whatever else you wanted.

 Maybe you could do it with inclusion instead of exclusion?  Take a
 look at:
 
   ftp://gandalf.cc.purdue.edu/pub/amanda/gtar-wrapper.*
 
 During the pre_estimate you could run find to gather a list of what
 you want to back up, squirrel that away and then pass it to gtar for
 both the estimate and the real run.

Yes, but that's a compile-time option, right?  As I'm a university
student and will be replaced in a few years, I don't want my successor
to be confused when apt-get dist-upgrade breaks the /home backup :)

Of course, I hope we have a bigger tape drive by then...


-- 
--Ray

-
Sotto la panca la capra crepa
sopra la panca la capra campa



GNU tar syntax (or How does amanda do that?)

2001-05-11 Thread Paul Brannigan

Sometimes I would like to just use GNU tar at
the command line for a quick archive to
a tape device on a remote host.
My Amanda system seems to do this very well.
Amanda somehow uses GNU tar to archive
files on a Solaris 2.6 host to a remote tape device that is
attached to a Linux (7.0) host.

Yet when I try to do this at the command line I get
tar complaints.  Am I using the wrong syntax?

solaris-host#/usr/local/bin/tar -cvf linux-host:/dev/nst0 /home
sh: unknown host
/usr/local/bin/tar: Cannot open linux-host:/dev/nst0: I/O error
/usr/local/bin/tar: Error is not recoverable: exiting now

Why the odd error "sh: unknown host"?
In every respect these two hosts know about each other,
with wide-open permissions for logins, .rhosts, etc.

I had read some posts about a buggy version of GNU tar.
The GNU mirror sites don't seem to specify any revisions,
just 1.13, not the rumored 1.13.19 or the 1.13.17 which is
running on my Linux host.

But like I said, as long as I let Amanda do the tar all is well.
I just wanted to be able to do a quick tar on the Solaris host without
changing any of my amanda configurations.
What say you good people?

Paul






Re: Help with Tapeless Configuration

2001-05-11 Thread John R. Jackson

I'm new to Amanda and to this list...

Welcome!

First, when Amanda runs, it reports itself as 2.4.2p1 and not p2. Is the
tapeio branch not in sync with the latest release?

It is in sync, I just forgot to update the p1 to p2.  It's fixed in the
sources so if you do a cvs update and then start over at the autogen,
it should show the right thing.

OK, so I read that Amanda tapes all need to be labeled, so I ran amlabel.
After that, it let me run amdump OK. But then I ran amdump again to try
another dump, and Amanda complained "cannot overwrite active tape". How are
tapes ejected or rotated in the tapeless configuration?  ...

You need to create multiple file:/whatever directories, one per tape,
then either change tapedev in amanda.conf to point to the current one,
or create a symlink, put that in tapedev, and change the symlink between runs.

Or you can configure up one of the tape changers (e.g. chg-multi).
They should work with file: devices.
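
A rough sketch of the symlink approach (the directory names, config name and
exact file: path here are only illustrations):

  # one directory per "virtual tape"
  mkdir -p /dumps/tape1 /dumps/tape2 /dumps/tape3
  # amanda.conf points at a fixed name, e.g.:  tapedev "file:/dumps/current"
  ln -nsf /dumps/tape1 /dumps/current      # "load" tape1
  amlabel MyConfig MyConfig-01             # label it once
  # before the next run, swap the link:
  ln -nsf /dumps/tape2 /dumps/current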

filemark 100 kbytes  # Is this needed for a hard disk?

This is not needed for a disk.

speed 1000 kbytes    # Local speed of the hard disk? Is this used?

Speed is not used by Amanda.

One more question... when I tried to do a dump of /etc, I got the following:

? You can't update the dumpdates file when dumping a subdirectory
sendbackup: error [/sbin/dump returned 1]

What am I doing wrong?  ...

You're trying to use dump on something other than an entire file system.
Dump (in general) only works on whole file systems, not subdirectories.
If you want to do a subdirectory, use GNU tar.
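
For example, a disklist entry plus dumptype along these lines would back up
/etc with GNU tar (the dumptype name and extra options are only an
illustration):

  # disklist
  myhost  /etc  etc-tar

  # amanda.conf
  define dumptype etc-tar {
      comment "subdirectory backup via GNU tar"
      program "GNUTAR"
      compress client fast
      index yes
  }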

Clinton Hogge

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: file system problem - umount

2001-05-11 Thread John R. Jackson

I have a problem. When amdump sends the backup report,
it contains this message:
...
|   DUMP: Warning - cannot read sector 600384 of `/dev/rdsk/c0t6d0s0'
...
After amdump, the file system /dev/rdsk/c0t6d0s0 is unmounted. Why does this
happen?

There isn't enough information here to know how to answer.  For one thing,
what OS are you using on the client?  For another, this isn't anything
Amanda would do.  You need to talk to your OS vendor.

My guess (and this is purely a guess) is that the disk is broken (hence
the "cannot read" errors) and the OS decided to unmount it for you.

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



sendbackup*debug

2001-05-11 Thread George Herson

John R. Jackson wrote:

  What are the complete contents of a typical /tmp/amanda/sendbackup*debug
  file for one of the failed runs?

 Don't have any from a failed run. Below is my only sendbackup*debug
 file, which is from the successful run.  
  
 # cat /tmp/amanda/sendbackup*debug
 sendbackup: debug 1 pid 17143 ruid 507 euid 507 start time Fri May  4
 20:45:05 2001
 /usr/local/libexec/sendbackup: version 2.4.2p2
 sendbackup: got input request: GNUTAR /home/amanda 0 1970:1:1:0:0:0
 OPTIONS |;bsd-auth;
   parsed request as: program `GNUTAR'
  disk `/home/amanda'
  lev 0
  since 1970:1:1:0:0:0
  opt `|;bsd-auth;'
 sendbackup: try_socksize: send buffer size is 65536
 sendbackup: stream_server: waiting for connection: 0.0.0.0.2865
 sendbackup: stream_server: waiting for connection: 0.0.0.0.2866
   waiting for connect on 2865, then 2866
 sendbackup: stream_accept: connection from 192.168.1.100.2867
 sendbackup: stream_accept: connection from 192.168.1.100.2868
   got all connections
 sendbackup-gnutar: doing level 0 dump as listed-incremental to
 /usr/local/var/amanda/gnutar-lists/dellmachine_home_amanda_0.new
 sendbackup-gnutar: doing level 0 dump from date: 1970-01-01  0:00:00 
  GMT
 sendbackup: spawning /usr/local/libexec/runtar in pipeline
 sendbackup: argument list: gtar --create --file - --directory
 /home/amanda --one-file-system --listed-incremental
 /usr/local/var/amanda/gnutar-lists/dellmachine_home_amanda_0.new
 --sparse --ignore-failed-read --totals .
 sendbackup-gnutar: /usr/local/libexec/runtar: pid 17144
 sendbackup: pid 17143 finish time Fri May  4 20:45:05 2001

 ... This looks just like a failed run.
 I didn't think this ever worked for you. ...

Can you please tell me what gives this away as a failed run?  To me, it
looks very much like a sendbackup*debug from a successful run (I have my
sources ;).  The time elapsed is small because only a very small
directory was involved.

thank you,
george



Re: missing files

2001-05-11 Thread John R. Jackson

What shows an error occurred?  ...

By "error" I meant you didn't see all the entries in amrecover that you
expected to see.  There isn't anything in the sendbackup*debug file to
indicate what went wrong (or even that anything did go wrong).  The more
useful place to look is the index file (see below).

I am not cleaning out everything between tests.  I don't know how to do
that or what that means.  

Since you're having amrecover/index problems, the important thing to
remove is all the index files related to the test.  Run "amgetconf <config>
indexdir" to see where the top of the index directory is, then
cd to the host directory within there and then cd to the disk directory
within that.  Remove all the *.gz files.  Then when you run another test
you can be sure amrecover (amindexd) isn't seeing old data by mistake.

You can zcat the most recent file Amanda creates to see what amrecover
will have to work with.  If you see all the files you expect to, but
amrecover doesn't show them, then that's one kind of problem (with the
index file itself or amrecover).  If you don't see the files you expect,
then that's a problem with the backup itself.
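
As a concrete sketch (the config, host and disk names are placeholders, and
the exact way Amanda encodes the disk name in the directory may differ):

  $ amgetconf MyConfig indexdir
  /usr/local/var/amanda/MyConfig/index
  $ cd /usr/local/var/amanda/MyConfig/index/myhost/_home_amanda
  $ rm -f *.gz                          # clear old index files for this disk
  ... run the test ...
  $ zcat `ls -t *.gz | head -1` | less  # inspect the newest index file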

george

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: GNU tar syntax (or How does amanda do that?)

2001-05-11 Thread John R. Jackson

Amanda somehow uses GNU tar to archive
files on a Solaris 2.6 host to a remote tape device that is
attached to a Linux (7.0) host.

Yet when I try to do this at the command line I get
tar complaints.  Am I using the wrong syntax?

Yes.

Amanda doesn't do things the way you're thinking.  It does not pass the
tape device name to GNU tar and have it write to the tape.  Amanda runs
all (not just GNU tar) backup programs in a mode that has them write
their image to stdout.  Amanda then gathers that and sends it across the
wire to the server.  The server may write it into the holding disk or
directly to tape.  But GNU tar (or ufsdump, etc.) knows nothing about this.

solaris-host#/usr/local/bin/tar -cvf linux-host:/dev/nst0 /home
sh: unknown host
/usr/local/bin/tar: Cannot open linux-host:/dev/nst0: I/O error
/usr/local/bin/tar: Error is not recoverable: exiting now

Why the odd error "sh: unknown host"?

I just tried this between two of my machines and things worked fine.
Then I tried deliberately entering a bad host name:

  $ gtar cvf xxx:/tmp/jrj/z .
  xxx: unknown host
  gtar: Cannot open xxx:/tmp/jrj/z: I/O error
  gtar: Error is not recoverable: exiting now

which matches what you're seeing, except with xxx instead of sh.

So there is clearly something wrong in the host name part of what you
tried to do, but unless you entered sh as the host name, I don't know
exactly what.

What happens if you do this:

  $ rsh linux-host pwd

I had read some posts about a buggy version of GNU tar.
The GNU mirror sites don't seem to specify any revisions,
just 1.13, not the rumored 1.13.19 or the 1.13.17 which is
running on my Linux host.

I don't know what they have.  You need to get the .17/.19 from alpha.gnu.org.

Paul

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



ERROR port 40001 not secure

2001-05-11 Thread syn uw

Hello

I have a little problem. I am running my Amanda backup server on my private
network to back up some clients on my Internet network. The problem is that
between these two networks I use NAT, and NAT does some port mapping, so it
translates, for example, port 858 to 40001. It looks like, to be able to
back up a client, the client needs to receive a request from a source port
lower than 1024. So is there any method to disable this on the client side?
I would like to be able to back up even if the source port is over 1024; I
don't care.

Or is there another, better method to cope with this? But I won't make my
firewall map ports under 1024; that's not a good solution, I think.

Thanks for your opinion.

Regards,
Marc
_
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.




Re: ERROR port 40001 not secure

2001-05-11 Thread John R. Jackson

... the client needs to receive a request from a source port
lower than 1024. So is there any method to disable this on the client side?
...

Sure.  Look for "not secure" in common-src/security.c and comment out
that test.
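
(If you're hunting for the spot, something like this from the top of the
Amanda source tree should find it:

  $ grep -n "not secure" common-src/security.c
)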

I would like to be able to back up even if the source port is over 1024; I
don't care.

If you're sure you understand the security implications of opening this
door, go right ahead.

I'm curious, though.  Why don't you want to let a small privileged port
range on the server go through NAT to the clients unaltered?

Marc

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: GNU tar syntax (or How does amanda do that?)

2001-05-11 Thread Bernhard R. Erdmann

John R. Jackson wrote:
 
 /usr/local/bin/tar -cvf - /home | rsh linux-host dd of=/dev/nst0
 
 (you may want to specify '-b blocksize' to the tar command... and
   'bs=blocksize' to the dd)
 
 The only problem with using dd like this is that dd does not do output
 reblocking.  It takes whatever record size comes in (i.e. the size of
 the read()) and that's what it writes.

In my experience with dd on Linux, I can recommend obs=32k. It does
the trick of reblocking.

Once I had a slow network between a DLT drive and the host I wanted to
back up from:
ssh hostname /sbin/dump 0af - / | dd obs=32k of=$TAPE
...was a pain for the tape drive.

My first guess using a buffer with dd
ssh host /sbin/dump 0af - / | dd bs=100M | dd obs=32k of=$TAPE
didn't succeed, but when I used
ssh host /sbin/dump 0af - / | dd obs=100M | dd obs=32k of=$TAPE
the tape drive rocked!

Note that even though the manpage tells you that specifying bs sets both ibs
and obs, dd does no reblocking in that case.



Re: problem backing up a host with more than 171 disklist entries of root-tar

2001-05-11 Thread Marty Shannon, RHCE

This sounds like a classic case of running out of file descriptors --
either on a per-process basis, or on a system-wide basis (more likely
per-process, as you seem to be able to reproduce it at will with the
same number of disklist entries on that host).

It seems to me that Amanda should specifically check for the
open/socket/whatever system call that is returning with errno set to
EMFILE (or, on some brain damaged systems, EAGAIN).  When that happens,
Amanda should wait for some of the existing connections to be taken down
(i.e., closed).
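
If it is the per-process limit, a quick way to inspect (and, for a test,
raise) it on Linux is sketched below; note that for amandad the limit that
matters is the one inherited from inetd/xinetd, so where you raise it
permanently depends on how the daemon is started:

  $ ulimit -n                    # per-process descriptor limit (often 1024)
  $ ulimit -n 4096               # raise it for this shell and its children
  $ cat /proc/sys/fs/file-max    # system-wide limit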

Cheers,
Marty

Bernhard R. Erdmann wrote:
 
 Hi,
 
 I'm using Amanda 2.4.2p2 on/for a Linux box (RH 6.2, kernel 2.2.19, GNU tar
 1.13.17) to back up home directories on a NetApp filer mounted via NFS.
 
 Up to and including 171 disklist entries of type root-tar, everything is
 OK. amcheck complains about the home directories not being accessible
 (amanda has uid 37), but runtar gets at them by running with euid 0 (NFS
 export with no root squashing). It takes about 3 secs for amcheck to
 check these lines.
 
 If I add some more disklist entries of the same type, amcheck hangs for
 a minute (ctimeout 60) and then reports "selfcheck request timed out.
 Host down?".
 
 /tmp/amanda gets three more files: amanda.datetime.debug, amcheck...
 and selfcheck...
 With up to 171 entries, selfcheck.datetime.debug grows to 28387 bytes,
 containing 171 "could not access" lines. Using 172 entries, it stops at
 16427 bytes and contains only 100 "could not access" lines (which is OK,
 given the NFS permissions). The last line of the disklist is checked first.
 /tmp/amanda/selfcheck... ends with:
 selfcheck: checking disk /home/User/cb
 selfcheck: device /home/User/cb
 selfcheck: could not access /home/User/cb (/home/User/cb): Permission
 denied
 selfcheck: checking disk /home/User/ca
 selfcheck: device /home/User/ca
 
 After adding one or more lines to the disklist file, only the last 100
 lines get checked, and then an amandad and a selfcheck process are left
 hanging around:
 $ ps x
   PID TTY      STAT   TIME COMMAND
 28833 pts/2    S      0:00 -bash
 28854 pts/2    S      0:00 emacs -nw disklist
 29000 pts/1    S      0:00 -bash
 29149 ?        S      0:00 amandad
 29151 ?        S      0:00 /usr/libexec/amanda/selfcheck
 29182 pts/3    S      0:00 -bash
 29227 pts/3    S      0:00 less selfcheck.20010511233745.debug
 29230 pts/1    R      0:00 ps x
 Killing selfcheck spawns another selfcheck process, and this one's debug
 file stops after having checked the last 100 disklist lines, too.
 $ kill 29151
 $ ps x
   PID TTY      STAT   TIME COMMAND
 28833 pts/2    S      0:00 -bash
 28854 pts/2    S      0:00 emacs -nw disklist
 29000 pts/1    S      0:00 -bash
 29182 pts/3    S      0:00 -bash
 29231 ?        S      0:00 amandad
 29233 ?        S      0:00 /usr/libexec/amanda/selfcheck
 29234 pts/1    R      0:00 ps x
 $ kill 29233
 $ ps x
   PID TTY      STAT   TIME COMMAND
 28833 pts/2    S      0:00 -bash
 28854 pts/2    S      0:00 emacs -nw disklist
 29000 pts/1    S      0:00 -bash
 29182 pts/3    S      0:00 -bash
 29238 ?        S      0:00 amandad
 29240 ?        S      0:00 /usr/libexec/amanda/selfcheck
 29241 pts/1    R      0:00 ps x
 $ kill 29240
 $ ps x
   PID TTY      STAT   TIME COMMAND
 28833 pts/2    S      0:00 -bash
 28854 pts/2    S      0:00 emacs -nw disklist
 29000 pts/1    S      0:00 -bash
 29182 pts/3    S      0:00 -bash
 29244 ?        S      0:00 amandad
 29246 ?        D      0:00 /usr/libexec/amanda/selfcheck
 29247 pts/1    R      0:00 ps x
 $ kill 29246
 $ ps x
   PID TTY      STAT   TIME COMMAND
 28833 pts/2    S      0:00 -bash
 28854 pts/2    S      0:00 emacs -nw disklist
 29000 pts/1    S      0:00 -bash
 29182 pts/3    S      0:00 -bash
 29251 pts/1    R      0:00 ps x
 
 Now it has finally been killed...
 
 Any ideas?

--
Marty Shannon, RHCE, Independent Computing Consultant
mailto:[EMAIL PROTECTED]



Re: problem backing up a host with more than 171 disklist entries of root-tar

2001-05-11 Thread John R. Jackson

Up to and including 171 disklist entries of type root-tar, everything is
OK.  ...
If I add some more disklist entries of the same type, amcheck hangs for
a minute (ctimeout 60) and then reports "selfcheck request timed out.
Host down?".

Wow.  If it's what I think it is, that bug has been around forever.
Sheesh!  :-)

Please give the following patch a try and let me know if it solves the
problem.

Basically, there is a deadlock between amandad and the child process
it starts (selfcheck, in your case).  Amandad gets the request packet,
creates one pipe to write the request and one to read the result, then
forks the child.  It then writes the whole packet to the child, and
that's where the problem lies.  If the pipe cannot buffer that much
data, the write loop blocks, and amandad never drains the result pipe the
child is filling, so the child in turn blocks and stops reading the request.

This patch moves the write loop into the select loop that was already set
up to read the child result.  I did a minimal test and it didn't seem to
break anything.  Well, it didn't after I got it to stop dropping core :-).

I haven't looked at this w.r.t. 2.5 yet.  I suspect things are much
different there.

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]

 amandad.diff


Dumping live filesystems on Linux 2.4 - Corruption

2001-05-11 Thread Brian J. Murrell

I am sure everybody has seen this:

http://www.lwn.net/2001/0503/kernel.php3

It would seem that one can no longer use dump (assuming one is using
Linux 2.4) to back up live filesystems.  So is everybody here now using
gnutar to back up filesystems on Linux 2.4?

Is there any loss of functionality in using gnutar?

b.

-- 
Brian J. Murrell



Re: Dumping live filesystems on Linux 2.4 - Corruption

2001-05-11 Thread C. Chan

Also Sprach Brian J. Murrell:

 I am sure everybody has seen this:
 
 http://www.lwn.net/2001/0503/kernel.php3
 
 It would seem that one can no longer use dump (assuming one is using
 Linux 2.4) to back up live filesystems.  So is everybody here now using
 gnutar to back up filesystems on Linux 2.4?
 
 Is there any loss of functionality in using gnutar?
 
 b.
 
 -- 
 Brian J. Murrell
 

Well, that's dump for ext2. I've been using GNU tar under Linux, since I've
never trusted dump for ext2.

Now, ReiserFS does not have a dump utility as far as I know. XFS does have
xfsdump, whose behavior is in between tar and dump (it goes through the
filesystem rather than the raw device, but does not touch atime), so you may
try converting to XFS. IBM JFS's backup utility under AIX has filesystem
and inode backup modes, but I don't think their dump for JFS is ready yet.
I don't know if anyone has tried BRU with Amanda, but it has tar-like
syntax and may work without much difficulty.

I'm experimenting with XFS now, and if I run into any glitches
I'll let the Amanda list know. I'd like to know how to tell Amanda
to use xfsdump rather than GNU tar on XFS partitions.


--
C. Chan  [EMAIL PROTECTED]  
Finger [EMAIL PROTECTED] for PGP public key.




Re: Dumping live filesystems on Linux 2.4 - Corruption

2001-05-11 Thread John R. Jackson

... I'd like to know how to tell Amanda
to use xfsdump rather than GNU tar on XFS partitions.

Doesn't just setting program to DUMP in the dumptype do this?
Amanda should auto-detect the file system type and run the correct dump
program (assuming it found it when you ran ./configure).  For instance,
it flips back and forth between ufsdump and vxdump on my Solaris systems.
If it doesn't do that for you, we need to work on that.
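
I.e., something like this in amanda.conf (the dumptype name and the extra
options are only an example):

  define dumptype comp-root {
      comment "whole-filesystem backup via the native dump program"
      program "DUMP"
      compress client fast
  }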

C. Chan

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: Dumping live filesystems on Linux 2.4 - Corruption

2001-05-11 Thread Jonathan Dill

C. Chan wrote:
 I'm experimenting with XFS now and if I run into any glitches
 I'll let the Amanda list know. I'd like to know how to tell Amanda
 to use xfsdump rather than GNU tar on XFS partitions.

I think you would have to recompile Amanda.  I would install
xfsdump/xfsrestore, "rpm -e dump", and then run the ./configure script;
it should then find xfsdump and choose that as the OS dump program.
Then you would assign a non-GNUTAR dumptype to the disk(s), and it should
run xfsdump for the backups.

Hopefully, the configure script does not check whether the OS is IRIX
rather than Linux and decide that xfsdump can't be installed on the
system; in that case you would have to hack the configure script a little
bit to get it to work properly.
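
A rough sketch of that rebuild (the rpm name and the idea of re-running your
original configure options are assumptions; adjust to your setup):

  rpm -e dump                        # so ./configure can't pick up ext2 dump
  ./configure [your original options]
  make && make install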

-- 
Jonathan F. Dill ([EMAIL PROTECTED])



Re: Linux kernel 2.4 and dump vs. SGI XFS

2001-05-11 Thread Jonathan Dill

Hi folks,

If you are concerned about dumping live filesystems with Linux kernel
2.4, you might be interested in this:  SGI has released an updated
anaconda installer CD for Red Hat 7.1 with XFS journaled filesystem
support.  You load the installer CD first, which includes only the RPMs
necessary for XFS support, and then you will need the regular Red Hat
7.1 CDs to complete the installation:

ftp://linux-xfs.sgi.com/projects/xfs/download/

With XFS you have the option of using xfsdump for your backups on
Linux--AFAIK this should work with amanda, but I have not tested it and
YMMV.  According to my discussions with Eric Sandeen at SGI, xfsdump
does not look at the on-disk metadata structures directly, so it would
not be subject to the problem which affects ext2fs and Linux dump.  I am
forwarding Eric's response to a couple of questions that I had about it.

Now if only amrecover would work with xfsdump, I would be in heaven :-)

 Original Message 
Subject: Re: SGI's XFS: ready for production use?
Date: Fri, 11 May 2001 11:39:19 -0500
From: Eric Sandeen [EMAIL PROTECTED]
To: Jonathan Dill [EMAIL PROTECTED]
CC: [EMAIL PROTECTED] [EMAIL PROTECTED]
References: [EMAIL PROTECTED]

Jonathan Dill wrote:
 
 Hi Eric,
 
 Are there plans to keep up with new releases of Red Hat, e.g. release a
 new anaconda disk as new versions come out?  I have a mostly SGI IRIX
 shop with a growing Linux population, so it seems somehow attractive to
 be able to use XFS as a standard--It could mean good things for
 consistent, automated backups, and being able to toss filesystems around
 between IRIX and Linux.  I would consider adopting this as a standard,
 but I want to make sure that it will be updated and not a dead end.

In terms of Red Hat integration, we don't really have an official
guarantee that we'll keep up with the Red Hat releases.  However, I
expect that we will, unofficially if not officially.  There are enough
people interested in this inside and outside of SGI that I think this
will continue.  I have a personal interest in continuing.

Of course, if Red Hat and/or Linus picks up XFS, that would be a good
thing too, and we're trying to facilitate both of those things. 

As far as SGI's commitment to XFS on Linux, while I can't speak for the
company, I see no signs of that commitment waning.  We need XFS on Linux
for upcoming products, and for our customers.

And of course, the source is out there, so that's a certain level of
guarantee that XFS will be around.

See also the Long Term Commitment... thread at
 http://linux-xfs.sgi.com/projects/xfs/mail_archive/0105/threads.html
 
 Also, there  has been a lot of chatter lately about the problems with
 Linux dump and ext2fs corruption.  Is it true that XFS with the xfsdump
 tools ought to be free from these sorts of buffer cache issues?  Or is
 it safest on Linux to stick with tools like gnutar which access the FS
 at a higher level?

Here's the scoop from Nathan Scott, our userspace guru:

 Yes, I also read this thread - it was quite interesting.
 
 Though it wasn't explicitly spelt out, I think the problem
 that was being refered to with dump (not xfsdump) is that
 it looks at the on-disk metadata structures (via libext2fs)
 to figure out what needs dumping, etc.  tar, et al. use the
 standard system call interfaces for traversing metadata and
 obtaining file data, so aren't exposed to the problems
 associated with dump-on-a-mounted-filesystem.
 
 xfsdump uses an xfs-specific ioctl (bulkstat) to walk the
 filesystem inodes, and doesn't refer directly to the on-disk
 data structures either, so it should be as safe as tar.

HTH, 

-Eric

-- 
Eric Sandeen  XFS for Linux http://oss.sgi.com/projects/xfs
[EMAIL PROTECTED]   SGI, Inc.