SOT: linux kernel-2.4.2 and glibc

2001-04-06 Thread Harri Haataja

On Thu, 5 Apr 2001, Gerhard den Hollander wrote:
 * John Palkovic [EMAIL PROTECTED] (Thu, Apr 05, 2001 at 12:08:23PM -0500)

  I recently compiled a 2.4.2 kernel for the backup server. If I boot
...

 Hmm,
 This 2.4 kernel, does this also imply glibc2.2 ?

I have seen a few claims lately that 2.4 requires glibc2.2.
Maybe I've misunderstood every post, but I don't see why a kernel would
require anything specific to be running on top of it. A new libc makes
sense, and one compiled a certain way can be matched to 2.4, breaking 2.2
compatibility, but not the other way around.

I have 2.4.1 running on glibc 2.1.3, 2.2 and 2.0.7 alike. Haven't tried
libc5 nor diet libc =)
No problems anywhere as far as I can tell.
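For anyone wondering what their own box is running, both version numbers are
easy to check (a minimal sketch; `ldd --version` reports glibc only on
glibc-based systems, and the output of course varies per machine):

```shell
# Print the running kernel version and, on glibc systems, the libc version.
uname -r
ldd --version | head -n 1
```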

But the talks get me worried. Are there any pointers?

-- 
Funk, Funking n.
   A shrinking back through fear. Colloq. ``The horrid panic,
   or funk (as the men of Eton call it).'' --De Quincey.




Re: more with ADIC-1200 + SunOS 5.8 'sgen' driver and mtx problems

2001-04-06 Thread Craig Dewick

On Thu, 5 Apr 2001, John R. Jackson wrote:

 Well, I've tried the 'sst' driver and it works only slightly better than
 the 'sgen' driver. With the 'sst' driver I can send as many 'mtx inquiry'
 commands as I like and they all work.
 
 What about "mtx -f /dev/rsst5 status"?

No dice:

# mtx -f /dev/rsst5 status
mtx: No such device or address
mtx: Request Sense: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
mtx: READ ELEMENT STATUS Command Failed

 Anyway, has anyone done much testing with the 'sst' driver in SunOS
 5.8? It seems the bulk of people tend to use Linux or Windows so I guess
 Solaris testing is not the primary focus of the Amanda development group.
 
 Ummm, you've got that twisted a bit.  I use only Solaris and AIX and I'm
 an Amanda developer, but don't use any of the Amanda changers (it's a long
 story :-), and also do not use mtx.  So it's the mtx testing that might
 be a bit behind, not Solaris.  And for that, maybe you should ask the
 mtx people since it's their program that's not working here, not Amanda.

Sounds like a good idea. I'll pass my problem reports on to them and see
what happens.

 Be that as it may, I have a robot and sst on Solaris 2.6 and should at
 least be able to get some more debugging output into the code for you
 to try.  What version of mtx are you using?

I had a HP autochanger working (as far as the robotics go) with the 'sst'
code from the 2.4.1p1 distribution before, but the autochanger itself was
broken. 8-)

This ADIC-1200D autochanger is obviously responding differently.

 FYI, 1.10 works with inquiry for me but fails (I/O error) for status.
 Version 1.11pre2 doesn't work at all (I/O error).  I'm pretty sure I can
 get at least those things to work for me, and possibly in the process,
 get them going for you.

If you get some results, I'm more than happy to try things out here with
my setup.

I'm going to try out the stctl code tonight and have a go at using
that. It's been too long between backups. 8-)

Regards,

Craig.

-- 
Craig Dewick. Send email to "[EMAIL PROTECTED]"
 Point your web client at "www.sunshack.org" or "www.sunshack.net" to access my
 archive of Sun technical information and links to other places. For info about
 Sun Ripened Kernels, go to "www.sunrk.com.au" or "www.sun-surplus.com"




'stctl' driver will not build under SunOS 5.8

2001-04-06 Thread Craig Dewick


Hiya,

Apart from the troubles with 'mtx' mentioned in other message threads,
I've just tried to build the 'stctl' driver and it's complaining about
something to do with the variable-argument handling functions in the
code which does syslogging:

# make
gcc -D_KERNEL   -c stctl.c
stctl.c: In function `stctl_log':
stctl.c:2134: `__builtin_va_alist' undeclared (first use in this function)
stctl.c:2134: (Each undeclared identifier is reported only once
stctl.c:2134: for each function it appears in.)
make: *** [stctl] Error 1

I've emailed this to the author of the 'stctl' package, but I thought some
of you may have come across the problem before me and figured out a
solution already... 8-)

If not, I'll wait to see what Eric says in response to my message and
report back here to the list.

I tried #include-ing 'sys/varargs.h' since that's where the 'va_start' and
'va_end' macros are defined, and this is supposed to include
'sys/va_list.h'. The varargs.h file mentions the '__builtin_va_alist'
symbol in its comments, but when I tried adding
'-D__BUILTIN_VA_ARG_INCR' to the compiler flags in the makefile, so that
the build was invoked as:

gcc -D_KERNEL   -D__BUILTIN_VA_ARG_INCR -c stctl.c

it produced exactly the same result as before. That's the point when I
decided to email Eric...

Regards,

Craig.

-- 
Craig Dewick. Send email to "[EMAIL PROTECTED]"
 Point your web client at "www.sunshack.org" or "www.sunshack.net" to access my
 archive of Sun technical information and links to other places. For info about
 Sun Ripened Kernels, go to "www.sunrk.com.au" or "www.sun-surplus.com"




RE: Amanda with two tape devices

2001-04-06 Thread Carey Jung


  Can amanda use two tape devices to perform a single backup?

 You can't use them concurrently (yet), but you can set up chg-multi to
 switch between tape drives automatically.  That's what we do here.


Can you elaborate on this?  We are just beginning to set up an Exabyte 220
tape library with 2 drives and would like to know what amanda can and can't
do with them.

thanks in advance,
Carey





Re: Amanda with two tape devices

2001-04-06 Thread Jonathan Dill

Alexandre Oliva wrote:
  Can amanda use two tape devices to perform a single backup?
 
 You can't use them concurrently (yet), but you can set up chg-multi to
 switch between tape drives automatically.  That's what we do here.

Actually, there is a way that you can use them concurrently--You could
split up your dumps into 2 dump configs, one dump config for each drive,
which is what I have done.  However, all of the disks for one system
must be in the same dump config.  You cannot have some disks from a
system in one config, and other disks from that same system in the other
config, or at least it would be extremely difficult to do that.  The
reason for this is that the client will already be busy with the first
dump config, so when the second dump config runs, it will not be able to
connect.

If you run 2 dump configs concurrently, it is also a good idea to have
separate holding disks for each config, or to split the amount of disk
space that each config is allowed to use.  For example, if you have an
18 GB holding disk, only let each config use 9 GB, otherwise disk space
could run out unexpectedly and it might not be handled gracefully by
amanda.  I think technically it should not be a problem, but in my
experience I know that it has caused my backups to fail when both dump
configs were writing to the same disk and used up all of the space.
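The 9 GB split can be expressed directly in each config's amanda.conf
holdingdisk section (a sketch; the directory path and holdingdisk name are
examples, not from Jonathan's setup):

```
# In config A's amanda.conf (config B gets the other half of the disk):
holdingdisk hd1 {
    directory "/dumps/amanda"   # example spool path
    use 9 Gb                    # cap this config at half the 18 GB disk
}
```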

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])



Re: Advice: NFS vs SMB - looking for the voice of experience

2001-04-06 Thread Jonathan Dill

Alexandre Oliva wrote:
 I'd much rather use NFS than SMB.  It's generally far more efficient.
 However, God only knows how much crap an NFS server running on
 MS-Windows would have to work against, so it might be that it actually
 takes longer to run.

I recommend running some I/O benchmarks, e.g. bonnie with a 100 MB or 256
MB file, over NFS and then over SMB.  My experience has been that Sun PCNFS
is incredibly slow, but some other NFS implementation on NT might be
faster.

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])



Re: Advice: NFS vs SMB - looking for the voice of experience

2001-04-06 Thread Jonathan Dill

This point is very important.  You will have to do the equivalent of
exporting to the server with "root" enabled.  In Unix this usually is an
option like "root=X" or on Linux "no_root_squash" otherwise you may not
have sufficient privileges to read the files.  It may look like the
backups worked, but when you restore the files, you may find that the
files are the right size but only contain null characters (a.k.a. ^@ or
ASCII 0).  It all depends on how the MS NFS implementation handles UID
mapping and what happens when you have insufficient privileges to
access some file.  If you choose to use this NFS arrangement, you should
make sure to export the disk read-only, otherwise someone could use NFS
to trash your NT server(s).  You should also try restoring a backup to a
different location, e.g. the holding disk, and make sure the file
contents are OK and not bogus ^@ files.
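On a Unix NFS server the export described above would look something like
this (Linux /etc/exports syntax; the path and hostname are illustrative,
not from the original discussion):

```
# Read-only export, but let root on the backup host read every file:
/export/ntshare    backuphost(ro,no_root_squash)
```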

"John R. Jackson" wrote:
 ...  I lean towards NFS, is there any reason I should not?
 
 I know very little about this, but the one thing that popped to mind is
 whether an MS NFS server would give a tar running as root on a client (to
 NFS) enough access to get to everything.  The normal action is to convert
 all root requests to "nobody", which will not work well for backups.

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])



Re: Amanda with two tape devices

2001-04-06 Thread Alexandre Oliva

On Apr  6, 2001, "Carey Jung" [EMAIL PROTECTED] wrote:

 I think I'd just like to be able to run amrestore, using one drive,
 while the other is running amdump on the same configuration.

Then you can completely ignore one of the drives, for the purposes of
configuring Amanda, and specify its device name only when running
amrestore.

-- 
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer  aoliva@{cygnus.com, redhat.com}
CS PhD student at IC-Unicamp   oliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist*Please* write to mailing lists, not to me



RE: large filesystem problems

2001-04-06 Thread Jeff Heckart

Sorry for the lack of info on this.

1- When I said amcheck tells me "getting info", i meant amstatus.

2- I am using amanda 2.4.2p1

3- The filesystem is ufs.

4- How would I find the version of dump that I am using?  It would be again
whatever bsd4.2 comes with.

5- logs note: sd0a is 50mb.  sd0h is 51gb with approx 8gb used

'sendsize.debug'

sendsize: debug 1 pid 5846 ruid 213 euid 213 start time Fri Apr  6 07:07:11
2001
/usr/local/libexec/sendsize: version 2.4.2p1
calculating for amname 'sd0a', dirname '/'
sendsize: getting size via dump for sd0a level 0
sendsize: running "/sbin/dump 0sf 1048576 - /dev/rsd0a"
running /usr/local/libexec/killpgrp
  DUMP: Date of this level 0 dump: Fri Apr  6 07:07:11 2001
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rsd0a (/) to standard output
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 26620 tape blocks.
  DUMP: dumping (Pass III) [directories]
.
asking killpgrp to terminate
calculating for amname 'sd0h', dirname '/usr'
sendsize: getting size via dump for sd0h level 0
sendsize: running "/sbin/dump 0sf 1048576 - /dev/rsd0h"
running /usr/local/libexec/killpgrp
  DUMP: Date of this level 0 dump: Fri Apr  6 07:07:17 2001
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rsd0h (/usr) to standard output
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 8979024 tape blocks.
  DUMP: dumping (Pass III) [directories]
.
asking killpgrp to terminate
sendsize: getting size via dump for sd0h level 1
sendsize: running "/sbin/dump 1sf 1048576 - /dev/rsd0h"
running /usr/local/libexec/killpgrp
  DUMP: Date of this level 1 dump: Fri Apr  6 08:15:26 2001
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rsd0h (/usr) to standard output
  DUMP: mapping (Pass I) [regular files]

-- email from amanda (part of it):
FAILURE AND STRANGE DUMP SUMMARY:
  bugs   sd0h lev 0 FAILED [Request to bugs timed out.]
  bugs   sd0a lev 0 FAILED [Request to bugs timed out.]


STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:45
Run Time (hrs:min)         0:45
Dump Time (hrs:min)        0:00       0:00       0:00
Output Size (meg)           0.0        0.0        0.0
Original Size (meg)         0.0        0.0        0.0
Avg Compressed Size (%)     --         --         --
Filesystems Dumped            0          0          0
Avg Dump Rate (k/s)         --         --         --

Tape Time (hrs:min)        0:00       0:00       0:00
Tape Size (meg)             0.0        0.0        0.0
Tape Used (%)               0.0        0.0        0.0
Filesystems Taped             0          0          0
Avg Tp Write Rate (k/s)     --         --         --

The other oddity is I have dtimeout set for 1 in amanda.conf.  Why
would the estimate be only 45min??

Thanks

-Original Message-
From: John R. Jackson [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 05, 2001 4:54 PM
To: Jeff Heckart
Cc: [EMAIL PROTECTED]
Subject: Re: large filesystem problems


When I run amdump, everything appears to be ok, but I use amcheck and it
tells me that it is "getting estimate" ...

Huh?  Amcheck doesn't issue that kind of message.  Could you post exactly
what you did and what it said back?

What version of Amanda are you using?  What version of dump?  What type
of file system?

Take a look at /tmp/amanda/sendsize*debug and see if it has any clues.

Jeff Heckart

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]




Re: timed out

2001-04-06 Thread Rob Flory



amanda-2.4.1p1


from corp:
amandad: waiting for ack: Connection refused, retrying
amandad: waiting for ack: Connection refused, retrying
amandad: waiting for ack: Connection refused, retrying
amandad: waiting for ack: Connection refused, retrying
amandad: waiting for ack: Connection refused, giving up!
amandad: pid 20307 finish time Thu Apr  5 22:16:55 2001

does this mean corp could not make a connection back to the amanda
server?
or that the amanda server could not make a connection to corp?

corp is a real hostname, no other hostnames point to that machine in
disklist

Thanks,
Rob


"John R. Jackson" wrote:
 
   corp   hdc3 lev 0 FAILED [Request to corp timed out.]
 ...
 this error happens every night, i have already verified that:
 ...
 
 What version of Amanda?
 
 Are the /tmp/amanda/*.debug files being updated from the amdump run?
 Make sure you don't run amcheck after amdump before looking as it will
 clobber the amandad*debug file (fixed in 2.4.2p2).
 
 Is "corp" a real host name?  Do any other host names listed in disklist
 point to the same real machine as "corp"?
 
 Rob
 
 John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



block device and amanda

2001-04-06 Thread ericb

hello :)

I just discovered this beautiful piece of software that is Amanda, and I
want to use it for my enterprise.
But I can't find in the FAQ and other documents whether Amanda supports
block devices (hard drives); I know a good backup system shouldn't use
hard drives for its media, but maybe I will use an HD for economic
reasons :(.
So if Amanda doesn't support HDs, could you point me to some good tape
drive hardware at a low price? :)


E.B



suggestion of improvement

2001-04-06 Thread Roland Scialom

Dear colleagues,

This is a suggestion for a possible improvement to amanda.

  To check whether the tape cartridges are in the tape unit just 
  before starting the transfer to tape of the backup files 
  recorded on the disk, instead of doing this check at the very
  beginning of the backup session.

Looks like this check is being performed at the very beginning 
of the backup operation. So, if for any reason the tape cartridges 
are inserted a few minutes after the backup has started, 
amanda will not use the tapes and write the backup files only to
the disk.

Sincerely, 

---
Roland Scialom, system manager  INTERNET: [EMAIL PROTECTED]
Instituto de Computacao voice: +55 19 788-5843
Universidade Estadual de Campinas   
 IC - UNICAMP   fax: +55 19 788-5847
Caixa Postal: 6176  Av. Albert Einstein, 1251
13083-970, Campinas SP Brazil   Campus Zeferino Vaz



Re: [AMANDA-USERS] block device and amanda

2001-04-06 Thread Trevor Jenkins

On Fri, 6 Apr 2001, ericb [EMAIL PROTECTED] wrote:

 But I can't find in the FAQ and other documents whether Amanda supports
 block devices (hard drives); I know a good backup system shouldn't use
 hard drives for its media, but maybe I will use an HD for economic reasons :(.

I don't want to use disk-to-disk backups per se but with both Zip and Jaz
drives connected to my workstation those are attractive backup devices for
my small office intranet.

 So if Amanda doesn't support HDs, could you point me to some good tape
 drive hardware at a low price? :)


Regards, Trevor

British Sign Language is not inarticulate handwaving; it's a living language.
Support the campaign for formal recognition by the British government now!

-- 





RE: amanda issue

2001-04-06 Thread Ben Hyatt

Error: Autoconf requires GNU m4 1.1 or later 
make: *** [autoconf.m4f] Error 1 

You need GNU m4
ftp://ftp.gnu.org/gnu/m4/

-Ben 

anurag 






Re: SOT: linux kernel-2.4.2 and glibc

2001-04-06 Thread Dan Wilder

On Fri, Apr 06, 2001 at 10:53:50AM +0200, Gerhard den Hollander wrote:
 * Harri Haataja [EMAIL PROTECTED] (Fri, Apr 06, 2001 at 09:57:05AM +0300)
 
  I recently compiled a 2.4.2 kernel for the backup server. If I boot
  
 
  Hmm,
  This 2.4 kernel, does this also imply glibc2.2 ?
 
  I have seen a few claims lately that 2.4 requires glibc2.2.
 
 no,
 I didn't say ``require'', I said ``imply''.
 All the distros that are shipping a 2.4 kernel are also shipping glibc2.2.

In fact kernel 2.4 does not require glibc2.2.  I've had 2.4 running happily
since its pre-release days, on a glibc2.1 machine.

-- 
-----------------------------------------------------------------------
 Dan Wilder [EMAIL PROTECTED]         Technical Manager / Editor
 SSC, Inc. P.O. Box 55549             Phone:  206-782-8808
 Seattle, WA  98155-0549              URL http://embedded.linuxjournal.com/
-----------------------------------------------------------------------



Estimate level 0, but ran level 4

2001-04-06 Thread ahall

Hello,


I am having some strange behavior with amanda.  This morning I came in and
my level 0 had failed because the disk has more data than my
tape device is capable of backing up.  That's cool.  I tweaked the exclude
file to back up only what will fit.  I ran amcheck - all good.  So here is
where it gets weird.  I ran amdump and started a new backup.  I was
watching /tmp/amanda/sendsize.debug to make sure everything is ok.  I got
the following output from the estimate:

sendsize: getting size via gnutar for sda5 level 0
Total bytes written: 5240483840 (4.9GB, 9.1MB/s)


Which is great, except when it went to actually create the tarball to send,
it did a level 4 incremental backup:


sendsize: getting size via gnutar for sda5 level 4
Total bytes written: 891750400 (850MB, 1.1MB/s)



Why did it estimate a full, but actually run an inc?  How does one force a
full backup with amanda?


Andrew Hall




Re: Estimate level 0, but ran level 4

2001-04-06 Thread Alexandre Oliva

On Apr  6, 2001, [EMAIL PROTECTED] wrote:

 Why did it estimate a full, but actually run a inc?

It estimated *both* a full and an inc, and decided to run an inc
because either a full wasn't due or it wouldn't fit.

 How does one force a full backup with amanda.

amadmin conf force

-- 
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer  aoliva@{cygnus.com, redhat.com}
CS PhD student at IC-Unicampoliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist*Please* write to mailing lists, not to me



Re: Estimate level 0, but ran level 4

2001-04-06 Thread ahall



On 6 Apr 2001, Alexandre Oliva wrote:

 On Apr  6, 2001, [EMAIL PROTECTED] wrote:

  Why did it estimate a full, but actually run a inc?

 It estimated *both* a full and an inc, and decided to run an inc
 because either a full wasn't due or it wouldn't fit.


OK.  Here is the weird thing.  An estimate for the lvl4 was never run.
Just the estimate for the full (4.9G), which will fit on my tape, then the
tar for the inc.  Is this an error I should be concerned about, or just
weird behavior?

Andrew




Re: Strange Failure: dumps too big

2001-04-06 Thread Marcelo Souza

On Thu, 5 Apr 2001, John R. Jackson wrote:

|host2  sd0h lev 0 FAILED [dumps too big, but cannot incremental dump
|new disk]
|
|This message says the total size of the dumps Amanda wants to do is larger
|than your tape size, and that it cannot shift this backup (host2:sd0h)
|back to an incremental dump from a full dump because it is "new", i.e.
|Amanda has never seen it before.

The tape capacity given by 'tapetype' is used to decide if it'll
fit the volume? I'm using a Sony SDT-9000 that supposedly could compress
something. Can I increase the 'length' parameter and hope that the
compression will do its job?

|  The real strange thing is that this is a read-only file system and
|didn't grow up since yesterday ...
|
|So, are you saying this disk was backed up by Amanda before?  If so,
|it's very odd it now thinks it is new.  You didn't by any chance,
|run one of the Amanda commands as root instead of the Amanda user?
|Or remove or move some things?

Yes, /etc/dumpdates tells me that this volume was last backed
up on April 4th.
I run 'amadmin force' every night to ensure that a full backup will
be done; could this generate that message?

Thank you,

- Marcelo





Re: Strange Failure: dumps too big

2001-04-06 Thread John R. Jackson

   The tape capacity given by 'tapetype' is used to decide if it'll
fit the volume?  ...

No.  The "length" parameter in your amanda.conf tapetype section controls
what will fit.  The "tapetype" utility is just a tool to help figure
out the amanda.conf parameters.

I'm using a Sony SDT-9000 that supposedly could compress
something. Can I increase the 'length' parameter and hope that the
compression will do its job?

That's what I do.  Don't go nuts with the number :-).  For my particular
type of data, 20%-30% is about all I get.

Make sure you don't also use software compression.  Compressing things
twice usually expands them.

   Yes, the /etc/dumpdates tells me that this volume was last backed
up on April 4th.

What's in /etc/dumpdates doesn't matter.  It's whether Amanda did a
backup before.

   I run 'amadmin force' every night to ensure that a full backup will
be done, this could generate that message?

That's a possibility.  I didn't look at the code to be sure, but why
don't you use "dumpcycle 0" in amanda.conf instead of running the "force"?
That's the usual method to ask Amanda to do only full dumps.
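In amanda.conf terms, the suggestion above is a one-line change (sketch):

```
# Make every scheduled dump a full; no incrementals are ever scheduled.
dumpcycle 0
```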

- Marcelo

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



RE: large filesystem problems

2001-04-06 Thread Jeff Heckart

John,

Thank you very much for the response.  I am sorry for the lack of
details, but on the info below where you mentioned that I didn't give the
level 1...that was actually all the log consisted of.  Does that tell
you anything?

Does the disk having a problem seem like the most logical from your
perspective?  This is about a $6-8,000 telenet 6 disk raid 5 array with 10k
scsi drives and a 80mbit adaptec scsi2 card.  The system is overall slow as
a dog.  This is odd to me because it is a thunderbird 900 with 512mb memory.
I cannot figure this whole thing out.

What do you all think.

Thank you very much,
Jeff

-Original Message-
From: John R. Jackson [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 06, 2001 5:19 PM
To: Jeff Heckart
Cc: [EMAIL PROTECTED]
Subject: Re: large filesystem problems


4- How would I find the version of dump that I am using?  ...

I wouldn't know.  I just remember (from stories posted here) that on
Linux, you want the latest and greatest because old versions had a lot
of trouble.

sendsize: debug 1 pid 5846 ruid 213 euid 213 start time Fri Apr  6 07:07:11
2001
/usr/local/libexec/sendsize: version 2.4.2p1
calculating for amname 'sd0a', dirname '/'
sendsize: getting size via dump for sd0a level 0
sendsize: running "/sbin/dump 0sf 1048576 - /dev/rsd0a"
running /usr/local/libexec/killpgrp
  DUMP: Date of this level 0 dump: Fri Apr  6 07:07:11 2001
...
sendsize: running "/sbin/dump 0sf 1048576 - /dev/rsd0h"
running /usr/local/libexec/killpgrp
  DUMP: Date of this level 0 dump: Fri Apr  6 07:07:17 2001

Note that it only took 6 seconds to get the level 0 estimate of "/".

sendsize: running "/sbin/dump 1sf 1048576 - /dev/rsd0h"
running /usr/local/libexec/killpgrp
  DUMP: Date of this level 1 dump: Fri Apr  6 08:15:26 2001

But it took 1:08:09 to get the level 0 estimate of "/usr".  And since
you didn't post anything else, I assume the level 1 estimate took a long
while as well.

It's strange a level 0 estimate would take that long.  They are usually
reasonably quick since they don't have to do much groping around.  I think
I'd start checking into the health of that disk, or the controller,
cables, etc.

The other oddity is I have dtimeout set for 1 in amanda.conf.  Why
would the estimate be only 45min??

The dtimeout variable has to do with when the dump is actually being run,
not the estimate.  For that, you want the etimeout variable.
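In amanda.conf the two knobs sit side by side (a sketch; the values here
are illustrative, not from Jeff's config):

```
# etimeout: seconds allowed per filesystem during the estimate phase.
# (A negative value means a total budget for all of a client's filesystems.)
etimeout 3600

# dtimeout: seconds of inactivity tolerated while the dump itself runs.
dtimeout 1800
```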

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]




Re: large filesystem problems

2001-04-06 Thread John R. Jackson

...that was actually all the log consisted of.  Does that tell
you anything?

It all fits together with the Amanda server only waiting 45 minutes but
the estimates taking much longer than that.

Does the disk having a problem seem like the most logical from your
perspective?  This is about a $6-8,000 telenet 6 disk raid 5 array with 10k
scsi drives and a 80mbit adaptec scsi2 card.  The system is overall slow as
a dog.  This is odd to me because it is a thunderbird 900 with 512mb memory.
I cannot figure this whole thing out.

Well, I'd say there is something fundamentally wrong with the system if
there are multiple slowdown symptoms.

Try some simple dd's of the raw device and see what kind of timing you
get.  For instance:

  time dd if=/dev/rsda0h of=/dev/null bs=64k count=8192

That will move half a GByte, then "do the math" to see what kind of
performance you're getting (and/or adjust the numbers to move more/less
data).  For instance, on one of the disks on my system it took 72 seconds,
so that's ~7.3 MBytes/sec.
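"Doing the math" for that dd can itself be done in the shell (John's
example numbers: half a GB moved in 72 seconds):

```shell
# 8192 blocks x 64 KiB = 512 MiB moved by the dd above.
bytes=$((8192 * 64 * 1024))
secs=72
echo "$((bytes / secs / 1024)) KiB/s"
```

That prints the rate in KiB/s; divide by 1024 again for the MB/s ballpark
John quotes.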

I'll bet yours will show a problem.  But I don't know enough about your
system (or your system type in general) to advise where to go after that.
All I can suggest is that you start eliminating/swapping pieces until
it behaves or you find the culprit.

You might also do some serious SCSI chain checkout.  For instance, it's
my understanding you cannot have a narrow device after a wide device
in the same chain.  That kind of thing.  And termination is always a
likely culprit.

Good luck.

Jeff

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



Re: Strange Failure: dumps too big

2001-04-06 Thread Marcelo Souza

Hi John,

On Fri, 6 Apr 2001, John R. Jackson wrote:

|  The tape capacity given by 'tapetype' is used to decide if it'll
|fit the volume?  ...
|
|No.  The "length" parameter in your amanda.conf tapetype section controls
|what will fit.  The "tapetype" utility is just a tool to help figure
|out the amanda.conf parameters.
|
|I'm using a Sony SDT-9000 that supposedly could compress
|something. Can I increase the 'length' parameter and hope that the
|compression will do its job?
|
|That's what I do.  Don't go nuts with the number :-).  For my particular
|type of data, 20%-30% is about all I get.

OK. I'll test some values.

|Make sure you don't also use software compression.  Compressing things
|twice usually expands them.

I'm sure.

|  I run 'amadmin force' every night to ensure that a full backup will
|be done, this could generate that message?
|
|That's a possibility.  I didn't look at the code to be sure, but why
|don't you use "dumpcycle 0" in amanda.conf instead of running the "force"?
|That's the usual method to ask Amanda to do only full dumps.

It's a case of not RTFM. :)
I'm using 'dumpcycle 0' and force. %-/

Thank you,

- Marcelo





Re: large filesystem problems

2001-04-06 Thread John R. Jackson

not a typo.  I did this four times and had an average of 1.6mb/s.  That is
awful.

True, but it sure explains a lot of things :-).

Do you feel it a problem to have both the scsi3 internal drive and the raid
on the same controller?  ...

I'd have to ask a local expert, which I'll do Monday if you haven't
got it figured by then.  But if you can do it without a lot of trouble,
I sure think it's worth a shot putting the RAID all by its lonesome and
see if it helps.

You might also look for any messages it kicks out during boot, especially
things like transfer rates.

Jeff

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]



RE: large filesystem problems

2001-04-06 Thread Jeff Heckart

John,

Thank you very much for your help.  It is greatly appreciated.

That sounds like a good plan.  I will hopefully be able to try it late next
week.

Thanks again.
Jeff

-Original Message-
From: John R. Jackson [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 06, 2001 10:26 PM
To: Jeff Heckart
Cc: [EMAIL PROTECTED]
Subject: Re: large filesystem problems


not a typo.  I did this four times and had an average of 1.6mb/s.  That is
awful.

True, but it sure explains a lot of things :-).

Do you feel it a problem to have both the scsi3 internal drive and the raid
on the same controller?  ...

I'd have to ask a local expert, which I'll do Monday if you haven't
got it figured by then.  But if you can do it without a lot of trouble,
I sure think it's worth a shot putting the RAID all by its lonesome and
see if it helps.

You might also look for any messages it kicks out during boot, especially
things like transfer rates.

Jeff

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]




spectra logic bullfrog and barcodes

2001-04-06 Thread Jason Shupe

Hi all,

I'm having great fun with amanda-2.4.2p1 and a spectra logic bullfrog
ait-2 tape library...

I was just wondering if anyone got the bar code reader stuff working
with a bullfrog?

should I try the 2.5 chg-scsi?

here's some info from the 2.4.2p1 chg-scsi

$ chg-scsi -info
21 39 1 0

If you want more info about the bullfrog it should be in the developers
guide here:


http://www.spectralogic.com/common/collateral/documentation/BullFrog/92844008.pdf

And finally the contents of chg-scsi.debug will be here for a while:
 
http://d13.com/~jshupe/chg-scsi.debug.txt

Thanks,
Jason