Re: 2.6.1 client with 2.4.3 server?

2009-05-22 Thread Mitch Collinsworth


Yes, this patch solved the problem.  Thank you!  Can we anticipate
it being integrated into future versions?

-Mitch


On Tue, 19 May 2009, Jean-Louis Martineau wrote:

amanda client 2.6.1p1 is compatible with server 2.4.3b4 and up, but it is not
compatible with 2.4.3b3 and earlier.


Try the attached patch.

Jean-Louis

Mitch Collinsworth wrote:


Hi,

Is there any gross incompatibility that would prevent a 2.6.1p1 client
from being backed up by a 2.4.3b3 server?

I've got it to the point where they are at least talking with each
other, using auth bsd.

amcheck -c is happy.
estimates worked ok.

But amstatus is showing this:

client-a:/a        0 driver: [parse of reply message failed] (10:34:30)
client-a:/var      0 driver: [parse of reply message failed] (10:41:15)
client-a:/var/log  0   31020k wait for dumping driver: (aborted:[request timeout])



hmm...  here's something interesting in the sendbackup.debug file:

   1242744076.458012: sendbackup: pid 14788 ruid 501 euid 501 version 2.6.1p1: start at Tue May 19 10:41:16 2009
   1242744076.458089: sendbackup: Version 2.6.1p1
   1242744076.458650: sendbackup:   sendbackup req: GNUTAR /var 0 1970:1:1:0:0:0 OPTIONS |;bsd-auth;index;
   1242744076.458736: sendbackup:   Parsed request as: program `GNUTAR'
   1242744076.458749: sendbackup:  disk `/var'
   1242744076.458760: sendbackup:  device `/var'
   1242744076.458770: sendbackup:  level 0
   1242744076.458781: sendbackup:  since 1970:1:1:0:0:0
   1242744076.458791: sendbackup:  options `|;bsd-auth;index;'
   1242744076.458998: sendbackup: start: client-a:/var lev 0
   1242744076.459129: sendbackup: doing level 0 dump as listed-incremental to '/var/lib/amanda/gnutar-lists/client-a_var_0.new'
   1242744076.461334: sendbackup: Started index creator: /bin/tar -tf - 2>/dev/null | sed -e 's/^\.//'
   1242744076.461537: sendbackup: pipespawnv: stdoutfd is 50
   1242744076.461947: sendbackup: Spawning /usr/libexec/amanda/runtar runtar NOCONFIG /bin/tar --create --file - --directory /var --one-file-system --listed-incremental /var/lib/amanda/gnutar-lists/client-a_var_0.new --sparse --ignore-failed-read --totals . in pipeline
   1242744076.463151: sendbackup: gnutar: /usr/libexec/amanda/runtar: pid 14792
   1242744076.463205: sendbackup: Started backup
   1242744166.493587: sendbackup: critical (fatal): index tee cannot write [Broken pipe]
   /usr/lib64/amanda/libamanda-2.6.1p1.so[0x35834215b2]
   /lib64/libglib-2.0.so.0(g_logv+0x26f)[0x3af5c34d5f]
   /lib64/libglib-2.0.so.0(g_log+0x83)[0x3af5c34f33]
   /usr/libexec/amanda/sendbackup(start_index+0x290)[0x4037f0]
   /usr/libexec/amanda/sendbackup[0x4075f0]
   /usr/libexec/amanda/sendbackup(main+0x10a4)[0x405834]
   /lib64/libc.so.6(__libc_start_main+0xf4)[0x3fe1c1d8b4]
   /usr/libexec/amanda/sendbackup[0x403139]


So it looks like a problem sending the index data.  I've set index_server
to the correct hostname in /etc/amanda-client.conf.  Will this properly
override the settings configured at compile time?  This client is installed
from the redhat rpm, so it has no local compile-time configuration.

The /etc/amanda-client.conf looks like this:

   #
   # amanda.conf - sample Amanda client configuration file.
   #
   # This file normally goes in /etc/amanda/amanda-client.conf.
   #

   conf test                     # your config name

   index_server amanda-server    # your amindexd server
   tape_server  amanda-server    # your amidxtaped server
   tapedev      tape:/dev/nst1   # your tape device
                                 # if not set, use configure or ask server.
                                 # if set to empty string, ask server
                                 # amrecover will use the changer if set to the
                                 # value of 'amrecover_changer' in the server amanda.conf.

   # auth - authentication scheme to use between server and client.
   #        Valid values are bsd, bsdudp, bsdtcp, krb5, local, rsh and ssh.
   #        Default: [auth bsdtcp]
   auth bsd

   ssh_keys                      # your ssh keys file if you use ssh auth


Anyone see anything obvious we're overlooking?  (Yeah, besides needing to
upgrade the server!)  :-)

-Mitch


2.6.1 client with 2.4.3 server?

2009-05-19 Thread Mitch Collinsworth


Hi,

Is there any gross incompatibility that would prevent a 2.6.1p1 client
from being backed up by a 2.4.3b3 server?

I've got it to the point where they are at least talking with each
other, using auth bsd.

amcheck -c is happy.
estimates worked ok.

But amstatus is showing this:

client-a:/a        0 driver: [parse of reply message failed] (10:34:30)
client-a:/var      0 driver: [parse of reply message failed] (10:41:15)
client-a:/var/log  0   31020k wait for dumping driver: (aborted:[request timeout])


hmm...  here's something interesting in the sendbackup.debug file:

   1242744076.458012: sendbackup: pid 14788 ruid 501 euid 501 version 2.6.1p1: start at Tue May 19 10:41:16 2009
   1242744076.458089: sendbackup: Version 2.6.1p1
   1242744076.458650: sendbackup:   sendbackup req: GNUTAR /var 0 1970:1:1:0:0:0 OPTIONS |;bsd-auth;index;
   1242744076.458736: sendbackup:   Parsed request as: program `GNUTAR'
   1242744076.458749: sendbackup:  disk `/var'
   1242744076.458760: sendbackup:  device `/var'
   1242744076.458770: sendbackup:  level 0
   1242744076.458781: sendbackup:  since 1970:1:1:0:0:0
   1242744076.458791: sendbackup:  options `|;bsd-auth;index;'
   1242744076.458998: sendbackup: start: client-a:/var lev 0
   1242744076.459129: sendbackup: doing level 0 dump as listed-incremental to '/var/lib/amanda/gnutar-lists/client-a_var_0.new'
   1242744076.461334: sendbackup: Started index creator: /bin/tar -tf - 2>/dev/null | sed -e 's/^\.//'
   1242744076.461537: sendbackup: pipespawnv: stdoutfd is 50
   1242744076.461947: sendbackup: Spawning /usr/libexec/amanda/runtar runtar NOCONFIG /bin/tar --create --file - --directory /var --one-file-system --listed-incremental /var/lib/amanda/gnutar-lists/client-a_var_0.new --sparse --ignore-failed-read --totals . in pipeline
   1242744076.463151: sendbackup: gnutar: /usr/libexec/amanda/runtar: pid 14792
   1242744076.463205: sendbackup: Started backup
   1242744166.493587: sendbackup: critical (fatal): index tee cannot write [Broken pipe]
   /usr/lib64/amanda/libamanda-2.6.1p1.so[0x35834215b2]
   /lib64/libglib-2.0.so.0(g_logv+0x26f)[0x3af5c34d5f]
   /lib64/libglib-2.0.so.0(g_log+0x83)[0x3af5c34f33]
   /usr/libexec/amanda/sendbackup(start_index+0x290)[0x4037f0]
   /usr/libexec/amanda/sendbackup[0x4075f0]
   /usr/libexec/amanda/sendbackup(main+0x10a4)[0x405834]
   /lib64/libc.so.6(__libc_start_main+0xf4)[0x3fe1c1d8b4]
   /usr/libexec/amanda/sendbackup[0x403139]


So it looks like a problem sending the index data.  I've set index_server
to the correct hostname in /etc/amanda-client.conf.  Will this properly
override the settings configured at compile time?  This client is installed
from the redhat rpm, so it has no local compile-time configuration.

The /etc/amanda-client.conf looks like this:

   #
   # amanda.conf - sample Amanda client configuration file.
   #
   # This file normally goes in /etc/amanda/amanda-client.conf.
   #

   conf test                     # your config name

   index_server amanda-server    # your amindexd server
   tape_server  amanda-server    # your amidxtaped server
   tapedev      tape:/dev/nst1   # your tape device
                                 # if not set, use configure or ask server.
                                 # if set to empty string, ask server
                                 # amrecover will use the changer if set to the
                                 # value of 'amrecover_changer' in the server amanda.conf.

   # auth - authentication scheme to use between server and client.
   #        Valid values are bsd, bsdudp, bsdtcp, krb5, local, rsh and ssh.
   #        Default: [auth bsdtcp]
   auth bsd

   ssh_keys                      # your ssh keys file if you use ssh auth


Anyone see anything obvious we're overlooking?  (Yeah, besides needing to
upgrade the server!)  :-)

-Mitch


Re: 2.6.1 client with 2.4.3 server?

2009-05-19 Thread Mitch Collinsworth



On Tue, 19 May 2009, Jean-Louis Martineau wrote:


I try to keep them compatible,  but I don't test with release before 2.4.5.
index_server is used only by amrecover.

Can you post the amandad.*.debug file from the client?

Jean-Louis



Yup, here's a sample below.

-Mitch


1242745188.258722: amandad: pid 14871 ruid 501 euid 501 version 2.6.1p1: start at Tue May 19 10:59:48 2009
1242745188.259342: amandad: security_getdriver(name=bsd) returns 0x358364e920
1242745188.259373: amandad: version 2.6.1p1
1242745188.259383: amandad: build: VERSION=Amanda-2.6.1p1
1242745188.259393: amandad:    BUILT_DATE=Fri Apr 10 16:12:07 PDT 2009
1242745188.259402: amandad:    BUILT_MACH=x86_64-unknown-linux-gnu BUILT_REV=1860
1242745188.259410: amandad:    BUILT_BRANCH=amanda-261 CC=gcc
1242745188.259419: amandad: paths: bindir=/usr/bin sbindir=/usr/sbin
1242745188.259428: amandad:    libexecdir=/usr/libexec
1242745188.259436: amandad:    amlibexecdir=/usr/libexec/amanda mandir=/usr/share/man
1242745188.259445: amandad:    AMANDA_TMPDIR=/tmp/amanda
1242745188.259453: amandad:    AMANDA_DBGDIR=/var/log/amanda CONFIG_DIR=/etc/amanda
1242745188.259462: amandad:    DEV_PREFIX=/dev/ RDEV_PREFIX=/dev/r
1242745188.259470: amandad:    DUMP=/sbin/dump RESTORE=/sbin/restore VDUMP=UNDEF
1242745188.259479: amandad:    VRESTORE=UNDEF XFSDUMP=UNDEF XFSRESTORE=UNDEF VXDUMP=UNDEF
1242745188.259489: amandad:    VXRESTORE=UNDEF SAMBA_CLIENT=/usr/bin/smbclient
1242745188.259497: amandad:    GNUTAR=/bin/tar COMPRESS_PATH=/usr/bin/gzip
1242745188.259506: amandad:    UNCOMPRESS_PATH=/usr/bin/gzip LPRCMD=/usr/bin/lpr
1242745188.259515: amandad:    MAILER=UNDEF listed_incr_dir=/var/lib/amanda/gnutar-lists
1242745188.259523: amandad: defs: DEFAULT_SERVER=localhost DEFAULT_CONFIG=DailySet1
1242745188.259532: amandad:    DEFAULT_TAPE_SERVER=localhost DEFAULT_TAPE_DEVICE=
1242745188.259540: amandad:    HAVE_MMAP NEED_STRSTR HAVE_SYSVSHM AMFLOCK_POSIX AMFLOCK_FLOCK
1242745188.259549: amandad:    AMFLOCK_LOCKF AMFLOCK_LNLOCK SETPGRP_VOID ASSERTIONS
1242745188.259557: amandad:    AMANDA_DEBUG_DAYS=4 BSD_SECURITY USE_AMANDAHOSTS
1242745188.259566: amandad:    CLIENT_LOGIN=amandabackup CHECK_USERID HAVE_GZIP
1242745188.259574: amandad:    COMPRESS_SUFFIX=.gz COMPRESS_FAST_OPT=--fast
1242745188.259582: amandad:    COMPRESS_BEST_OPT=--best UNCOMPRESS_OPT=-dc
1242745188.259683: amandad: dgram_recv(dgram=0x3583659788, timeout=0, fromaddr=0x3583669780)
1242745188.259718: amandad: (sockaddr_in *)0x3583669780 = { 2, 769, 11.22.33.44 }
1242745188.259753: amandad: security_handleinit(handle=0xbcf5a30, driver=0x358364e920 (BSD))
1242745188.262340: amandad: accept recv REQ pkt:

SERVICE sendbackup
OPTIONS hostname=client-a;
GNUTAR / 0 1970:1:1:0:0:0 OPTIONS |;bsd-auth;index;


1242745188.262390: amandad: creating new service: sendbackup
OPTIONS hostname=client-a;
GNUTAR / 0 1970:1:1:0:0:0 OPTIONS |;bsd-auth;index;

1242745188.264233: amandad: sending ACK pkt:


1242745188.264340: amandad: dgram_send_addr(addr=0xbcf5a70, dgram=0x3583659788)
1242745188.264360: amandad: (sockaddr_in *)0xbcf5a70 = { 2, 769, 11.22.33.44 }
1242745188.264374: amandad: dgram_send_addr: 0x3583659788->socket = 0
1242745188.270256: amandad: security_streaminit(stream=0xbd06c60, driver=0x358364e920 (BSD))
1242745188.270320: amandad: stream_server opening socket with family 2 (requested family was 2)
1242745188.270362: amandad: try_socksize: send buffer size is 65536
1242745188.270378: amandad: try_socksize: receive buffer size is 65536
1242745188.277672: amandad: bind_portrange2: Try port 11039: Available - Success
1242745188.277744: amandad: stream_server: waiting for connection: 0.0.0.0.11039
1242745188.277769: amandad: security_streaminit(stream=0xbd0f140, driver=0x358364e920 (BSD))
1242745188.277789: amandad: stream_server opening socket with family 2 (requested family was 2)
1242745188.277817: amandad: try_socksize: send buffer size is 65536
1242745188.277831: amandad: try_socksize: receive buffer size is 65536
1242745188.285066: amandad: bind_portrange2: Try port 11039: Available - Address already in use
1242745188.292295: amandad: bind_portrange2: Try port 11040: Available - Success
1242745188.292368: amandad: stream_server: waiting for connection: 0.0.0.0.11040
1242745188.292392: amandad: security_streaminit(stream=0xbd171c0, driver=0x358364e920 (BSD))
1242745188.292412: amandad: stream_server opening socket with family 2 (requested family was 2)
1242745188.292438: amandad: try_socksize: send buffer size is 65536
1242745188.292452: amandad: try_socksize: receive buffer size is 65536
1242745188.299718: amandad: bind_portrange2: Try port 11039: Available - Address already in use
1242745188.302654: amandad: bind_portrange2: Try port 11040: Available - Address already in use

RE: Red Hat amanda question

2009-03-04 Thread Mitch Collinsworth


On Wed, 4 Mar 2009, McGraw, Robert P wrote:


For the OS file systems we use the native backup, which in this case is
dump.

Robert



[Please remember to reply on-list when asking for help.]

I was afraid of that.  dump is not recommended, or even expected to work,
on Linux.  You need to use a tar dumptype and specify a file path
rather than a device.  tar will then run setuid root and the permissions
problems won't occur.
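For reference, a minimal sketch of the server-side pieces (the hostname and
dumptype name here are illustrative, not from the original thread):

```
# disklist: give GNUTAR a directory path, not a device
client.example.com  /       root-tar

# amanda.conf: a tar-based dumptype
define dumptype root-tar {
  comment tar backup of a file path
  program GNUTAR
  compress client fast
  index yes
}
```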

-Mitch



-Original Message-
From: owner-amanda-us...@amanda.org [mailto:owner-amanda-us...@amanda.org] On Behalf Of Mitch Collinsworth
Sent: Tuesday, March 03, 2009 4:54 PM
To: McGraw, Robert P
Cc: amanda-users@amanda.org
Subject: Re: Red Hat amanda question



On Tue, 3 Mar 2009, McGraw, Robert P wrote:


I have set up an Amanda client on a Red Hat 5.2 server and amcheck
recognizes the new client.

One of the things that I had to do for the 5.2 client to work is change
the permissions and group for the device /dev/root.


Are you doing dump or tar backups?

-Mitch


Re: Red Hat amanda question

2009-03-03 Thread Mitch Collinsworth


On Tue, 3 Mar 2009, McGraw, Robert P wrote:


I have set up an Amanda client on a Red Hat 5.2 server and amcheck
recognizes the new client.

One of the things that I had to do for the 5.2 client to work is change
the permissions and group for the device /dev/root.


Are you doing dump or tar backups?

-Mitch


Re: krb5 auth problem

2008-07-01 Thread Mitch Collinsworth


If your realm is YZ.EDU, then that's what you use.  If UVWX.YZ.EDU is
a host name and not a realm name, then it doesn't belong in your
principal names.

Can you explain why you want to auth against the secondary rather than
the primary?  I can't think of any reason that should matter.

-Mitch


On Tue, 1 Jul 2008, Chad Kotil wrote:


Here's an update on the Kerberos realm issue I am now seeing.

I want to use my secondary KDC (UVWX.YZ.EDU) rather than the primary KDC
(YZ.EDU), but amanda doesn't seem to know how to look for it. I include the
KDC realm in all of my configs: amanda.conf and .k5login.

Here is my .k5login
backup/[EMAIL PROTECTED]

I am able to kinit with the secondary KDC on the client using the keytab that 
I have on the server.


[EMAIL PROTECTED] tmp]$ kinit backup/[EMAIL PROTECTED] -kt /home/ckotil/keytab-amanda

[EMAIL PROTECTED] tmp]$
This works just fine.

Here is what i have in my amanda.conf

krb5keytab  /etc/amanda/keytab-amanda
krb5principal   backup/[EMAIL PROTECTED]

The reason I think that amanda is ignoring the kerberos realm is because of 
this error that I see on the client in /tmp/amanda/amandad.


1214918169.546254: amandad: gss_name host/[EMAIL PROTECTED]
1214918169.546587: amandad: critical (fatal): gss_server failed: can't 
acquire creds for host key host/skip: No such file or directory


It claims the gss_name is host/[EMAIL PROTECTED] when it should be 
host/[EMAIL PROTECTED]



Any ideas?

Thanks,

--Chad

On Jun 26, 2008, at 9:37 PM, Chad Kotil wrote:


Ian,
Jean-Louis provided me with a patch that fixed this problem. The patch was
posted to the list.


I now face a new problem. I need to use my secondary kdc REALM to 
authenticate, and not my default realm. The keytab on the server is from 
the second kdc realm and the principal is from this realm too. But, the 
client tries to authenticate with the default realm.

Any idea how I can tell the client to use the secondary kerberos realm?

Thanks,

--Chad


On  Jun 26, 2008, at 6:46 PM, Ian Turner wrote:


Chad,

This is a bug in Amanda. I have filed a bug report. As a workaround, you can
probably make it work by compiling --without-force-uid.

I don't think we have any kerberos users who test out daily builds, so
sometimes things break and nobody notices right away. Maybe if you have a
spare machine, you can become the community kerberos tester. :-)

Cheers,

--Ian

On Thursday 26 June 2008 10:36:56 you wrote:

When I run amandad via xinetd as root, I get this error.
1214490832.259079: amandad: critical (fatal): running as user root
instead of amandabackup

In the Kerberos wiki it says amandad will relinquish root permissions
after reading the keytab.  It doesn't seem to be doing that.
Also, what keytab on the client needs to be read as root?

--Chad

On Jun 25, 2008, at 5:29 PM, Jean-Louis Martineau wrote:

xinetd must be configured to run amandad as root.

Jean-Louis

Chad Kotil wrote:

I am trying to setup krb5 auth on amanda 2.6.0p1. I built the
server and client --with-krb5-security, added a new principal to my
KDC ([EMAIL PROTECTED] REALM), and wrote a keytab file and
placed it on the server. It is locked down so only amandabackup
(the user that runs amanda) can read it. The clients have
a .k5amandahosts file containing the following:

[EMAIL PROTECTED] REALM
backupmaster.f.q.d.n [EMAIL PROTECTED] REALM

my amanda.conf file contains

krb5keytab  /etc/amanda/krb5.keytab-amanda
krb5principal   [EMAIL PROTECTED] REALM


On both of my krb5 auth clients I am seeing this error:
1214425629.641678: amandad: critical (fatal): gss_server failed:
real uid is 10036, needs to be 0 to read krb5 host key

10036 is the UID for amandabackup, 0 is the UID for root.

Both clients work fine if I just use bsdtcp auth. I am using ssh
auth everywhere else but for these two particular hosts I cannot
use ssh keys.

Any ideas?

Thanks,

--Chad


Re: Can't seem to disable software compression.

2008-04-30 Thread Mitch Collinsworth



compress none turns off compression of the data, but amanda will
still compress the indexes.  You can check your tape contents to
verify that the data there is uncompressed.
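One way to do that check, sketched as a shell function (untested here; the
device name and block size are assumptions, as is the layout: the tape label
is file 0, and each dump image after it starts with a 32 KB Amanda header):

```shell
# Show what file(1) makes of the first data block of the Nth dump image
# on the tape; gzip data there means compression is still happening.
first_dump_filetype() {
    dev=${1:-/dev/nst0}   # non-rewinding tape device (assumption)
    n=${2:-1}             # dump images start at tape file 1, after the label
    mt -f "$dev" rewind
    mt -f "$dev" fsf "$n"
    # skip the 32k Amanda header block, then classify the next block
    dd if="$dev" bs=32k skip=1 count=1 2>/dev/null | file -
}
```

If it reports "gzip compressed data" rather than "tar archive", something is
still compressing the stream.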

-Mitch


On Wed, 30 Apr 2008, Edi ??uc wrote:


Hi.

It seems that I can't disable software compression in amanda. I tried
everything that in my opinion could have an impact on compression, but no
luck. The problem is that I have a relatively slow machine and it takes
forever to complete the backup. If I check the processes I see this:


19021 ?S  0:00  \_ /bin/sh /usr/sbin/amdump AAA1
19031 ?S  0:00  \_ /usr/lib/amanda/driver AAA1
19032 ?S  0:01  \_ taper AAA1
19037 ?S  0:01  |   \_ taper AAA1
19033 ?S  0:08  \_ dumper0 AAA1
19034 ?S  5:31  \_ dumper1 AAA1
19074 ?S  0:08  |   \_ /bin/gzip --best
19035 ?S  0:00  \_ dumper2 AAA1
19036 ?S  0:00  \_ dumper3 AAA1
19066 ?S  8:59  \_ chunker1 AAA1

I suppose that there shouldn't be a gzip --best if I disable the 
compression. I also tried to use the directive compress fast in the define 
dumptype section, but no luck. gzip --best is still there (it should at 
least change to gzip --fast).


This is my disklist:
server  /boot   tar_midp
server  /   tar_midp


And this is my amanda.conf:
org  AAA
mailto   root
dumpuser backup   # the user to run dumps under

inparallel 4
dumporder sssS
taperalgo first
displayunit k
netusage  600 Kbps
dumpcycle 1 weeks
runspercycle 5
tapecycle 10 tapes
bumpsize 20 Mb
bumppercent 20
bumpdays 1
bumpmult 4
etimeout 300
dtimeout 1800
ctimeout 30
tapebufs 20
usetimestamps no
runtapes 1
tapedev /dev/nst0
rawtapedev 
changerfile /etc/amanda/AAA/changer.conf
changerdev /dev/null
maxdumpsize -1
tapetype HPC7972A
labelstr ^AAADLT[0-9][0-9]*$
amrecover_do_fsf yes
amrecover_check_label yes
amrecover_changer 
holdingdisk hd1 {
  directory /var/local/arhiv/amanda_hold
  use -2 Gb
  chunksize 2Gb
  }
autoflush no
infofile /var/lib/amanda/AAA1/curinfo
logdir   /var/log/amanda/AAA1
indexdir /var/lib/amanda/AAA1/index

define tapetype HPC7972A {
  comment just produced by tapetype program
  length 193024 mbytes
  filemark 0 kbytes
  speed 13723 kps
}

define dumptype global {
  comment Global definitions
  index yes
  compress none
  holdingdisk auto
  comprate 1
  estimate calcsize
}

define dumptype tar_midp {
  comment Tar mid priority
  program GNUTAR
  exclude list .amanda.excludes
  priority medium
  compress none
  index yes
  holdingdisk auto
  comprate 1
  estimate calcsize
}

define interface local {
  comment a local disk
  use 1000 kbps
}

define interface le0 {
  comment 10 Mbps ethernet
  use 400 kbps
}

The installation is on Debian etch (4.0). Amanda (server and client) version 
2.5.1p1-2.1


Anyone any idea?

Regards,
Edi

Re: backup/recover using tar and hard links

2007-10-17 Thread Mitch Collinsworth


On Wed, 17 Oct 2007, Dustin J. Mitchell wrote:


Obviously, this isn't ideal.  I'm surprised nobody else has been
snagged by this before.  I've been using Amanda on Cyrus mailboxes for
years, with lots of recoveries.  I guess I've just gotten lucky.


We have no experience with cyrus, yet, but have been talking about it
for a while.  After reading this I forwarded it to a co-worker, who wrote
back this:

Don't let cyrus do it.


From http://cyrusimap.web.cmu.edu/imapd/overview.html#singleinstance

--
Single Instance Store

If a delivery attempt mentions several recipients (only possible if the MTA is
speaking LMTP to lmtpd), the server attempts to store as few copies of a
message as possible. It will store one copy of the message per partition, and
create hard links for all other recipients of the message.

Single instance store can be turned off by using the singleinstancestore flag
--


Obviously this won't help the person with the failing restore, but it's
something to consider for everyone else.  Who really is short enough on
disk space in this day and age that they think using hard links to avoid
storing duplicate e-mail is still a good idea?
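For anyone curious how much single-instance store is actually in play on
their spool, a quick sketch (the Cyrus partition path is just an example):

```shell
# Count regular files with more than one hard link under a directory,
# e.g. a Cyrus mail partition such as /var/spool/imap.
count_hardlinked() {
    find "$1" -type f -links +1 | wc -l | tr -d ' '
}
```

A large count means a plain tar restore of that partition will balloon in
size, as each link comes back as a full copy.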

(And just btw, we had lately been losing the battle against spam using
spamassassin, even though it was pretty effective when first deployed.
We recently added graylisting and are seeing very little spam making it
through now.)

-Mitch


Re: Sun 622 DLT8000 drive

2007-09-19 Thread Mitch Collinsworth


On Wed, 19 Sep 2007, Toomas Aas wrote:

I'm thinking about buying a used DLT8000 drive for the purpose of
occasionally reading some old DLT tapes (not with Amanda, so sorry for the
OT). Sun Model Number 622, Part Number 599-2347-02 is available on eBay, but
I can't find any online documentation about this specific drive. Maybe
someone on this list has one, and can point me to some online docs?


I assume that it is basically a standard SCSI DLT drive and should work fine
with a non-Sun server, right?



Right.  It should be able to read any tape written by DLT4000, DLT7000,
or DLT8000.

-Mitch



Re: ssh tunneling from wherever

2007-08-07 Thread Mitch Collinsworth


On Tue, 7 Aug 2007, Dustin J. Mitchell wrote:


All in all, it sounds like a lot of work, unless this is a months-long
conference :)


Which is why it would be really nice to have a different triggering method
for performing backups on roaming laptops.  Something that begins with the
laptop calling in to the server and saying "Yoo-hoo!  I'm over here.  Can
you please back me up now?"  Then the server can do the backup to holding
disk and then flush to tape the next time a regularly scheduled config run
is made.

-Mitch


Re: new backup server

2006-12-16 Thread Mitch Collinsworth


On Thu, 14 Dec 2006, Frank Smith wrote:


AIT5 recently came out, and it can read AIT3 and AIT4 tapes, and has
400GB native capacity.


Interesting.  I followed up on this and it appears to be true.  So, we
will consider upgrading our Qualstar AIT2 library with AIT5 drives at
some point.  Still, we're going forward with our LTO3 library purchase
at this point.  The one shortcoming I see in reading the AIT5 specs is that
it is 400 GB native capacity but still a 24 MB/sec transfer rate, the same
as AIT4.  So it will take nearly 5 hours to write a tape at full speed.

After waiting forever for AIT3 to get out the door, and after then
being promised that AIT4 would be backwards compatible, only to find
out it wasn't sometime AFTER it was released, we've lost all loyalty
to the AIT line.



   Unless you frequently have a need to read old tapes, keeping an
old drive or two around just to read old tapes isn't a big deal.


I gotta disagree with this though.  Keeping an old AIT drive around
to be able to infrequently read old tapes is a recipe for disappointment.
Old AIT drives' capstans eventually fail and lead to drive errors.  If
you want to read old AIT tapes, either use a modern drive with backwards
read compatibility, plan on having to refurbish that old drive you're
keeping around every now and then, or migrate the data to modern media.



The advantage of not switching formats is that you can just replace
the drives and the tapes to upgrade a library to higher capacity.


Agreed.  Which is the only reason we will consider buying AIT5 drives
now, because we already have an AIT2 and an AIT3 library.  AIT4 was,
apparently, just a very bad dream.

-Mitch


Re: new backup server

2006-12-14 Thread Mitch Collinsworth


On Thu, 14 Dec 2006, Chris Hoogendyk wrote:


I'm interested in whether anyone on the list has any experience or
comments on my choice of tape changer, or comments on issues related to
how it is configured and potential modes of upgrading (adding another
tape drive, adding another changer, etc.)


You didn't say which AIT drive is going in your AIT changer.  Here we
have gone from AIT1 to AIT2 to AIT3.  Just yesterday I ordered a new
library with LTO3.  What soured us on the AIT line is that AIT4 is not
backwards read compatible with any earlier AIT drives.  In other words
if I went to AIT4 I would not be able to use it to even read any of our
large existing collection of AIT1, 2, and 3 tapes.  So at this point it
no longer matters to us whether we stay with the AIT line or not.
Depending on which AIT drive you're choosing, this may or may not be a
concern for you.

Given that it is for us, we took this as our opportunity to move to LTO,
which is at least an industry standard with multiple vendors supplying
drives.  (Sony can take as long as they want to come out with the next
generation of AIT, since they're the only supplier.  We waited what
seemed like forever for AIT3 to finally come out.  Way past its expected
release date.  And AIT4 was promised all along to be backwards read
compatible, but that was dropped at the very last minute.)


The tape changer I'm looking at is the Sony StorStation AIT Library
LIB-162/A4. It is a carousel rather than a robot. It holds 16 tapes
(3.2TB native, anybody's guess compressed) and can have a second tape
drive added. It is significantly less expensive than the expandable
robot systems I was looking at. Also, in the expandable systems,
adding the expansions was very expensive.


Not sure what systems you looked at, but I was surprised to find that
in the Qualstar RLS series of expandable libraries, adding more tape
slots is not a big money proposition.  The LTO library I ordered starts
with 12 slots and is expandable up to 44 slots in increments of 8, for
$1000 (list) per increment.  As an .edu you may do better than that on
price.  Also with AIT slots being smaller, they might come cheaper, too.
I don't know.

Hope this helps in some way.

-Mitch


Re: dd problems

2006-11-06 Thread Mitch Collinsworth


I am trying to use dd to copy a tape made by amanda to a temp spot on the 
HDD, but I am having great difficulty.  When I try to use dd, it will only 
copy one file at a time then stop, no matter what I do.  As far as I know, dd 
should just take the entire tape from beginning to end. . .


dd does one tape file at a time, which yes, is what amanda writes.
In order to do what you're trying to do you just need to write a
short script to put your dd into a loop.
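Something like this untested sketch would do it (the device name and block
size are assumptions; on the non-rewinding device the tape position persists
between dd invocations, so each dd consumes one tape file, and an empty read
means you've run past the last filemark):

```shell
# Copy each tape file to its own file on disk, one dd run per tape file.
tape_to_dir() {
    dev=$1   # e.g. /dev/nst0 (non-rewinding, so position advances)
    out=$2   # destination directory
    mkdir -p "$out"
    i=0
    while :; do
        dd if="$dev" of="$out/file.$i" bs=32k 2>/dev/null
        # an empty output file means end of data: stop
        if [ ! -s "$out/file.$i" ]; then
            rm -f "$out/file.$i"
            break
        fi
        i=$((i + 1))
    done
    echo "$i files copied"
}
```

Rewind first with `mt -f /dev/nst0 rewind`, then run it; file.0 will be the
Amanda tape label and file.1 onward the dump images.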

-Mitch


Re: amdump issues involving wildcards--quoted wildcards appear to be sent to, and thus being interpreted literally by, gtar

2006-11-02 Thread Mitch Collinsworth


On Thu, 2 Nov 2006, Ian R. Justman wrote:

My conjecture of what's happening is that the gtar command is being run with 
quotes around the items in the include statement in my disklist file, 
causing gtar to not perform any globbing and, instead, look for any pathname 
with actual brackets and globbing characters in it.


Have a look in /tmp/amanda/sendbackup.xxx.debug.  That should show you
what the command line arguments being used look like.

-Mitch


Re: Still struggling with L0

2006-07-31 Thread Mitch Collinsworth


On Mon, 31 Jul 2006, Alan Pearson wrote:


I've tried to force an L0 backup of it, like:

amadmin DailySet1 force qtvpdc.lab:/Shares
amadmin: qtvpdc.lab:/Shares is set to a forced level 0 at next run.


But when amdump runs, it just schedules an L1 or L2 backup.


Syntax:

% amadmin help

Usage: amadmin conf command {args} ...
Valid commands are:
version # Show version info.
force [hostname [disks]* ]+ # Force level 0 at next run.
unforce [hostname [disks]* ]+   # Clear force command.
force-bump [hostname [disks]* ]+# Force bump at next run.
force-no-bump [hostname [disks]* ]+ # Force no-bump at next run.
unforce-bump [hostname [disks]* ]+  # Clear bump command.
reuse tapelabel ...   # re-use this tape.
no-reuse tapelabel ...# never re-use this tape.
find [hostname [disks]* ]*  # Show which tapes these dumps are on.
delete [hostname [disks]* ]*# Delete from database.
info [hostname [disks]* ]*  # Show current info records.
due [hostname [disks]* ]*   # Show due date.
balance # Show nightly dump size balance.
tape# Show which tape is due next.
bumpsize# Show current bump thresholds.
export [hostname [disks]* ]*# Export curinfo database to stdout.
import  # Import curinfo database from stdin.
disklist [hostname [disks]* ]*  # Show disklist entries.


So your amadmin DailySet1 force qtvpdc.lab:/Shares
should be amadmin DailySet1 force qtvpdc.lab /Shares

-Mitch


Re: Incorrect bahaviour causes backup loss - further update!!

2006-07-28 Thread Mitch Collinsworth


On Fri, 28 Jul 2006, Alan Pearson wrote:


How can I force a level 0 dump ?


amadmin force


Re: Getting list of DLE's w/o Lev 0s

2006-07-28 Thread Mitch Collinsworth


On Fri, 28 Jul 2006, C. Chan wrote:


Is there a quick way to determine which DLEs do not have any
Lev 0s on any tapes?


amoverview


Re: Incorrect bahaviour causes backup loss !!

2006-07-27 Thread Mitch Collinsworth


[Please don't cross-post.]

It sounds like you don't have enough tapes in your rotation.  You
probably want to look into adding more.

Also, amcheck will alert you to this problem before you amdump.  If
you choose to ignore the warning...

-Mitch


Re: Incorrect bahaviour causes backup loss !!

2006-07-27 Thread Mitch Collinsworth


[Please don't cross-post.]

On Thu, 27 Jul 2006, Alan Pearson wrote:


It happened because some large files were added in between amcheck and
amdump, and over the past month more machines were added, taking us nearer
the tape capacity. I will add more tapes (when they arrive!!), but it
_could_ still happen, and I believe it should NEVER be allowed to happen
that you get into a state where you don't have a full backup of the machine.


Well let's consider your options...  If the next tape in sequence has a
dump you don't want overwritten, then you need to pull it out of the
sequence and put it in a drawer somewhere to keep it safe.  Then you'll
want to go to the next tape in the sequence.  But if your tape cycle is
too short you're likely to have the same problem with that one, too.  If
your plan is to write to holding disk until a new box of tapes arrive then
don't put any tape in the drive.  If you're this short on space (and yes I
know it happens, it's happening to me right now, too) then you have to
monitor things closely.  Keep an eye on amcheck output, your daily amdump
e-mails, and on amoverview.  I really doubt you're going to find a
majority of amanda users who agree with changing the system to write nothing
to the tape that's in the drive when it's the correct next tape in sequence,
just because your data doesn't fit on the number of tapes you're providing.

-Mitch


Re: One DLE too many!

2006-03-15 Thread Mitch Collinsworth


On Wed, 15 Mar 2006, Jean-Louis Martineau wrote:

That's not fixed in 2.5.0; that will be done after the dumper-api gets 
implemented.


dumper-api?  I thought the story was that's being replaced with
application-api.

In any case I offered a programmer's time 2 years ago to fix
this w/o waiting on any api.  I just wanted agreement first
that if we developed it as proposed it would be accepted.
But we were never told it would be accepted.  Or not accepted.
Just no answer at all.

-Mitch


Re: One DLE too many!

2006-03-14 Thread Mitch Collinsworth


On Tue, 14 Mar 2006, Jon LaBadie wrote:


On Tue, Mar 14, 2006 at 01:44:50PM -0500, Vytas Janusauskas wrote:


Amanda Backup Client Hosts Check

ERROR: hal: [Can't open disk '/mnt/data06/Deforest3']
ERROR: hal: [No include for '/mnt/data06/Deforest3_PART1']
Client check: 5 hosts checked in 3.259 seconds, 2 problems found



IIRC amcheck does NOT run some of its checks as root.
Thus if the amanda user running amcheck can not visit
/mnt/data06/Deforest3 and needed directories below that,
it could cause errors like that above when amcheck looked
for files.


It's worse than that.  Include processing is performed before setuid
root mode is started.  selfcheck, sendsize, and sendbackup all fail
to include directories that the amanda user can't read.  You just
lose and the directories don't get backed up.  Makes it hard to
provide a backup service for machines that aren't centrally managed,
which is more and more the way the world (for some of us) is going.

-Mitch


Re: One DLE too many!

2006-03-14 Thread Mitch Collinsworth


On Tue, 14 Mar 2006, Stefan G. Weichinger wrote:


Mitch Collinsworth schrieb:


On Tue, 14 Mar 2006, Jon LaBadie wrote:

IIRC amcheck does NOT run some of its checks as root.
Thus if the amanda user running amcheck can not visit
/mnt/data06/Deforest3 and needed directories below that,
it could cause errors like that above when amcheck looked
for files.


It's worse than that.  Include processing is performed before setuid
root mode is started.  selfcheck, sendsize, and sendbackup all fail
to include directories that the amanda user can't read.  You just
lose and the directories don't get backed up.  Makes it hard to
provide a backup service for machines that aren't centrally managed,
which is more and more the way the world (for some of us) is going.


Sounds BAD.

I'd like to know more on that ... is this a major issue for Amanda in
general ... do we have to do major patches ... what can we do ...


It's BAD for those of us who are trying to offer backups for machines
that aren't centrally managed.  It was discussed a bit on -hackers in
2004 - April and again in July.  Solutions were suggested but never
agreed upon.  One was to add another setuid to perform include
parsing.  Another was to add include parsing to gnutar.



Amanda 2.5.0 is near and I would really prefer to get it out without
major problems in its source code.


I haven't looked at 2.5.0 code to see if it's fixed there or not.

-Mitch


Re: One DLE too many!

2006-03-14 Thread Mitch Collinsworth


On Tue, 14 Mar 2006, Stefan G. Weichinger wrote:


Mitch Collinsworth schrieb:


It's BAD for those of us who are trying to offer backups for machines
that aren't centrally managed.
It was discussed a bit on -hackers in
2004 - April and again in July.  Solutions were suggested but never
agreed upon.  One was to add another setuid to perform include
parsing.  Another was to add include parsing to gnutar.


I don't remember that thread:
- What exactly do you mean by (not) centrally managed?


Sorry.  What I mean is, in the case where the admin for the backup
system is also the admin for the machines being backed up (or at least
works in the same shop with that admin) it is reasonably easy to work
around this problem by just modifying the directory permissions to
allow the amanda user to read what it needs.  Easier than trying to
solve the problem by modifying amanda.
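The permissions workaround described above can be sketched as follows, run on the client as root (paths are illustrative; this demonstrates on a scratch directory rather than a real DLE, and the `amanda` user name in the ACL variant is an assumption):

```shell
# Sketch: grant the amanda user read/traverse access to a DLE tree so
# selfcheck/sendsize/sendbackup can process includes before setuid root.
DLE=$(mktemp -d)            # stand-in for the real DLE directory
mkdir -p "$DLE/sub"

# Blunt option: world-readable/traversable.  Capital X adds execute
# (traverse) permission only on directories.
chmod -R o+rX "$DLE"

# Finer-grained alternative on ACL-capable filesystems:
#   setfacl -R -m u:amanda:rX "$DLE"

ls "$DLE" >/dev/null && echo readable
```

The ACL form avoids exposing the tree to every local user, at the cost of requiring ACL support on the client filesystem.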

In the case where the backup admin is offering a service to folks who
admin their own machines, each time this problem is encountered it
takes a bunch of time to walk the user through making the permission
changes.  And of course one of the questions one often hears in this
situation is "I have to fiddle my permissions in order for your backup
software to work?  Are you sure this is the best software for this?"

Modifying amanda to avoid the problem in the first place looks much
more attractive in this situation.

-Mitch


Re: Off-Topic: gmail problems with the list

2006-02-16 Thread Mitch Collinsworth


On Thu, 16 Feb 2006, Guy Dallaire wrote:


Could the list insert its e-mail address in the reply-to header of
its messages ?


Hack the list to work around a missing feature in your e-mail client?
Please, no.


Otherwise, when there's nothing, I think gmail uses
the original poster's e-mail address for the reply-to.


Is there really no reply-all function?  If not then you should find a
better e-mail client.

-Mitch


Re: Running multiple amdumps in paralell?

2005-09-12 Thread Mitch Collinsworth


On Mon, 12 Sep 2005, Toralf Lund wrote:

Is it safe to run multiple instances of amdump simultaneously? I mean, with 
different configs, but possibly the same hosts and disks?


Different configs: yes
same client hosts: no

-Mitch


Re: Holding disk size question

2005-08-27 Thread Mitch Collinsworth


On Sat, 27 Aug 2005, Marcus wrote:


In amanda.conf it says "If a dump is too big to fit on the
holding disk then it will be written directly to tape."

I'm hoping to write 200g at a time to DLT, and would
like to stream it if possible. Does that mean I need
200g of holding disk?


Yep.

I scrounged up 20g, but another

180g doesn't look likely.


Huh?  Disks are dirt cheap these days.  Look at this one:

http://www.newegg.com/product/Product.asp?Item=N82E16822144309&CMP=OTC-Froogle&ATT=Western+Digital+Caviar+SE+WD2500JB+250GB+7200+RPM+IDE+Ultra+ATA100+Hard+Drive

250 GB for $110.  That's nothing compared to what you'll be spending on
a rack of tapes, and what you probably already paid for the DLT drive.

-Mitch
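The sizing rule in the exchange above is simple arithmetic: to keep the drive streaming, the holding disk must be able to hold the whole dump set being written through it. A minimal check using the thread's own numbers:

```python
# Holding-disk shortfall for the figures discussed above.
dump_set_gb = 200    # nightly dump set the poster wants to stream to DLT
holding_gb = 20      # holding space scrounged up so far
shortfall_gb = dump_set_gb - holding_gb
print(shortfall_gb)  # -> 180, the "another 180g" mentioned in the thread
```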


Re: [off-topic] Better Backup Media

2005-06-23 Thread Mitch Collinsworth


On Tue, 21 Jun 2005, Chris Loken wrote:

BUT - my AIT-4 drive apparently can't even read AIT-3 tapes. Seems 
mind-bogglingly stupid but my vendor assures me it's true (haven't dared to 
try).


Anybody understand if this is an issue that's going to be resolved and, if 
so, will it be in firmware or hardware?


Answer looks pretty clear from the charts here:

http://b2b.sony.com/documents/category/storage/branded-tape/AIT_Drives/AIT-4/AITMedia_05.pdf

I suppose it was necessary in order to progress, but bailing on backwards
compatibility is very annoying to the customer.  It's also not especially
good for business.  If I've been buying all the AIT line drives, at least
in part because the newer ones will read tapes I wrote with the older
ones, then I have a business reason to engage in brand loyalty.  Once
that reason is gone then the next time I upgrade drives, all media
choices are back on the table again and I may choose something other than
AIT the next time.  Didn't the same thing happen with DLT between DLT8000
and SDLT?

-Mitch


Re: [off-topic] Better Backup Media

2005-06-23 Thread Mitch Collinsworth


On Thu, 23 Jun 2005, Joshua Baker-LePain wrote:


Something else I heard on another mailing list:

However AIT-4 (unlike AIT-1 til -3) appears to write fill
 bytes onto the tape if it's not fed with data quickly enough,
 thus wasting lots of capacity.

The person said they heard it somewhere and hadn't seen it confirmed.  If
true, though... yuck.


Well, maybe, maybe not.  DLT8000 did that, too.  Lots of people griped
bitterly about poor performance with their DLT4000's and 7000's.
Problem was usually that their 1-pass backup software kept starving the
write buffer and the drives had to shoe-shine in order to deal with it.
The DLT8000 had variable speed write, which meant the tape kept streaming
and the data was laid down as fast as it came in, even if that was slower
than what the tape could handle.

I always had a good chuckle at conferences listening to vendors trying
to explain this problem to all the folks griping about their expensive
tape drives that would only write at a fraction of their advertised
speed.  In general the vendors would do everything they could to avoid
pointing the blame at the expensive commercial backup software products
because they usually sold that to the customers, too.

The good news for amanda users is that when you stage your dumps to
holding disk, you eliminate the most frequent cause of the data
starvation problem, which is the backup program scouring the partition
looking for which files to backup today.  Once you have your data on
the holding disk you're unlikely to starve the tape drive and it can
stream the data at full speed onto the tape.  If it can't, you have a
h/w problem that is typically easy to fix.

-Mitch


Re: Tape Library Recommendations

2005-06-23 Thread Mitch Collinsworth


On Thu, 23 Jun 2005, Michael Loftis wrote:

LTO, DLT, S-LTO, etc all have the huge advantage of the same physical form 
factor.  So 'upgrading' a DLT library to LTO, or S-LTO is just 
adding/upgrading the tape drives in it.  DLT (sometimes called Compactape IV) 
has a long history and is a good reliable medium, with a lot of vendors 
selling drives and


Yeah that's what you'd think.  I used to run an Overland loader with
DLT4000.  The thing was rock-solid, so when the time came I tried to
swap out the drive for a DLT7000.  Nope, Overland wouldn't hear of it.
They insisted I had to start all over with a new loader.  (Yeah, much
more $$, too.)  Turned out at least part of the problem was that the
SCSI version of the 4000 loader wasn't fast enough for the 7000 drive.

-Mitch


Re: [off-topic] Better Backup Media

2005-06-23 Thread Mitch Collinsworth


On Thu, 23 Jun 2005, Joshua Baker-LePain wrote:


Indeed -- the other person referred to the AIT-4 behavior as "DLT
syndrome".  But isn't variable speed write different than writing fill
bytes?  Does DLT8000 lose capacity when not writing as fast as it can?


Yes.  Same idea.  Tape spins at constant rate, data is written as fast
as possible or else as fast as it comes in.  They probably had some
minimum below which it would stop and restart, but you'd get more on a
tape if you could keep the write buffer from starving.



Actually, with LTO3, I'm a bit worried about keeping the tape streaming
when writing from staged dumps.  Native transfer rate of LTO-3 is stated
as 288 GB/hr, which is about 76 MiB/s.  That's more than most single
spindles can handle, *especially* if you're trying to write to tape while
other dumps are coming in to the holding disk.  Looks like I'll have to do
some work on the server end to really get this thing going.


Wow, that's impressive.  First question is: what does the drive do when
you can't feed it data fast enough?  Depending on the answer you may
or may not actually care enough to worry about it.

-Mitch
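The 288 GB/hr figure quoted above converts as claimed; a quick arithmetic check (using decimal gigabytes, as tape vendors do):

```python
# Convert LTO-3's advertised native rate (288 GB/hr) to MiB/s.
gb_per_hr = 288                      # vendor figure, decimal gigabytes
bytes_per_s = gb_per_hr * 10**9 / 3600
mib_per_s = bytes_per_s / 2**20
print(round(mib_per_s, 1))           # -> 76.3, i.e. "about 76 MiB/s"
```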


Re: Amanda Statistics.

2005-06-16 Thread Mitch Collinsworth


On Thu, 16 Jun 2005, Erik P. Olsen wrote:


Below the statistics from my last backup. I assume the times are all
elapsed times, but why is amanda so wrong in estimating the times? Or is
it a bug?

STATISTICS:
 Total   Full  Daily
                      --------   --------   --------
Estimate Time (hrs:min)0:03
Run Time (hrs:min) 1:11
Dump Time (hrs:min)0:51   0:51   0:01
Output Size (meg)7692.4 7648.9   43.5
Original Size (meg) 11996.711886.5  110.2
Avg Compressed Size (%)64.1   64.4   39.4
(level:#disks ...)
Filesystems Dumped7  2  5   (1:5)
Avg Dump Rate (k/s)  2559.5 2584.4  950.4

Tape Time (hrs:min)0:49   0:49   0:00


0:03 is not the estimated time for the run, it's how long it took to
perform the estimate phase of the run.

-Mitch


Re: Levels

2005-05-12 Thread Mitch Collinsworth
On Thu, 12 May 2005, Guy Dallaire wrote:
I'm still having a hard time figuring out how levels work.
Can someone confirm to me that each time amanda is run, it will backup
a file if it has been modified during the day (or since the last run)
In other words: does amanda guarantee that if a file is modified after
a dump, it will be included in the next dump, whatever the level is
used ?
No.  Amanda guarantees it will take all data your backup program
sends it, put it onto the tape, and get it back off again, assuming
the tape is still good at that time.
Whether or not a modified file gets included in the next dump is
the responsibility of whatever backup program you tell amanda to
invoke.  Typically dump or gnutar, but folks have managed to glue
in a number of others as well.  Amanda doesn't do backups, she
only oversees the process.
-Mitch


Re: Levels

2005-05-12 Thread Mitch Collinsworth
[Please reply to amanda-users when asking questions on the list]
On Thu, 12 May 2005, Guy Dallaire wrote:
2005/5/12, Mitch Collinsworth [EMAIL PROTECTED]:
No.  Amanda guarantees it will take all data your backup program
sends it, put it onto the tape, and get it back off again, assuming
the tape is still good at that time.
Whether or not a modified file gets included in the next dump is
the responsibility of whatever backup program you tell amanda to
invoke.  Typically dump or gnutar, but folks have managed to glue
in a number of others as well.  Amanda doesn't do backups, she
only oversees the process.
-Mitch
Well...
What I meant is that, provided I have a standard dump cycle, I use
gnutar for some DLE's, and dump for other DLE's, and I run amanda at
time x. Will amanda run dump or gnu-tar in a way that if a file is
modified before time x, it will ALWAYS be included in the backup ?
ALWAYS is a very strong word.  I see others have been responding yes
to this but nevertheless the answer is still no.  Amanda makes a very
good effort to always do the right thing but there are various ways in
which things can still go wrong.  The best you can really say is that
amanda will ALWAYS try.
For example if you have way too much data to fit on the tape, sometimes
amanda will say [dumps way too big, must skip incremental dumps].
Now this is generally a very rare occurrence and typically one that is
not caused by any failing in amanda, only that you gave her more to do
than is physically possible and she had to decide SOMETHING.
Also note that there are ways to configure your amanda setup to make
this highly unlikely.  For example a tape loader with plenty of
tapes available and a configuration that tells amanda she can use
multiple tapes per run if necessary.  etc.
-Mitch
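The loader-plus-multiple-tapes setup mentioned above looks roughly like this in amanda.conf (these are standard amanda.conf parameter names; the values and changer script are illustrative for a site with a loader):

```
runtapes 2               # let a single amdump run span up to 2 tapes
tapecycle 25             # keep more tapes in rotation than the bare minimum
tpchanger "chg-zd-mtx"   # a changer script, so amanda can load tapes itself
```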


Re: Amanda and Star

2005-04-25 Thread Mitch Collinsworth
On Mon, 25 Apr 2005, Jon LaBadie wrote:
The only way I know of to get an accurate backup of windows
systems is to use a Windows-based backup program that outputs
its results into a file.  Then that file can be saved onto
tape by amanda.  Restores would similarly be an unpleasant,
two-step process.
This seems to be one place where Bacula has the advantage over
Amanda.
-Mitch


mail.vh.org is re-sending old mail to amanda-users list

2005-04-22 Thread Mitch Collinsworth
Todd,
mail.vh.org is re-sending old messages (from earlier today) to the
amanda-users mailing list.  Example below.  It should probably be
blocked from posting until this is fixed.
vh.org postmaster, please fix your system.
-Mitch
-- Forwarded message --
Return-Path: [EMAIL PROTECTED]
Received: from mercury.ccmr.cornell.edu [128.84.231.97]
by twin.ccmr.cornell.edu with POP3 (fetchmail-6.2.1)
for [EMAIL PROTECTED] (single-drop); Fri, 22 Apr 2005 17:01:01 -0400 (EDT)
Received: from guinness.omniscient.com (guinness.omniscient.com [64.134.101.78])
by mercury.ccmr.cornell.edu (8.13.1/8.12.10) with ESMTP id j3MKweY8010571
for [EMAIL PROTECTED]; Fri, 22 Apr 2005 16:58:40 -0400
Received: from guinness.omniscient.com (localhost [127.0.0.1])
by guinness.omniscient.com (8.13.3/8.13.1) with ESMTP id j3MKwcgb020690
(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
Fri, 22 Apr 2005 16:58:38 -0400 (EDT)
Received: from localhost ([EMAIL PROTECTED])
by guinness.omniscient.com (8.13.3/8.12.11/Submit) with SMTP id
j3MKwcws000871;
Fri, 22 Apr 2005 16:58:38 -0400 (EDT)
Received: by guinness.omniscient.com (bulk_mailer v1.13); Fri,
22 Apr 2005 16:47:40 -0400
Received: from guinness.omniscient.com (localhost [127.0.0.1])
by guinness.omniscient.com (8.13.3/8.13.1) with ESMTP id j3MKTGfl024158
(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
Fri, 22 Apr 2005 16:29:17 -0400 (EDT)
Received: (from [EMAIL PROTECTED])
by guinness.omniscient.com (8.13.3/8.12.11/Submit) id j3MKTFfB006444;
Fri, 22 Apr 2005 16:29:15 -0400 (EDT)
X-Authentication-Warning: guinness.omniscient.com: majordom set sender to
[EMAIL PROTECTED] using -f
Received: from sapphire.omniscient.com (sapphire.omniscient.com [64.134.101.71])
by guinness.omniscient.com (8.13.3/8.13.1) with ESMTP id j3MKTCE3005050
(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
for amanda-users@amanda.org; Fri, 22 Apr 2005 16:29:12 -0400 (EDT)
Received: from mail.vh.org (mail.vh.org [129.255.233.40])
by sapphire.omniscient.com (8.13.1/8.13.1) with SMTP id j3MKTAhG015538
for amanda-users@amanda.org; Fri, 22 Apr 2005 16:29:10 -0400 (EDT)
Received: (qmail 2652 invoked by uid 501); 22 Apr 2005 20:28:54 -
MBOX-Line: From [EMAIL PROTECTED]  Fri Apr 22 15:28:54 2005
Received: (qmail 22870 invoked by uid 18); 22 Apr 2005 19:48:55 -
Received: from [EMAIL PROTECTED] by mail by uid 131 with
qmail-scanner-1.20rc4
 (clamuko: 0.80   Clear:RC:0:.
 Processed in 0.052428 secs); 22 Apr 2005 19:48:55 -
Received: from guinness.omniscient.com (64.134.101.78)
  by mail.vh.org with SMTP; 22 Apr 2005 19:48:55 -
Received: from guinness.omniscient.com (localhost [127.0.0.1])
by guinness.omniscient.com (8.13.3/8.13.1) with ESMTP id j3MJms5R026943
(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
Fri, 22 Apr 2005 15:48:54 -0400 (EDT)
Received: from localhost ([EMAIL PROTECTED])
by guinness.omniscient.com (8.13.3/8.12.11/Submit) with SMTP id
j3MJmph5013477;
Fri, 22 Apr 2005 15:48:51 -0400 (EDT)
Received: by guinness.omniscient.com (bulk_mailer v1.13); Fri,
22 Apr 2005 15:46:55 -0400
Received: from guinness.omniscient.com (localhost [127.0.0.1])
by guinness.omniscient.com (8.13.3/8.13.1) with ESMTP id j3MJktuc020306
(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
Fri, 22 Apr 2005 15:46:55 -0400 (EDT)
Received: (from [EMAIL PROTECTED])
by guinness.omniscient.com (8.13.3/8.12.11/Submit) id j3MJktwA005561;
Fri, 22 Apr 2005 15:46:55 -0400 (EDT)
X-Authentication-Warning: guinness.omniscient.com: majordom set sender to
[EMAIL PROTECTED] using -f
Received: from sapphire.omniscient.com (sapphire.omniscient.com [64.134.101.71])
by guinness.omniscient.com (8.13.3/8.13.1) with ESMTP id j3MJkscn021387
(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
for amanda-users@amanda.org; Fri, 22 Apr 2005 15:46:54 -0400 (EDT)
Received: from loncoche.terra.com.br (loncoche.terra.com.br [200.154.55.229])
by sapphire.omniscient.com (8.13.1/8.13.1) with ESMTP id j3MJkqeK016062
for amanda-users@amanda.org; Fri, 22 Apr 2005 15:46:53 -0400 (EDT)
Received: from estero.terra.com.br (estero.terra.com.br [200.154.55.138])
by loncoche.terra.com.br (Postfix) with ESMTP id 6DA4EE78AF6
for amanda-users@amanda.org; Fri, 22 Apr 2005 16:46:46 -0300 (BRT)
X-Terra-Karma: 0%
X-Terra-Hash: 3f16c97bdda1a80776a43a2705e3fcab
Received-SPF: pass (estero.terra.com.br: domain of terra.com.br designates
200.154.55.138 as permitted sender) client-ip=200.154.55.138;
[EMAIL PROTECTED]; helo=terra.com.br;
Received: from terra.com.br (tupiza.terra.com.br [200.176.3.182])
(authenticated user vlpg)
by estero.terra.com.br (Postfix) with ESMTP id 59282728025
for amanda-users@amanda.org; Fri, 22 Apr 2005 16:46:45 -0300 (BRT)
Date: Fri, 22 Apr 2005 

RE: Amanda vs Homegrown

2005-04-21 Thread Mitch Collinsworth
On Thu, 21 Apr 2005, Mark Lidstone wrote:
It would still be worth pointing out what a huge security risk the rcp
command is, and if they insist on using their scripts at least get them
to remove the r* accounts setup stuff and use something like rsync over
an encrypted channel (why bother protecting the file on the disk if
you're going to potentially transfer it in plain text over the network).
So have you modified amanda to encrypt your network transfers?
It doesn't do that out of the box you know.
-Mitch


Re: Changer file for Compaq MSL5000 series

2005-01-20 Thread Mitch Collinsworth
On Thu, 20 Jan 2005, Jon LaBadie wrote:
On Thu, Jan 20, 2005 at 10:53:26AM -0800, DK Smith wrote:
It would be really cool if a single changer glue module was
re-written in an OOP fashion, reusing the same logic code
for every situation, but overriding the driver interface methods
for specific driver interfaces.
Ahh, you mean a changer-api to go along
with the infamous DUMPER-API?
Well... that _would_ actually be quite useful.  But I'm not holding
my breath waiting for it!  :-)
-Mitch


Re: Amanda and PostgreSQL

2005-01-12 Thread Mitch Collinsworth
On Wed, 12 Jan 2005, Germán C. Basisty wrote:
2. To use pg_dump to get a hot snapshot of the desired DB, put it on
some directory and then backup that directory with amanda.
We use pg_dump.  We just run it from cron in advance of amanda's
start time.
-Mitch
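A minimal sketch of that arrangement (times, paths, and database name are illustrative, not our actual setup):

```
# crontab fragment: dump the database before amanda's nightly run,
# into a directory listed as a DLE in the disklist.
# m  h   dom mon dow  command
 30  22   *   *   *   pg_dump -U postgres mydb | gzip > /var/backups/pgsql/mydb.sql.gz
```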

Re: Amanda and PostgreSQL

2005-01-12 Thread Mitch Collinsworth
On Thu, 13 Jan 2005, Jamie Wilkinson wrote:
So I had a crazy idea that hasn't yet been implemented, a pg dumper similar
in appearance to GNUTAR and DUMP on the client machine.  Your DLE would look
something like:
host database/table pg
where pg would be the dumptype telling amanda to use the pg dumper.
The net effect is that you'd get a hot snapshot written straight to amanda,
so you wouldn't have to worry about scheduling the dumps of each at the
right time.
If someone wants to beat me to the implementation though, please go right
ahead :-)
What you want is something called DUMPER-API.  Unfortunately it doesn't
actually exist.  It's documented but still waiting for someone to code it.
In the mean time, the standard workaround is to use the framework here:
  ftp://gandalf.cc.purdue.edu/pub/amanda/gtar-wrapper.*
This is a wrapper script that will pretend to be gnutar to amanda, but
run whatever you tell it to instead on the client.
-Mitch
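The gtar-wrapper idea can be sketched like this (the names, paths, and dispatch rule here are illustrative, not the actual gtar-wrapper.* code): amanda is pointed at the script as its "gnutar" binary, and the script picks a real backend per DLE.

```shell
#!/bin/sh
# Hypothetical gnutar-wrapper dispatcher: choose a backup backend based
# on the directory amanda asked us to dump.
REAL_TAR=/bin/tar                              # the real GNU tar
CUSTOM_DUMPER=/usr/local/bin/pg-dump-wrapper   # hypothetical custom backend

choose_backend() {
    # $1 is the directory being dumped
    case "$1" in
        /var/lib/pgsql*) echo "$CUSTOM_DUMPER" ;;
        *)               echo "$REAL_TAR" ;;
    esac
}

# A real wrapper would then:  exec "$(choose_backend "$dir")" "$@"
choose_backend /var/lib/pgsql/data
choose_backend /home
```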


Re: Backing up directories with spaces

2004-12-25 Thread Mitch Collinsworth
On Sat, 25 Dec 2004, Paul Bijnens wrote:
I am trying to backup a directory with a space in its name,
however, amanda doesn't seem to like it in the disklist.
That's a known limitation of the current implementation.
But there's an easy workaround:  create a symlink on the
client without spaces (if you can't rename the directory).

That's only an easy workaround if you are an administrator of the
client system.
If you are not then you have to go to the include-style syntax.
E.g.:
host /VNTI /Library {
nocomp-osx
include "./Application?Support/VNTI?Database"
}
The problem here is this is still buggy, as discussed on -hackers
in April and July 2004.
-Mitch


Re: amcheck not saying expecting tapeno. or a new tape

2004-11-05 Thread Mitch Collinsworth

Would people please stop cross-posting between -users and -hackers.
If your message is about using amanda, send it to -users.  If it's
about the source, send it to -hackers.

Don't mean to single out Gavin here.  This has happened several times
recently, each then multiplied by all the followups.

Thank you.

-Mitch


Re: Is there a proper way of killing a dumper?

2004-11-05 Thread Mitch Collinsworth

On Fri, 5 Nov 2004, Kevin Dalley wrote:

 I have one computer which is busy now, and is dumping *very* slowly.
 I want to kill the dumper for this computer.  Killing all the dumpers
 is even OK.  I could do a killall dumper, but it seems a bit crude.
 Is there a more polite way of killing a dumper, or just telling it to
 give up on a DLE?

Don't kill the dumper, these get re-used from one client to the next.
If you can get a shell on the slow client, go there and kill sendbackup
or dump or tar.

-Mitch


Re: auto-responder hell here

2004-09-03 Thread Mitch Collinsworth

On Fri, 3 Sep 2004, Gene Heskett wrote:

 I consider such crap as exactly that, crap, totally useless crap at
 that, usually brought about by someones overblown ego trying to
 convince the rest of the world how important they are and without
 them the world will stop.

Never ascribe to malice, that which can be explained by incompetence.
  --  Napoleon Bonaparte

Or to put it differently, most of the morons who auto-reply to list
mail these days really aren't doing it intentionally.

The ones that annoy me enough I filter to /dev/null in procmail.
Sometimes I send a nasty-gram instead.  I rarely do both to the same
sender.

I guess I'm about to find out who today's brilliant 9 are...

-Mitch


Re: sendbackup file renaming failure

2004-08-26 Thread Mitch Collinsworth

On Thu, 26 Aug 2004, Paul Bijnens wrote:

 Mitch Collinsworth wrote:

  Today one of my dumps failed in an unusual manner.
 
  After the dump finished and the index was completed, I received the
  following error:
 
  error [renaming /var/adm/amanda/gnutar-lists/usda01afs:usda01_b_.*.backup_0.new
  to /var/adm/amanda/gnutar-lists/usda01afs:usda01_b_.*.backup_0: No such file or
  directory]

 I guess the '*' in the message above is a placeholder for the real
 name?  Or is this the real filename?

 Even then, where is the '.backup' coming from?  My gnutar-lists are
 named:  hostname_disk_0  (with the last 0 indicating the level,
 and any '/' in the hostname or disk is replaced with a '_').
 There is no place for a '.*.backup' in that scheme.

The disklist entry is:
usda01  afs:usda01/b/.*.backup  nocomp-user

So it follows the format you describe.  The last 0 does indeed indicate
a level 0 dump.


 Or does that has something to do with some afs modifications, as
 the name of the host suggests?

The only modifications we made to amanda itself were 1) in selfcheck, to
not try to test accessibility of a gnutar directory name beginning with
afs, and 2) gnutar itself is replaced with a wrapper script that
examines the DLE and chooses whether to run gnutar or an AFS command to
produce the actual backup file.  .*.backup is meaningful to the wrapper
script.

This has all been working fine for a couple of years now.  There have been
no recent changes.  And despite this error on partition b, partitions a,
c, and d ran just fine on this host.


  Somehow this was sufficient for the dump to fail.  Therefore it was
  deleted from the holding disk rather than copied to tape.  [Grr..!]

 The difference between success and failure of a backup is indeed a gray
 area, instead of a sharp line.  This one is just on the boundary.
 The next dump would have trouble anyway because it wouldn't have a
 gnutar-list to base its incremental dumps on.  That would result in
 a full dump, which is indeed a good fallback in that case.

Certainly a better fallback than throwing up hands and putting nothing
on tape!  :-)


  In roughly 5 years of using amanda I have never seen this happen before.

 Me neither.

 
  This is 2.4.3b3.

 Try an upgrade to 2.4.4p3, at least on that client.

Am planning to do this globally some time 'soon'.  But it doesn't help
with why this has been working fine for many months and suddenly messed
up yesterday.

There is one possibly-interesting thing about this one partition.  It
has been growing recently.  Enough so that I have several times recently
had to bump up dtimeout just for it.  The problem there is that the
program we substituted for doing indexing of AFS dumps seems particularly
slow on large dumps containing many, many small files.  I did have to
do this just the day before this happened.  Is it somehow possible that
we ran into a corner case where it didn't timeout while the index was
being created, but got so close that the timer ran out just after it
finished and whoever checks the timer started some cleanup processing
that zapped this file?  And meanwhile whoever actually reports the timeout
error did not do so, because the dump did in fact finish successfully?
Sounds a bit crazy, but it's all I can think of so I may as well ask.  :-)

-Mitch


Re: Use of uninitialized value at /opt/amanda/sbin/amstatus line 868.

2004-08-26 Thread Mitch Collinsworth

On Tue, 17 Aug 2004, Paul Bijnens wrote:

 Ranveer Attalia wrote:

  When running the amanda backup yesterday. It still doesnt appear to have
  finished. I checked the /tmp/ambackup_Daily.log file and its giving me
  the error:
  Use of uninitialized value at /opt/amanda/sbin/amstatus line 868

 I tried very hard to reproduce the error, using the files you sent,
 but I'm unable to get any error like that.

I don't see a response to this, but will add that I've been seeing
something very similar.  When I run amstatus I frequently (but not
always!) see a few lines of this:

Use of uninitialized value in numeric ne (!=) at /usr/local/amanda/sbin/amstatus line 653.

 I'm almost out of inspiration here.
 What version of perl are you using on the amanda server?
/usr/bin/perl --version

Here it's perl v5.6.1 and amanda 2.4.3b3.  I'd been ignoring this, hoping
it would go away when I upgrade to 2.4.4whatever, but since it's come up...

-Mitch


Re: sendbackup file renaming failure

2004-08-25 Thread Mitch Collinsworth

On Wed, 25 Aug 2004, Jim Summers wrote:

 On Wed, 2004-08-25 at 19:53, Mitch Collinsworth wrote:
 
  error [renaming /var/adm/amanda/gnutar-lists/usda01afs:usda01_b_.*.backup_0.new
  to /var/adm/amanda/gnutar-lists/usda01afs:usda01_b_.*.backup_0: No such file or
  directory]

 Just a guess, but possibly the fs involved has reached 100% capacity?

Nope.

-Mitch


Re: FreeBSD backups are extremely slow?

2004-07-19 Thread Mitch Collinsworth

On Mon, 19 Jul 2004, Joe Rhett wrote:

 FYI, dumper appears to break the backup into 1gb chunks?  I don't know if
 this is relevant or not.

You mean on the holding disk?  That's normal but you can configure the
chunksize or turn it off if you wish.

  I'm seeing EXTREMELY slow backups of a moderately-sized file system on a
  FreeBSD host.  I'm a little confused by this, as everything appears to be
  working .. it's just REALLY slow...

The most common cause of this is a network duplex mismatch somewhere.
Often between the host and the switch.  Note that full---auto IS a
mismatch.  If one side is locked to full, the other side must be locked,
too.

-Mitch


Re: Rsync, Amanda's best friend

2004-07-02 Thread Mitch Collinsworth

On Fri, 2 Jul 2004, Luc Lalonde wrote:

 In case of backup server failure, I 'rsync' all my configuration files
 in /etc with the Amanda index files onto another server.   If my server
 crashes or becomes unavailable, I have instantaneaous access to my
 backup indices and configurations without mucking about and trying to
 find it on the backup tapes without an online Amanda setup.

We keep ours in our OpenAFS filesystem.  If our amanda server fails,
we stand up another, fire up cfengine on it to install it as an amanda
server, and off we go.

-Mitch


Re: amanda and OpenAFS

2004-06-25 Thread Mitch Collinsworth

On Fri, 25 Jun 2004, Frederic Medery wrote:

 I want to replace NFS by OpenAFS.
 I know that inside afs cell, root is not god anymore.

 So can I still use amanda with afs ?


Hi Frederic,

To save re-typing, here is a copy of my answer to someone who asked
this a few weeks ago.

-Mitch

-

From: Mitch Collinsworth [EMAIL PROTECTED]
Date: Tue, 8 Jun 2004 16:38:38 -0400 (EDT)
Subject: Re: amanda and OpenAFS?


On Tue, 8 Jun 2004 [EMAIL PROTECTED] wrote:

 Is there anyone backing up OpenAFS using amanda? Is there anything
 special that needs to be done, or is it just like backing up any
 other file system?

 I'd like to use gnutar if possible.


We are, and I've heard from a few other sites who also are.  Don't
have a good feel for how many yet.  OpenAFS is not just like any
other filesystem.  In general, root is nobody special to OpenAFS.
So your backup program needs to acquire appropriate privilege to
read files.  If you can do that (reauth is one way), you can just
use gnutar.  But then you'll back up the file contents without the
ACLs and volume structure needed to put everything back the way it
was before the Bad Thing that happened that caused your need to
perform a restore.

First stop:  read docs/HOWTO-AFS

This is just a pointer, but it tells you where to find some code we
put together a few years ago to back up OpenAFS with amanda more or
less right.  I say "or less" because there are still some
improvements I'd like to make sometime.


One new item, for those who've heard all this before.  Just today
we created a new mailing list for amanda-afs users.  Info can be
found here:

http://lists.ccmr.cornell.edu/mailman/listinfo/amanda-afs

-Mitch


Re: amanda and OpenAFS?

2004-06-08 Thread Mitch Collinsworth

On Tue, 8 Jun 2004 [EMAIL PROTECTED] wrote:

 Is there anyone backing up OpenAFS using amanda? Is there anything
 special that needs to be done, or is it just like backing up any
 other file system?

 I'd like to use gnutar if possible.


We are, and I've heard from a few other sites who also are.  Don't
have a good feel for how many yet.  OpenAFS is not just like any
other filesystem.  In general, root is nobody special to OpenAFS.
So your backup program needs to acquire appropriate privilege to
read files.  If you can do that (reauth is one way), you can just
use gnutar.  But then you'll back up the file contents without the
ACLs and volume structure needed to put everything back the way it
was before the Bad Thing that happened that caused your need to
perform a restore.

First stop:  read docs/HOWTO-AFS

This is just a pointer, but it tells you where to find some code we
put together a few years ago to back up OpenAFS with amanda more or
less right.  I say "or less" because there are still some
improvements I'd like to make sometime.


One new item, for those who've heard all this before.  Just today
we created a new mailing list for amanda-afs users.  Info can be
found here:

http://lists.ccmr.cornell.edu/mailman/listinfo/amanda-afs

-Mitch


Re: Managing out of the office twits

2004-06-03 Thread Mitch Collinsworth

On Thu, 3 Jun 2004, Justin Gombos wrote:

 [anti_ooo.msg file]

   My scripts have detected that you posted an out of office reply to a
   public forum.  Please control your auto-responder.

   If you are receiving this in error, I apologize; please disregard it.

At the risk of prolonging this thread even further, it does seem
incongruous for an auto-responder script aimed at mis-behaving
auto-responders to request that if it itself mis-behaves, the
recipient should please disregard.

-Mitch


Re: Remote tape

2004-04-27 Thread Mitch Collinsworth

On Tue, 27 Apr 2004, Paul Bijnens wrote:

 What you cannot do is have the amanda server drive a tapedrive on
 another host.

Agreed, in the traditional sense.  However iSCSI sounds like a
means of accomplishing this.  I say "sounds" because I haven't
actually looked into it yet.

-Mitch


Re: Amanda-induced sluggishness on Mac OS X?

2004-02-15 Thread Mitch Collinsworth

On Sun, 15 Feb 2004, Kirk Strauser wrote:

 I use Amanda to backup an iMac running Mac OS X (10.2).  The backup runs
 perfectly, but the machine is horribly unresponsive while tar is running,
 both during the estimate and dump stages.  Renice'ing the tar processes
 doesn't seem to help, so I'm guessing that it might be an IDE DMA thing,
 although I don't know my way around Mac hardware well enough to know.

 Has anyone else seen this, and hopefully found a way to lessen the impact?

Yes, we've seen it, too.  Have not (yet) dug into it to try to figure
out what's going on.  The machine we've noticed it on is a Powerbook.

-Mitch


Re: extremely varied tape write rates?

2004-01-09 Thread Mitch Collinsworth

On Fri, 9 Jan 2004, Kurt Yoder wrote:

 In the report, see that some borneo tape writes were fast, but one
 is much slower. So why is that particular one *always* slow? It
 should be just a mindless tape dump, right?

Are you using a holding disk?


Re: amanda client on MacOS

2004-01-06 Thread Mitch Collinsworth

On Tue, 6 Jan 2004, Paul Root wrote:

 it runs, the Mac times out waiting for a random local
 port.

Any firewalls involved?

-Mitch


Re: amanda client on MacOS

2004-01-06 Thread Mitch Collinsworth


On Tue, 6 Jan 2004, Paul Root wrote:

 Nope. Same subnet

Software firewall on client?



 Mitch Collinsworth wrote:

  On Tue, 6 Jan 2004, Paul Root wrote:
 
 
 it runs, the Mac times out waiting for a random local
 port.
 
 
  Any firewalls involved?
 
  -Mitch


Re: Estimates taking two hours - is this normal?

2004-01-06 Thread Mitch Collinsworth


On Tue, 6 Jan 2004, Fran Fabrizio wrote:

 I have a system and I am attempting to backup a filesystem with
 approximately 30G of data.  The estimates for a level 0 take 3000
 seconds; for a level 1, 7000 seconds.  Is two hours normal just to get
 an estimate for an incremental dump?  Is it typical to have to bump up
 the estimate timeouts to several hours?  I just want to make sure that what
 I am seeing is normal.  It seems like an awful lot of churning just to
 estimate a dump size.  This happens to be my only Solaris system being
 backed up.  On a linux system, I'm backing up an area approx. twice the
 size with no estimate timeouts.  That may or may not have any relevance,
 but I thought it was worth mentioning.

You might find estimates run faster with dump than tar.  Which on a
Sun is probably a safe thing to do, depending on what file system you
use.

-Mitch


RE: Dump Vs Tar tradeoffs (if any)

2003-12-23 Thread Mitch Collinsworth

On Tue, 23 Dec 2003 [EMAIL PROTECTED] wrote:

 Also, I've found that 'tar' seems to take longer in the estimate phase
 than do 'dump' (and 'vxdump'), but for all I know, that could be due to
 local influences (e.g., disk traffic).  Or has anyone else also found it
 to be the case that 'tar' estimates take longer than those from 'dump'?

Tar has to stat every file to find out how much is going to be backed
up.  Dump just has to look at the filesystem as a whole.  Far less
work involved.
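
A rough way to feel that difference from a shell (sizes and paths illustrative): `du` walks the tree and stats every file, the way tar's estimate must, while `df` reads a single per-filesystem summary, the way dump can.

```shell
# Build a scratch tree, then compare the two estimate strategies.
d=$(mktemp -d)
mkdir "$d/sub"
dd if=/dev/zero of="$d/sub/f" bs=1024 count=64 2>/dev/null

# tar-style: stat every file; cost grows with the number of files.
used_kb=$(du -sk "$d" | awk '{print $1}')

# dump-style: one query against the filesystem's own counters.
df -k "$d" | tail -1

echo "du saw ${used_kb} KB"
rm -rf "$d"
```

On a filesystem with millions of files the `du`-style walk can take hours while the `df`-style query stays effectively instant.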

-Mitch


Re: moved to new disk, now amanda wants to do level 0's on whole system

2003-11-14 Thread Mitch Collinsworth

On Fri, 14 Nov 2003, Eric Siegerman wrote:

 But all of those -- tar, cpio, rsync -- are kludges.  Is it just
 me, or do other people also find it ludicrous that 30+ years on,
 UNIX still doesn't have a proper copy command?

Huh?  You just showed there are enough flavors to suit just about
any taste.  The only obvious one I see missing is dd, which is
great for copying a whole partition.

-Mitch


Re: slow amanda performance on ONE system.

2003-11-04 Thread Mitch Collinsworth

On Tue, 4 Nov 2003, Gene Heskett wrote:

 Good grief Toomas!  Can you not institute a mail box size limit, about
 10 megs maybe?

Good grief indeed.  This discussion has absolutely nothing to do
with using amanda.  Please take it offline.

-Mitch


P.S.  To whoever posted the original 'one system too slow' query:
please double-check your duplex settings on the slow backing-up
machine.  Mismatched duplex is one of the most common causes of this
problem.


Re: Book: Automating UNIX and Linux Administration

2003-09-26 Thread Mitch Collinsworth

The plug you sent to another list a few minutes ago ended with:

 This ends my one-time-only plug for my book.  Thanks!


Then you posted it again here, without the one-time-only caveat!
So now I've received it twice, so far, and have to wonder how many
more lists I will have to see it on before the end of the day!  :-(

-Mitch


Re: send req failed: Message too long

2003-09-24 Thread Mitch Collinsworth

On Wed, 24 Sep 2003, Nicolas Ecarnot wrote:

 Nicolas Ecarnot wrote:
  Hi,
 
  I recently had this kind of problem with my amanda server. It has to
  backup around 120 samba clients :
 
  GETTING ESTIMATES...
  planner: time 0.371: dgram_send_addr: sendto(192.168.10.66.10080)
  failed: Message too long
  send req failed: Message too long

 I'm sorry to insist but this problem is really blocking me and I don't
 really know what to do to solve it.

 I read in the FAQ that this may be due to a UDP packet size problem.
 The suggested workaround is to shorten the names used in the directories
 to save.
 Unfortunately, I'm backing up many samba clients, and their names can't
 be all changed, and I have no way to create symbolic links or that kind
 of thing.

 In my disklist, I have 120 lines like this :
 backup.foo.com //winHost001/share nocomp-user-gnutar

 * I don't know HOW I could shorten this ?

The other way to work around this problem is to employ multiple smbclient
machines.  So run some samba clients through backup.foo.com and some
other samba clients through backup2.foo.com, etc.
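
Spelled out in disklist terms (hostnames and the dumptype name are hypothetical), the one oversized disklist becomes one modest disklist per samba-client machine:

```
# disklist on backup.foo.com -- first half of the samba hosts
backup.foo.com   //winHost001/share  nocomp-user-gnutar
backup.foo.com   //winHost060/share  nocomp-user-gnutar

# disklist on backup2.foo.com -- the rest, so each host's UDP
# request packet stays under the size limit
backup2.foo.com  //winHost061/share  nocomp-user-gnutar
backup2.foo.com  //winHost120/share  nocomp-user-gnutar
```

Each amanda client only receives the request lines for its own disklist entries, so splitting the entries across machines shrinks the per-host request packet.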

-Mitch


Re: About Linus's condemnation of Dump

2003-09-06 Thread Mitch Collinsworth

On Sat, 6 Sep 2003, Scott Phelps wrote:

 Dump was a stupid program in the first place. Leave it behind.

Linus has been bad-mouthing dump for as long as I can remember.
Linus is obviously not a sysadmin or he would have more clue than
this.


 Any comments, advice? (Does Amanda even work with xfsdump/xfsrestore?

Works fine on IRIX.  I haven't tried it on Linux, yet.  Probably
should.

-Mitch


Re: Spam is getting old...

2003-08-27 Thread Mitch Collinsworth

On Wed, 27 Aug 2003, Chris Barnes wrote:

 Changing an email address is a horrid way of dealing with spam.  Either
 personally or for a list.  *IF* it's a problem, running the email
 through a filter (my favorite is SpamAssassin) before it gets
 distributed is a MUCH better way to go.

SpamAssassin has blocked all spam/virus mail arriving through this
list for me since I started using it.  (It hasn't done as well for
non-list spam however.)  So now I don't see the spam/virus, I just
see the handful of 'you sent me a spam/virus' replies that follow
each one and are sent erroneously to the list.

If I were list manager I would either unsubscribe or block posting
from sites generating these replies.

-Mitch


Re: Spam is getting old...

2003-08-27 Thread Mitch Collinsworth

On Wed, 27 Aug 2003, Kurt Yoder wrote:

 Mitch Collinsworth said:
   If I were list manager I would either unsubscribe or block posting
  from sites generating these replies.

 So who *is* the list manager who would be in charge of doing this
 anyway?

I believe it's Todd Kover.  Also note that I said If I were list
manager.  I'm not, Todd is.  So it's his decision what to implement
and what not.  Not mine or anyone else's.

-Mitch


Re: accessing data from tape on fresh amanda system (disaster recover)

2003-08-14 Thread Mitch Collinsworth

On Mon, 11 Aug 2003, Jon LaBadie wrote:

 On Mon, Aug 11, 2003 at 11:15:08AM -0400, Mitch Collinsworth wrote:
 
  There's also amtoc.  You can run it at the end of each amdump run.
  I like to then print its output to paper and save in a folder.

 When did that show up ???  It wasn't there the last time I looked :)

Yah, I hate it when that happens, too.  :-)

From the comments in the script:

# HISTORY
# 1.0 19??-??-?? [EMAIL PROTECTED]
#   don't remember :-)
# 2.0 1996-??-?? [EMAIL PROTECTED]
#   amanda 2.2.6 support
# 3.0 1999-02-17 [EMAIL PROTECTED]
#   major rewrite, incompatible with release 2.0, amanda 2.4 support


So it seems no one quite knows, but it wasn't recently.

-Mitch


Re: accessing data from tape on fresh amanda system (disaster recover)

2003-08-12 Thread Mitch Collinsworth

On Mon, 11 Aug 2003, Jon LaBadie wrote:

 On Mon, Aug 11, 2003 at 08:34:41AM -0600, Steven J. Backus wrote:
   Jon LaBadie [EMAIL PROTECTED] writes:
 
   It helps greatly if there is a TOC of each tape available.
 
  Is there a slick way to create these rather than just print out the
  nightly email?

 I just set lbl-templ in my tapetype definition.  In my case I
 use the 3hole.ps template provided with the distribution and
 it prints out on my default printer.

There's also amtoc.  You can run it at the end of each amdump run.
I like to then print its output to paper and save in a folder.


Re: sdlt versus DLT library

2003-07-15 Thread Mitch Collinsworth

On Tue, 15 Jul 2003, Kurt Yoder wrote:

 AFAIK it will hit end of tape and ask for another tape. However,
 there is no way to flush a single dump image to multiple tapes. So
 if you have a 20 GB dump image (even if it's been split up into
 pieces due to the chunksize parameter) and a 10 GB tape, there is no
 way to put the dump image on that tape.


Does this strike anyone besides me as dumb?  Yes, I've known these
details separately for years, but when they are put together in
the same paragraph it jumps out and says "Hey, the dump has already
been split into manageable-sized pieces.  All we need is to modify
taper to put them on multiple tapes and voila, we can span tapes!"

Of course then we'd need to do some fiddling with amrestore to be
able to retrieve all the chunks during a restore...

-Mitch


Re: OT vacation?

2003-07-08 Thread Mitch Collinsworth

On Tue, 8 Jul 2003, Jeroen Heijungs wrote:

 Do I overlook something, or isn't there a way to temporarily disable delivery of 
 mail during holidays?
 Most lists do have that possibility, but I cannot find it, it seems I have to 
 unsubscribe and subscribe again.

What's the difference?

-Mitch


Re: Help compiling on OS X/Darwin

2003-06-25 Thread Mitch Collinsworth

One of my staff encountered the same problem with 2.4.4.  He was able
to successfully compile 2.4.3 and we've done some successful dump and
restore tests with that.  I was going to look into it further but
haven't so far.

-Mitch


On Wed, 25 Jun 2003, Ken Simpson wrote:

 Hi, I am a newbie to UN*X, and have downloaded the 2.4.4 sources and
 read the notes on compiling on OS X and have searched the lists.

 However I can't get it to compile on OS X Jag. There are warnings in
 the configure and when I compile I get recursive errors in Amanda.h.
 Can anyone point me to a FAQ or a help page please? Or offer any
 other advice?

 Thanks
 --
 Regards
 Ken Simpson
 .



Re: Using only one dumper

2003-04-06 Thread Mitch Collinsworth

On Sun, 6 Apr 2003, Brandon D. Valentine wrote:

 You need to set maxdumps to something greater than 1 in your
 amanda.conf.  It defaults to '1'.  See amanda(8) for details.

Urg.  Yes, maxdumps, not inparallel.  Somebody kick me for not checking
my facts before posting.

-Mitch


RE: Cygwin client - advantages?

2003-03-23 Thread Mitch Collinsworth

On Sun, 23 Mar 2003, Bort, Paul wrote:

 1. Client-side compression
 2. Better chance of backing up ACLs (using special tar)
 3. Works on machines that have all shares disabled (I have seen disabling
 shares recommended for some servers, especially SQL.)
 4. More straightforward restore
 5. Doesn't require SAMBA on AMANDA Server

I haven't used it yet here, but I suspect another advantage not listed
above is:

6. Can do dumps at higher levels than 0 and 1.

Now my question: What special tar does one use to get the ACLs?

-Mitch


Re: Pornography

2003-03-20 Thread Mitch Collinsworth

Have we reached the point yet where the spam discussion has generated
more list messages today than all the spam that's crossed it in the
past year?

-Mitch


Re: ACLs

2003-03-10 Thread Mitch Collinsworth

On Mon, 10 Mar 2003, Adam Smith wrote:

 On FreeBSD 5.0 with UFS2 + ACLs, what is my best method for backing up my
 ACLs along with my files?

 I am only experimenting with Amanda at this point, but it seems to use the
 native tar utility, however tar does not support the backing up of ACLs.
 Can anyone show me what I need to do?

I don't have a 5.0 system to look at, but looking briefly at the CVS
tree, it appears their ACL system is designed to not require special
consideration.  There seems to be more explanation of the UFS1
implementation, which requires more manual setup, than the UFS2
implementation, which is natively available.  Do your filesystems
have a special directory at their root?  Possibly .attribute ?
(This is the name documented for UFS1.)  If this is there then this
is where your ACLs are stored and if you're backing up those dirs then
you're getting the ACLs.

-Mitch


Re: NAK: amandad busy

2003-02-11 Thread Mitch Collinsworth

On Tue, 11 Feb 2003, Joshua Baker-LePain wrote:

 On Mon, 10 Feb 2003 at 5:06pm, justin m. clayton wrote

  I've been receiving this error on some, but not always all, of my hosts
  during amcheck. What could be causing this issue? Which logs are most
  likely to be housing the magic info I need to solve this?

 If there's already an amandad running (i.e. one from a previous run that
 never died), you'll get this error.  Look in /tmp/amanda/amandad*debug.
 Also look at the output of 'ps' when you get the error.

The other way this can happen is if you have multiple names in your
disklist that refer to the same client machine.  Amanda will consider
them unique due to their names being different and try to contact
each name separately, but the amanda client is not multi-threaded and
will NAK the 2nd connection attempt if the 1st is still running.

This should be a FAQ if it isn't already.
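
In disklist terms (hostnames and dumptype hypothetical), the failing and working patterns look like this:

```
# BAD: two spellings of the same physical client.  The server treats
# them as distinct hosts and contacts each separately; the second
# request gets "amandad busy" while the first is still running.
client-a.foo.com  /home  comp-user-tar
client-a          /var   comp-user-tar

# GOOD: one canonical name per client, so all of its filesystems go
# through a single connection.
client-a.foo.com  /home  comp-user-tar
client-a.foo.com  /var   comp-user-tar
```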

-Mitch



RE: speed problem

2003-02-07 Thread Mitch Collinsworth

Hi David,

Sun hardware is really not my area of expertise, but I'm sure there are
others on amanda-users who can answer this.  If your Sun boxes will only
do 10 Mbps/half-duplex then just make sure your switch ports are set to
either auto/auto or 10/half and you'll be fine.

-Mitch


On Fri, 7 Feb 2003 [EMAIL PROTECTED] wrote:

 Mitch.
 Here is the version of my Sun OS:
 SunOS gitpocs02 5.7 Generic_106541-08 sun4u sparc SUNW,Ultra-5_10
 The /dev/hme is FEPS Ethernet Driver  v1.115
 The host seems too old to support 100/full-duplex.
 Do you agree?

 Thanks!

 David

 -Original Message-
 From: Mitch Collinsworth [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, February 06, 2003 7:18 PM
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: speed problem



 On Thu, 6 Feb 2003 [EMAIL PROTECTED] wrote:

  My tape server is a Linux host (Redhat 7.3).
  My clients are either Linux hosts or Sun Solaris hosts.
  All client hosts are on the same domain.
  When I run 'amdump' to backup Linux hosts, the speed is pretty good.
  When I run 'amdump' to backup Sun Solaris hosts, the speed is extremely
  slow.
  It seems not a network or hardware issue.

 Seems?  You don't sound particularly sure on this point...  In my
 experience one thing that is a frequent cause of deadly slow network
 backups is a duplex mismatch between a backup client and the ethernet
 switch it connects to.  I've seen systems that the user had been happily
 using without complaint for weeks or months before asking for backups.
 When their backup went painfully slow I'd check their network settings
 and invariably find a duplex mismatch.

 Another cause of slowness can be doing client compression, especially
 client best on slow hardware.  How old/slow are your Solaris boxen?

 -Mitch




Re: speed problem

2003-02-06 Thread Mitch Collinsworth

On Thu, 6 Feb 2003 [EMAIL PROTECTED] wrote:

 My tape server is a Linux host (Redhat 7.3).
 My clients are either Linux hosts or Sun Solaris hosts.
 All client hosts are on the same domain.
 When I run 'amdump' to backup Linux hosts, the speed is pretty good.
 When I run 'amdump' to backup Sun Solaris hosts, the speed is extremely
 slow.
 It seems not a network or hardware issue.

Seems?  You don't sound particularly sure on this point...  In my
experience one thing that is a frequent cause of deadly slow network
backups is a duplex mismatch between a backup client and the ethernet
switch it connects to.  I've seen systems that the user had been happily
using without complaint for weeks or months before asking for backups.
When their backup went painfully slow I'd check their network settings
and invariably find a duplex mismatch.

Another cause of slowness can be doing client compression, especially
client best on slow hardware.  How old/slow are your Solaris boxen?

-Mitch



Re: columm widths in daily mail report

2003-01-30 Thread Mitch Collinsworth

On Thu, 30 Jan 2003, Joshua Baker-LePain wrote:

 On Thu, 30 Jan 2003 at 12:47pm, Don Carlton wrote

  Has anyone changed the mail report to produce html?

 Oh, the horror!  ;)

Agreed if sent by e-mail,  but if it were instead dumped into a web-served
directory...

-Mitch



Re: TSM

2003-01-15 Thread Mitch Collinsworth

On Wed, 15 Jan 2003, Aline wrote:

 Is TSM free?

No.  It's an IBM product...

-Mitch



Re: advantages of amanda over ADSM or other backup utilities?

2003-01-13 Thread Mitch Collinsworth

  May I know some of the reasons why AMANDA edges over other traditional
  Unix backup utilities.
 
  stable
  does what it claims
  source available
  well supported
  good, unique scheduling module
  networked
  scales well from a single system to moderatly large installations

 Also that it uses native utilities for the backup images, so you
 can do a restore without having to reinstall Amanda first.

This one is more important than it might first appear.  In ADSM if you
lose ADSM's database or it gets corrupted and you can't restore it, you
can't restore anything else, even if the tapes your backups are on are
perfectly fine.  With amanda you can lose everything and still be able
to restore from a tape with standard unix tools.
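
A sketch of that no-Amanda recovery path, emulating a tape file with a scratch file: an Amanda tape file is a 32 KiB header followed by a native dump image (faked below with zeros plus a tar archive; on a real tape the input would be a device such as /dev/nst0, and a compressed image would also need gzip in the pipe).

```shell
# Build a fake Amanda tape file: 32 KiB header + plain tar image.
d=$(mktemp -d); cd "$d"
echo hello > file.txt
tar cf image.tar file.txt
{ dd if=/dev/zero bs=32k count=1; cat image.tar; } > tapefile 2>/dev/null

# Recovery with nothing but standard tools: skip the 32 KiB header
# and hand the rest straight to tar.
dd if=tapefile bs=32k skip=1 2>/dev/null | tar tf -
```

The same `dd | tar` pipeline is what you would run against the tape device itself if the Amanda server were a smoking crater.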

Secondly, and I'm no ADSM/TSM expert, but the folks who run it here aren't
yet offering OS X backups.  I'm not sure if that's because the product
doesn't support it yet or because of some other problem.  Just this week
someone reported here getting amanda to do OS X backups ok.

But it should be noted that there are advantages of ADSM/TSM as well.
One is that the client can request a backup asynchronously.  Handy for
e.g. the laptop user who is only connected sporadically.  People have
hacked up various workarounds for this for amanda, but it doesn't do it
out of the box, yet.

Another is that ADSM/TSM recognizes a mobile machine whose IP is changing.
If you backup a laptop at work, then take it home or wherever and request
a fresh backup, it does the right thing.  Amanda would require a bit of
work to do this.

Another is their "incrementals forever" scheme.  I've argued this one both
ways, but it can be handy in some situations.  ADSM/TSM takes one full
dump when the client is first backed up and then does only incrementals
from there on.  They can do this because they backup files rather than
filesystems, and keep track of everything in the previously-mentioned
database.  The big advantage to this is when you want to put a machine
behind a slow, overcommitted, or expensive network link.  You could for
example do a full dump on-site first, then move the machine to the remote
location with the slow link and it will only need to do incrementals from
there on.

-Mitch



Re: amandad busy

2002-12-30 Thread Mitch Collinsworth

On Mon, 30 Dec 2002, Adnan Olia wrote:

 I was just wondering why would amanda send a message amandad busy.  Do I
 need to check before running amdump that would solve my problem??

Typically this means that a previous run has not yet finished.  Maybe
something got hung?  Does ps show any amanda processes running?

Another possibility is that you listed multiple filesystems from the
same client with different hostnames in disklist.  This would cause
amanda server to view them as different clients and attempt a separate
connection to each.  The second connection will find amandad already
busy from the first connection.

-Mitch




Re: Problem with initial install of amanda on SCO openserver

2002-12-30 Thread Mitch Collinsworth

On Mon, 30 Dec 2002, Josh More wrote:

 FAILURE AND STRANGE DUMP SUMMARY:
   backupmast /home RESULTS MISSING

Firstly, before running amdump did you run amcheck?

If yes, what do you see in /tmp/amanda?

-Mitch




Re: Still get No index records...

2002-12-30 Thread Mitch Collinsworth

On Mon, 30 Dec 2002, John Oliver wrote:

 501 No index records for host: backup.indyme.local. Invalid?
 Trying backup.indyme.local ...
 501 No index records for host: backup.indyme.local. Invalid?
 Trying backup ...
 501 No index records for host: backup. Invalid?

What did you use for the hostname for this client in disklist?

-Mitch




Re: COMPAQ TSL-9000 DAT autoloader device

2002-12-01 Thread Mitch Collinsworth

On Sat, 23 Nov 2002, Jon LaBadie wrote:

 Often the same device is sold under many brand name plates.  I don't think
 Sun manufactures any tape drives and Sears doesn't make any dishwashers.
 They just relabel/repackage others' products.

 And often a library/changer sold by one company uses a standard tape drive
 from another company.  All tapetype cares about is the drive, not the changer.

Yes.  But often that standard tape drive has custom firmware installed
for use in the library/changer.  Which makes it non-standard in terms
of ordering and pricing.

-Mitch




Re: use --build, --host, --target

2002-09-25 Thread Mitch Collinsworth


On Tue, 24 Sep 2002, John Dalbec wrote:

 Hadad wrote:

   [root@localhost amanda-2.4.3b4]# ./configure --disable-libtool
 --without-client --with-user=amanda and --with-group=amanda

 Take out the word and here and you should be OK.  That is,
 ./configure --disable-libtool --without-client --with-user=amanda \
  --with-group=amanda

I think he also doesn't want --without-client.  Didn't he say he's
going to back up just this one machine?  It will need the client
pieces installed in order for that to work.

-Mitch




Re: backing up commercial apps

2002-09-21 Thread Mitch Collinsworth


On Sat, 21 Sep 2002, Neil wrote:

 What is amanda's approach to backup commercial software like Microsoft
 Exchange 5.5 mail server? It's because in Exchange, they have this public

Jon's reply is a good description of amanda itself.  Haven't checked
into it yet but I've heard there's another project at sourceforge
that's working on a native windows client for amanda.  It may well be
better than doing the samba thing.

 Will there be any future path that Amanda is looking into? Or is Amanda is
 just really looking into perfecting back up of unix filesystem? I also heard
 that backing up the registry of Windows OSes is a bit crappy at the moment.

Well as Jon pointed out amanda isn't perfecting any filesystem's backup.
That's the filesystem maintainer's job.  Amanda just wants to make it
easier for you to run backups so you can spend more time doing something
else.  Those of us wanting to backup fs's and db's that amanda doesn't
already do are building bridges between amanda's architecture and the
backup tools used by the things we want to backup.

Registry backup is a challenge for most windows backup schemes.  There
is a tool somewhere (sorry, don't recall the name) that will snapshot
the registry into a quiescent file.  If you want to get a good backup
of the registry, find this tool and schedule it to run before your
backup starts.  Then backup the registry dump file.  This is pretty much
the same scheme used to backup most any database regardless of OS.

-Mitch




Re: Deja vu all over again

2002-09-18 Thread Mitch Collinsworth


Here's the relevent section of headers:

Received: from pwd.reeusda.gov ([192.73.224.125])
by surly.omniscient.com (8.11.6/8.11.6) with ESMTP id g8IImln797017
for [EMAIL PROTECTED]; Wed, 18 Sep 2002 14:48:47 -0400 (EDT)
Received: from reeusda.gov - 192.73.224.2 by pwd.reeusda.gov  with
Microsoft SMTPSVC(5.5.1774.114.11);
 Wed, 18 Sep 2002 14:46:22 -0400
Received: (qmail 17216 invoked by uid 0); 18 Sep 2002 18:32:14 -
Received: from guiness.omniscient.com (64.134.101.78)
  by maat.reeusda.gov with SMTP; 11 Sep 2002 18:30:49 -

-Mitch

On Wed, 18 Sep 2002, Frank Smith wrote:

 What's up with last week's postings getting re-injected into the list?

 Frank

 --
 Frank Smith[EMAIL PROTECTED]
 Systems Administrator Voice: 512-374-4673
 Hoover's Online Fax: 512-374-4501






Re: Configuration help?

2002-08-16 Thread Mitch Collinsworth


On Fri, 16 Aug 2002, Joshua Baker-LePain wrote:

 Tread lightly -- thar be dragons here.  :)  They both have their pros and
 cons, and it can be a matter of deep-seated religious belief for people.
 FS specific dump programs can back up things that tar doesn't know about
 (e.g. ACLs), and can sometimes be faster.  But they're limited to
 partitions only, and require that the recovery machine have them
 installed.  Tar can do subdirectories and doesn't care about OS/FS (and
 thus you can recover on just about any machine).

 It's all a matter of choice.

Basically I agree with Joshua, with one addition.  For Linux systems,
stick with tar.  dump on Linux is, well, not known for being robust.

-Mitch




Re: Holding Disk Question

2002-08-15 Thread Mitch Collinsworth


On Thu, 15 Aug 2002, Kevin Passey wrote:

 What is the best size for a holding disk -- is it a "how long is a piece
 of string" question, or are there some optimal values?

It's mostly a tuning thing: it's up to you to decide what's best for you.

That said, I prefer my setups to have as a minimum, enough to hold any
single night's backup in its entirety.  Then if something bad happens
with the tape drive/media, the backups can run to completion spooled to
the holding disk(s) and in the morning I can solve the tape problem and
flush the holding disk(s) to tape then.  Also, having enough to hold all
of a single run allows you to take the greatest advantage of dumping
parallelism, thus getting your client backups finished as quickly as
possible.

Further improvements on this theme are: 1) enough holding disk to hold an
entire weekend's backups, and 2) enough holding disk to hold an entire
[insert length of your longest anticipated family vacation here]'s backups.
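
Back-of-the-envelope arithmetic for that sizing (the nightly total below is hypothetical; substitute the sum of your own disklist):

```shell
nightly_gb=120                   # sum of one night's dumps (hypothetical)

minimum=$nightly_gb              # survive a single bad-tape night
weekend=$((nightly_gb * 3))      # Fri + Sat + Sun runs unattended
vacation=$((nightly_gb * 14))    # two weeks away from the tape drive

echo "minimum ${minimum} GB, weekend ${weekend} GB, vacation ${vacation} GB"
```

This overestimates slightly, since most runs are a mix of small incrementals and a few fulls rather than fourteen identical nights, but disk is cheap enough that rounding up is the safe direction.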

My motto:  Disk is cheap, don't skimp on holding disk.

-Mitch




Re: Holding Disk Question

2002-08-15 Thread Mitch Collinsworth


On Thu, 15 Aug 2002, John Koenig wrote:

 Yup... after I deployed a 181 GB drive for our holding disk, I saw
 somewhat different behavior in amstatus... though I would have
 expected the total time of the backups to drop (compared to a
 relatively small 25 GB holding space I used before) they did not... I
 conclude this was because the entire run is tape i/o bound... Holding
 disk usage on the runs with this disk were about 90% capacity... so
 the clients were not taking so long to complete their data
 transfers...

 This would seem to indicate I need to double this amount of space to
 hold 2 days of backups, currently... I'd like to have a week's worth,
 actually in case the tape goes South and replacement is not easy
 and quick.

 So does the chunksize parameter affect performance in any way?

No.  Taping of a dump spooled to holding disk does not begin until all
chunks are received.

The best way to figure out where your bottleneck is, and which
configuration changes will give you the greatest return, is to run
amplot.  It will give you a graphical representation of what's
happening when.  Very revealing.

-Mitch




Re: Holding Disk Question

2002-08-15 Thread Mitch Collinsworth


On Thu, 15 Aug 2002, Gene Heskett wrote:

 On Thursday 15 August 2002 14:49, John Koenig wrote:

 So does the chunksize parameter affect performance in any way?

 Only in that it prevents troubles with a filesystem that can't
 handle large files.  There was, at the time amanda was first
 deployed, a 2 GB limit on file size in many of its various
 platforms' filesystems.  Many of those limits are now historical,
 but you'd hate to find that out by doing a recovery and having it
 blow up because a 20 GB tape was all one big file.

A 20 GB dump _is_ all one big file.  Chunking is only implemented
on the holding disk during dumping.  The chunks are merged back
into a single file when spooled from holding disk to tape.  When
you do a recovery what comes back is a single large file.  If you
want to recover the whole dump from tape to your holding disk
before pulling out the files that need to be restored, then you'll
have to split it up by hand in the process.
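If you do need to pull a whole dump image off tape and carve it back into
chunk-sized pieces by hand, the usual tools are dd and split.  A hedged
sketch: the device name, tape file number, and chunk size are all
assumptions.  The commented lines are the tape side; the live lines below
demonstrate the same split/reassemble round trip on an ordinary file:

```shell
# Tape side (assumptions: /dev/nst0, dump is the 4th tape file, 2 GB chunks):
#   mt -f /dev/nst0 rewind
#   mt -f /dev/nst0 fsf 3
#   dd if=/dev/nst0 bs=32k | split -b 2000m - /holding/restore/chunk.
#
# The same split/reassemble round trip, demonstrated on an ordinary file:
tmpdir=$(mktemp -d)
head -c 100000 /dev/zero > "$tmpdir/dump.img"        # stand-in for a dump image
split -b 30000 "$tmpdir/dump.img" "$tmpdir/chunk."   # carve into ~30 KB chunks
cat "$tmpdir"/chunk.* > "$tmpdir/rejoined.img"       # glob order reassembles them
if cmp -s "$tmpdir/dump.img" "$tmpdir/rejoined.img"; then status=ok; else status=fail; fi
echo "round trip: $status"
rm -rf "$tmpdir"
```

Note that split's output names (chunk.aa, chunk.ab, ...) sort in the order
the pieces were written, so a shell glob reassembles them correctly.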

-Mitch




Re: No end to problems

2002-08-04 Thread Mitch Collinsworth


On Sat, 3 Aug 2002, Paul G. Allen wrote:

 [root@ista root]# amtape csd show
 amtape: scanning all 12 slots in tape-changer rack:
 slot 2: reading label: Input/output error
 slot 3: reading label: Input/output error
 slot 4: reading label: Input/output error
 slot 5: reading label: Input/output error
 slot 6: reading label: Input/output error
 slot 7: reading label: Input/output error
 slot 8: reading label: Input/output error
 slot 9: reading label: Input/output error
 slot 10: reading label: Input/output error
 slot 11: reading label: Input/output error
 slot 0: reading label: Input/output error

One common problem is that some changers will return from a tape
load request before the tape is fully loaded and the tape drive is
ready to go to work.  In these cases it becomes necessary to add
some delay time between the tape load command and subsequent tape
drive use commands, or else do a sleep/retry/timeout sequence.

Here's what worked for me a few years back with a DLT4000 changer:

# see if tape drive is ready.  if not, sleep a while and check again
$remaining = $MAXDELAY;
while (!open(TAPE, $TAPE) && $remaining > 0) {
    $remaining -= sleep 5;
}
if ($remaining <= 0) {
    print "$0: drive $TAPE timed out, not responding\n";
    exit 2;
}

I did this just after loading a tape, so any following commands would
not have to worry about it.  For that changer it took I think close to
a minute for the drive to become ready.

More recently, with an AIT2 changer I found the open() call here didn't
work, so I replaced it with a call to "mt status" and watched the output
for "General status bits on".
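Either readiness test plugs into the same sleep/retry/timeout skeleton.  A
minimal shell sketch under stated assumptions: the device name and the
"General status bits on" string come from GNU mt on Linux, and other mt
implementations word their status output differently.

```shell
# wait_ready CHECK_CMD MAX_SECONDS: run CHECK_CMD every 5 seconds until it
# succeeds or the time budget runs out; returns 0 on ready, 1 on timeout.
wait_ready() {
    cmd=$1
    remaining=$2
    while [ "$remaining" -gt 0 ]; do
        if eval "$cmd"; then
            return 0
        fi
        sleep 5
        remaining=$((remaining - 5))
    done
    return 1
}

# Real use just after a tape load (assumed device and GNU mt wording):
#   wait_ready "mt -f /dev/nst0 status 2>/dev/null | grep -q 'General status bits on'" 120

# Self-contained demonstration with a check that succeeds immediately:
wait_ready true 30 && echo "drive ready"
```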

Whatever works.  :-)

-Mitch




Re: amanda report

2002-07-31 Thread Mitch Collinsworth


Hmm...  Looks like you didn't have a (usable) tape in the drive.
Were the level 0 dumps of these new disks bigger than would fit on
your holding disk(s)?

-Mitch


On Wed, 31 Jul 2002, Michael P. Blinn wrote:

   A few questions about my report. I have a cron set to force level 0 dumps
 a minute or two before the actual backup goes into place so that I can get a
 full level 0 on each tape.

 I understand it can't switch to an incremental dump because it was forced
 to be a level 0 dump, right?

 But why would the entire backup fail for this one machine? It is on and
 available!

 Thanks in advance,
   -Michael Blinn


  FAILURE AND STRANGE DUMP SUMMARY:
mail   /configs lev 0 FAILED [can't switch to incremental dump]
mail   /var/lib/mysql lev 0 FAILED [can't switch to incremental
 dump]
mail   /usr/local/apache lev 0 FAILED [can't switch to incremental
 dump]
 
 
  STATISTICS:
Total   Full  Daily
        
  Estimate Time (hrs:min)0:00
  Run Time (hrs:min) 0:12
  Dump Time (hrs:min)0:12   0:00   0:12
  Output Size (meg) 158.60.0  158.6
  Original Size (meg)   607.40.0  607.4
  Avg Compressed Size (%)26.1--26.1   (level:#disks ...)
  Filesystems Dumped2  0  2   (1:2)
  Avg Dump Rate (k/s)   221.6--   221.6
 
  Tape Time (hrs:min)0:00   0:00   0:00
  Tape Size (meg) 0.00.00.0
  Tape Used (%)   0.00.00.0
  Filesystems Taped 0  0  0
  Avg Tp Write Rate (k/s) -- -- --
 
 
  NOTES:
planner: Adding new disk mail:/usr/local/apache.
planner: Adding new disk mail:/var/lib/mysql.
planner: Adding new disk mail:/configs.
planner: Last full dump of localhost://ntserver/ppidocs on tape daily5
 overwritten in 2 runs.
 
 
  DUMP SUMMARY:
   DUMPER STATSTAPER STATS
  HOSTNAME DISKL ORIG-KB OUT-KB COMP% MMM:SS  KB/s MMM:SS  KB/s
  ---------------------------------------------------------------------------
  localhost-sinesswork 1   19320   3968  20.5   0:54  73.7   N/A   N/A
  localhost-er/ppidocs 1  602623 158400  26.3  11:19 233.3   N/A   N/A
  mail /configs0 FAILED ---
  mail -cal/apache 0 FAILED ---
  mail -/lib/mysql 0 FAILED ---
 
  (brought to you by Amanda version 2.4.2p2)






Re: Survey of Changer Users: After amdump returns, where is tape?

2002-07-21 Thread Mitch Collinsworth


On Sun, 21 Jul 2002, Gene Heskett wrote:

 On Sunday 21 July 2002 02:44, Mitch Collinsworth wrote:
 You asked what amanda does with changers.  I believe when amdump
  completes the tape is rewound but not ejected.

 It's not even rewound here Mitch.  Since amanda uses the
 non-rewinding device, the only time a rewind command is issued is
 when it wants to verify it's got the right tape to write to.  amcheck
 does this, and amdump does at startup, but not again.

Sorry.  I should be more careful about what I believe at 2:30 AM.


 I've used a number of different tape types and not seen any
  problem that could be traced to leaving a tape in a drive.  But
  Exabyte is a drive I've had very little experience with.

 I can verify that too, I've had zilch trouble from leaving the tape
 in the drive, using a Seagate 4586np changer with a 4 tape
 magazine.  FWIW, once one of these changers is loaded, I found that
 an 'mt -f /dev/nst0 rewoffl' will in fact cause it to load the next
 tape after that one has been ejected.

Interesting.  I've seen different behavior from different changers,
but I haven't seen this.  Is this in gravity mode?

-Mitch




Re: Hardware to backup upto 60-80 GB

2002-07-19 Thread Mitch Collinsworth


On Fri, 19 Jul 2002, Marcus Schopen wrote:

 anyone knows a good Streamer to backup 60-80 GB with Amanda?

AIT-2, AIT-3, DLT7000, DLT8000, LTO

-Mitch




Re: Times from amreport

2002-07-19 Thread Mitch Collinsworth


On Fri, 19 Jul 2002, Ulrik Sandberg wrote:

 Estimate Time (hrs:min)1:25
 Run Time (hrs:min) 4:02
 Dump Time (hrs:min)3:07   2:38   0:30
 Tape Time (hrs:min)0:43   0:42   0:02

 Does this mean:

 a) Total run time (estimate, dump and tape): 4:02

 b) Total run time (estimate, dump and tape): 5:27 (4:02 + 1:25)

Total run time is 4:02, as stated.  The numbers don't add up because
there is parallelism to be had: several dumps can be running
simultaneously to the holding disk, while another dump is simultaneously
being spooled from holding disk to tape.
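The degree of dumping parallelism is tunable in amanda.conf.  A hedged
fragment, where the numbers are illustrative assumptions rather than
recommendations:

```
inparallel 4        # up to 4 dumpers running at once
maxdumps 1          # simultaneous dumps per client (raise with care)
dumporder "sssS"    # one priority letter per dumper (s=smallest first)
```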

-Mitch




Re: Windows-XP

2002-07-18 Thread Mitch Collinsworth


On Thu, 18 Jul 2002, Gene Heskett wrote:

 On Thursday 18 July 2002 12:33, Marc Mengel wrote:

 Amanda-hackers -- shouldn't we have a link to their project on the
 main Amanda page, next to the SAMBA links?
 
 Marc

 Much as we'd like to be a bit political here, this being primarily
 a *x type list, I have to agree with Marc.  Only by inviting the
 windows users in can we make it 100% compatible on a faster
 timetable, and widen the userbase of amanda.

I strongly agree.  Good tape drives and tapes are expensive.  There
is a strong incentive to find one backup system that will handle all
your backup needs.  The samba stuff sorta-kinda works, but having a
good client that runs directly on M$ os's will enable a lot more folks
to choose amanda.

-Mitch




Re: amanda in an Andrew environment?

2002-07-15 Thread Mitch Collinsworth


We've been working on amanda backups of AFS here, but not using
AFS's backup system.  We found that too painful.  We ended up building
our own dump program that wraps around vos dump.

It's often said that DUMPER-API is coming, but I've seen nothing to
suggest there's been any recent progress.  Some preliminary work was
done on it a while back but I think it stopped.

I know very little about Coda but I don't see a need for an AFS
backup that produces BSD dump-format output.  We also wanted to be
able to restore AFS dumps into non-AFS filesystems, and we were able
to build a restore program that does this.  It's just a matter of
being able to decode the vos dump format.

-Mitch


On 11 Jul 2002, Greg Troxel wrote:

 Search the archives and google.  Previous posters have mentioned
 modified amanda versions that incorporate the afs dump tool.  I am
 only familiar with Coda (www.coda.cs.cmu.edu), which has its own
 backup program.  Basically the idea is to use amanda to transport and
 store the bytestream from the native 'dump' version for the non-ufs
 filesystem.

 For various reasons, I use gnu tar to back up Coda.  This loses the
 atomic snapshot that the 'real' backup does, and doesn't back up coda
 metadata (acls), but it works well enough.  With the coming 'DUMPER'
 API, it should be possible to hook in native coda and afs clone/dump
 programs.  I believe the coda folks have a modified amanda that does
 Coda.

 <rant> I really wish that the coda and afs backup programs would
 produce a BSD dump-format stream, with the files as files, and
 metadata as files with some funny names (e.g. lots of __, or ^A^F^S),
 so that one could restore them either on the coda/afs filesystems, or
 read them anywhere else.  But this is really orthogonal to amanda
 running sendbackup-coda.c. </rant>

 Greg Troxel [EMAIL PROTECTED]





Re: New Amanda release (off-topic)

2002-07-08 Thread Mitch Collinsworth


On Mon, 8 Jul 2002, Jonathan R. Johnson wrote:

 *  * * * New Release of Amanda * * *
 *
 * Amanda Ruth Johnson
 *
 *   Born June 24, 2002                    24 June, 2002
 *   8 lbs, 2 oz      (translated out      3.7 kg
 *   20 in             of American)        50.8 cm

Congratulations!  And born on my birthday, too!  :-)

-Mitch




Re: syncronize backups

2002-07-03 Thread Mitch Collinsworth


On Wed, 3 Jul 2002, Scaglione Ermanno wrote:

 This is a CISCO document explaining NAT:
 http://www.cisco.com/warp/public/cc/pd/iosw/ioft/ionetn/prodlit/1195_pp.htm

 It states that "Any TCP/UDP traffic that does not carry source and/or
 destination IP addresses in the application data stream is supported",
 and without the sendsize problem amanda's UDP traffic is also supported.

 When port translation is configured, there is finer control over translation
 entry timeouts, because each entry contains more context about the traffic
 using it. Non-DNS UDP translations time out after 5 minutes; DNS times out
 in 1 minute. TCP translations time out after 24 hours, unless a RST or FIN
 is seen on the stream, in which case it times out in 1 minute.

 The problem certainly also exists with Linux firewalls using iptables,
 because they use even smaller timeouts.

Worse yet, iptables in Red Hat 7.x doesn't allow control of the
timeout value.  A huge step backwards from ipchains in Red Hat 6.x,
which did allow tuning this parameter.  Amanda is not the only thing
that breaks without this feature.

-Mitch



