Re: Getting Amanda (3.5.1) to run properly under MacOSX 10.15

2023-06-09 Thread Deb Baddorf
Time Machine is built into current Mac OSes. It wants its own disk, which 
should be a bit (or a lot) bigger than the one you are backing up.  I don’t 
think it can back up to another machine. It works well, since it’s baked in.

Deb 

> On Jun 8, 2023, at 10:36 PM, Robert Heller  wrote:
> I know nothing about "Time Machine".  Will it backup to a Linux machine?
> 
> At Thu, 8 Jun 2023 17:39:47 -0600 Orion Poplawski  wrote:
> 
>> 
>> On 6/8/23 09:50, Robert Heller wrote:
>>> Already doing that.
>>> I'm beginning to think that I need to find some other way of backing up this
>>> machine, or maybe just not back it up at all (it is just a build box and has
>>> nothing on it that is not available either elsewhere on my LAN (eg the
>>> subversion tree on my main desktop) or out on the Internet (eg the O/S and
>>> XCode at Apple.com or via Mac Ports).
>> 
>> Why not Time Machine?
> 
> -- 
> Robert Heller -- Cell: 413-658-7953 GV: 978-633-5364
> Deepwoods Software-- Custom Software Services
> http://www.deepsoft.com/  -- Linux Administration Services
> hel...@deepsoft.com   -- Webhosting Services




Re: Is there a write-up on using amvault with an archive config?

2022-03-23 Thread Deb Baddorf
Jon -- check out the amvault flag "--dry-run". It should tell you which items
would be included, without actually doing any vaulting. That might answer your
question about "latest" and "--fulls-only".
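(For example -- a sketch only, borrowing the config name and vault storage from Winston's command quoted below:
    /usr/sbin/amvault --dry-run --fulls-only --dest-storage "tape_storage" vtl
should list what would be vaulted without writing anything.)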

Deb Baddorf
Now retired and without Amanda access, except for googling manuals

> On Mar 23, 2022, at 2:23 AM, Jon LaBadie  wrote:
> 
> Thanks Winston, this moves me a bit closer.
> 
> A couple of inline questions:
> 
>> On Tue, Mar 22, 2022 at 12:34:16AM -0400, Winston Sorfleet wrote:
>> I use amvault to tertiary media (LTO-2, I am just a casual home user)
>> while my main archive is VTL on a slow NAS.  I use cron, but obviously I
>> could just run as-needed from the command line.
> 
> My archive will be on its own disk.  I planned to keep it in the server
> along with the regular backups, but moving it to a different computer
> could further reduce some failure modes.
> 
>> You're right, it is a bit hard to intuit, and I had to get some help
>> from the community here as it is using overrides.
>> 
>> The command line I use is as follows:
>> 
>> /usr/sbin/amvault -q --latest-fulls --dest-storage "tape_storage" vtl
>> 
>> Where vtl is the config.  The key part is the "tape_storage" which
>> refers to the appropriate vault-storage template in the amanda conf
>> file.  E.g.
> 
> So "vtl" is the config name of the archive, correct?
> Would that config name be used in restore/recovery commands as well?
> 
> Just doing "latest-fulls" would not be appropriate for my use case.
> For some pretty static DLEs I only do fulls about every 6 weeks.
> An example is my almost never changing collection of online music.
> No need for amvault to archive many copies of that.  I likely would
> use a date specification ("--src-timestamps ...) and "--fulls-only".
> 
> I see that specific DLE can be specified at the end of the amvault
> command line.  From the manpage it shows:
> 
>   [hostname [ disk [ date [ level ...
> 
> However I could have multiple amanda configs with the same hostname and disk 
> combination, say a DailySet and a WeeklySet.  Where would you
> specify the amanda config you wish to archive (or "vault" if you wish).
> 
> The manpage says "latest" can be used as an alternative to a date
> specification.  The wording is "then the most recent amdump or amflush
> run will be used."  Do you know if that is literally accurate?  If I
> use both "--fulls-only" and "latest" plus list a specific DLE, will
> nothing be archived if the level 0 was in the 2nd most recent dump?
> Or might it locate the latest level 0 of that DLE?
> 
> 
> 
>> storage "vtl"
>> vault-storage "tape_storage"
>> 
>> define storage "tape_storage" {
>> erase-on-failure yes
>> policy "HP_Robot"
>> runtapes 1
>> set-no-reuse no
>> tapedev "LTO-2"
>> tapetype "LTO2"
>> tapepool "$r"
>> tpchanger "LTO-2"
>> labelstr "Vault-[1-7]"
>> autolabel "Vault-%" any
>> }
> 
> For this requirement, I love the idea of autolabeling.
> Will be another first for me.
> 
>> define changer LTO-2 {
>> tpchanger "chg-single:/dev/nst0"
>> device-property "LEOM" "TRUE"
>> }
>> 
>> define tapetype LTO2 {
>>comment "HP Ultrium 448, hardware compression off"
>>length 193024 mbytes
>>filemark 0 kbytes
>>speed 20355 kps
>> }
>> 
>> Obviously for you it will be simpler since you don't have to engage the
>> SCSI subsystem and define actual tapetype parameters or fiddle with
>> blocksizes.  And you're not limited to a single "tape".
> 
> I think I'll be looking to "spin down" the archive disk.
> It would be used so seldom.
> 
> Thanks again.
> Jon
> 
>> 
>>> On 2022-03-21 15:46, Jon LaBadie wrote:
>>> *** Apologies if a near duplicate has been posted ***
>>> *** I initially submitted it with the wrong email ***
>>> 
>>> 
>>> Amazing, I've used amanda for about 25 years and never set up
>>> an archive config nor used amvault.  No time like the present
>>> as I setup a new server with increased capacity.
>>> 
>>> I don't want an archive config that does periodic massive
>>> dumps.  Instead I'd prefer that on-demand I could copy a
>>> level 0 DLE to the archive in such a way that amrecover/
>>> amrestore could use the archive config.  Both the source
>>> and the archive destination would be vtapes but on
>>> different drives in different housings.
>>> 
>>> I "think" that amvault would be the appropriate tool.
>>> If not, correct my error please.
>>> 
>>> Has anyone done a write-up on setting up and using such
>>> a scheme?
>>> 
>>> Thanks,
>>> Jon
>>> 
>>>> End of included message <<<
> 
> -- 
> Jon H. LaBadie j...@labadie.us
> 154 Milkweed Dr (540) 868-8052 (H)
> Lake Frederick, VA 22630(703) 935-6720 (M)
> 




Re: Inparallel - not paralleling

2009-11-13 Thread Deb Baddorf

At 9:43 PM -0600 11/12/09, Frank Smith wrote:

Deb Baddorf wrote:

I have a 6 yr old amanda config which has been very nicely using up to
10 parallel dumpers.   I've got 32 client nodes,  so I have  MAXDUMPS
set to 2.  The parallelism I desire is across different clients, not so much
on the same client.

Taking the same configuration (edited) to a new machine and new tape changer
robot,   I've still got INPARALLEL  set to 10 ...   but no parallelism is
occurring  (in test runs of only 7 client nodes).   What am I missing --
why is my new setup not using multiple dumpers?   Seven clients ought
to be enough to cause parallelism.

Deb Baddorf
Fermilab


Oh -- the new server is Linux rather than FreeBSD,  so that's  another
difference.   But:
ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 32767
max locked memory   (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 32767
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

-
my configs:
--
#
# configMAIN.include - Amanda configuration global definitions
#
#
dumpuser operator
inparallel 10
dumporder BTBTBTBTBT
taperalgo largestfit

flush-threshold-dumped 70
flush-threshold-scheduled 100
taperflush 0
netusage  1800 Kbps
bumpsize 2 Mb
bumpdays 1
bumppercent 20
bumpmult 4

maxdumpsize -1
amrecover_do_fsf yes
amrecover_check_label yes
amrecover_changer changer

etimeout 2000
dtimeout 1800
ctimeout 30

tapebufs 20
tpchanger chg-zd-mtx  # the tape-changer glue script

reserve 02 # percent
autoflush yes
##


define tapetype SDLT320 {
 comment HP Super DLTtape I, data cartridge, C7980A, compression on
 length 139776 mbytes
 filemark 0 kbytes
 speed 13980 kps
}


#old 30-tape stacker.   Keep in case needed to read older tapes.
define tapetype DLT4000-IV {
comment "Quantum DLT4000 or DLT7000 writing DLT4000 format, with DLTtape IV uncompressed"
length 4 mbytes
filemark 8 kbytes
speed 1500 kps
lbl-templ /usr/local/etc/amanda/3hole.ps
}


#==#
define dumptype BDglobal {
comment Global definitions
index yes
priority medium
compress client fast
}
define dumptype BDnormal {
BDglobal
record yes
}

--
---
#  DAILY configuration
#=#

includefile /usr/local/etc/amanda/configMAIN.include

#=#


dumpcycle 7 days
runspercycle 5
runtapes 5
tapecycle 100 tapes

# 42-stacker unit:   # 11/01/09
changerdev /dev/changer
tapedev tape:/dev/nst2# the no-rewind tape device to be used
#   nst2  = changer 0   top unit,  where 2/3 of daily tapes are

changerfile /usr/local/etc/amanda/chg-daily-42   #my config data

tapetype SDLT320# what kind of tape
#  (see tapetypes in  ../configMAIN.include)

##
holdingdisk hd1 {
comment main holding disk
directory /spool/amanda/daily   # where the holding disk is
use -100 Mb # how much space can we use on it
# a non-positive value means:
#use all space but that value
                                # 20Gb was causing a perl glitch: - values in flush

chunksize 2000Mb# size of chunk if you want big dump to be
# dumped on multiple files on holding disks
#  N Kb/Mb/Gb split images in chunks of size N
# The maximum value should be
# (MAX_FILE_SIZE - 1Mb)
#  0  same as INT_MAX bytes
}
holdingdisk hd2 {
directory /spool2/amanda/daily
use -100 Mb
chunksize 2000Mb# size of chunk if you want big dump to be
# dumped on multiple files on holding disks
#  N Kb/Mb/Gb split images in chunks of size N
# The maximum value should be
# (MAX_FILE_SIZE - 1Mb)
#  0  same as INT_MAX bytes
}

##
define dumptype dailyNormal {
BDnormal
}

define dumptype dailyNormalFast {
BDnormal
maxdumps 3
}


The report and debug files on the server may provide more clues,
but for starters I would verify that you actually have

Re: Inparallel - not paralleling

2009-11-13 Thread Deb Baddorf

At 4:38 PM -0600 11/13/09, Deb Baddorf wrote:

At 9:43 PM -0600 11/12/09, Frank Smith wrote:

Deb Baddorf wrote:

I have a 6 yr old amanda config which has been very nicely using up to
10 parallel dumpers.   I've got 32 client nodes,  so I have  MAXDUMPS
set to 2.  The parallelism I desire is across different clients, not so much
on the same client.

Taking the same configuration (edited) to a new machine and new tape changer
robot,   I've still got INPARALLEL  set to 10 ...   but no parallelism is
occurring  (in test runs of only 7 client nodes).   What am I missing --
why is my new setup not using multiple dumpers?   Seven clients ought
to be enough to cause parallelism.

Deb Baddorf
Fermilab


Oh -- the new server is Linux rather than FreeBSD,  so that's  another
difference.   But:
ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 32767
max locked memory   (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 32767
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

-
my configs:
--
#
# configMAIN.include - Amanda configuration global definitions
#
#
dumpuser operator
inparallel 10
dumporder BTBTBTBTBT
taperalgo largestfit

flush-threshold-dumped 70
flush-threshold-scheduled 100
taperflush 0
netusage  1800 Kbps
bumpsize 2 Mb
bumpdays 1
bumppercent 20
bumpmult 4

maxdumpsize -1
amrecover_do_fsf yes
amrecover_check_label yes
amrecover_changer changer

etimeout 2000
dtimeout 1800
ctimeout 30

tapebufs 20
tpchanger chg-zd-mtx  # the tape-changer glue script

reserve 02 # percent
autoflush yes
##


define tapetype SDLT320 {
 comment HP Super DLTtape I, data cartridge, C7980A, compression on
 length 139776 mbytes
 filemark 0 kbytes
 speed 13980 kps
}


#old 30-tape stacker.   Keep in case needed to read older tapes.
define tapetype DLT4000-IV {
comment "Quantum DLT4000 or DLT7000 writing DLT4000 format, with DLTtape IV uncompressed"
length 4 mbytes
filemark 8 kbytes
speed 1500 kps
lbl-templ /usr/local/etc/amanda/3hole.ps
}


#==#
define dumptype BDglobal {
comment Global definitions
index yes
priority medium
compress client fast
}
define dumptype BDnormal {
BDglobal
record yes
}

--
---
#  DAILY configuration
#=#

includefile /usr/local/etc/amanda/configMAIN.include

#=#


dumpcycle 7 days
runspercycle 5
runtapes 5
tapecycle 100 tapes

# 42-stacker unit:   # 11/01/09
changerdev /dev/changer
tapedev tape:/dev/nst2# the no-rewind tape device to be used
#   nst2  = changer 0   top unit,  where 2/3 of daily tapes are

changerfile /usr/local/etc/amanda/chg-daily-42   #my config data

tapetype SDLT320# what kind of tape
#  (see tapetypes in  ../configMAIN.include)

##
holdingdisk hd1 {
comment main holding disk
directory /spool/amanda/daily   # where the holding disk is
use -100 Mb # how much space can we use on it
# a non-positive value means:
#use all space but that value
                                # 20Gb was causing a perl glitch: - values in flush

chunksize 2000Mb# size of chunk if you want big dump to be
# dumped on multiple files on holding disks
#  N Kb/Mb/Gb split images in chunks of size N
# The maximum value should be
# (MAX_FILE_SIZE - 1Mb)
#  0  same as INT_MAX bytes
}
holdingdisk hd2 {
directory /spool2/amanda/daily
use -100 Mb
chunksize 2000Mb# size of chunk if you want big dump to be
# dumped on multiple files on holding disks
#  N Kb/Mb/Gb split images in chunks of size N
# The maximum value should be
# (MAX_FILE_SIZE - 1Mb)
#  0  same as INT_MAX bytes
}

##
define dumptype dailyNormal {
BDnormal
}

define dumptype dailyNormalFast {
BDnormal
maxdumps 3
}


The report and debug files on the server may provide more clues

Inparallel - not paralleling

2009-11-12 Thread Deb Baddorf

I have a 6 yr old amanda config which has been very nicely using up to
10 parallel dumpers.   I've got 32 client nodes,  so I have  MAXDUMPS
set to 2.  The parallelism I desire is across different clients,   not so much
on the same client.

Taking the same configuration (edited)   to a new machine and new tape changer
robot,   I've still got INPARALLEL  set to 10 ...   but no parallelism is
occurring  (in test runs of only 7 client nodes).   What am I missing --
why is my new setup not using multiple dumpers?   Seven clients ought
to be enough to cause parallelism.

Deb Baddorf
Fermilab


Oh -- the new server is Linux rather than FreeBSD,  so that's  another
difference.   But:
ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 32767
max locked memory   (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 32767
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

-
my configs:
--
#
# configMAIN.include - Amanda configuration global definitions
#
#
dumpuser operator
inparallel 10
dumporder BTBTBTBTBT
taperalgo largestfit

flush-threshold-dumped 70
flush-threshold-scheduled 100
taperflush 0
netusage  1800 Kbps
bumpsize 2 Mb
bumpdays 1
bumppercent 20
bumpmult 4

maxdumpsize -1
amrecover_do_fsf yes
amrecover_check_label yes
amrecover_changer changer

etimeout 2000
dtimeout 1800
ctimeout 30

tapebufs 20
tpchanger chg-zd-mtx  # the tape-changer glue script

reserve 02 # percent
autoflush yes
##


define tapetype SDLT320 {
 comment HP Super DLTtape I, data cartridge, C7980A, compression on
 length 139776 mbytes
 filemark 0 kbytes
 speed 13980 kps
}


#old 30-tape stacker.   Keep in case needed to read older tapes.
define tapetype DLT4000-IV {
comment "Quantum DLT4000 or DLT7000 writing DLT4000 format, with DLTtape IV uncompressed"
length 4 mbytes
filemark 8 kbytes
speed 1500 kps
lbl-templ /usr/local/etc/amanda/3hole.ps
}


#==#
define dumptype BDglobal {
comment Global definitions
index yes
priority medium
compress client fast
}
define dumptype BDnormal {
BDglobal
record yes
}

--
---
#  DAILY configuration
#=#

includefile /usr/local/etc/amanda/configMAIN.include

#=#


dumpcycle 7 days
runspercycle 5
runtapes 5
tapecycle 100 tapes

# 42-stacker unit:   # 11/01/09
changerdev /dev/changer
tapedev tape:/dev/nst2  # the no-rewind tape device to be used
#   nst2  = changer 0   top unit,  where 2/3 of daily tapes are

changerfile /usr/local/etc/amanda/chg-daily-42   #my config data

tapetype SDLT320# what kind of tape
#  (see tapetypes in  ../configMAIN.include)

##
holdingdisk hd1 {
comment main holding disk
directory /spool/amanda/daily   # where the holding disk is
use -100 Mb # how much space can we use on it
# a non-positive value means:
#use all space but that value
   # 20Gb was causing a perl glitch: - values in flush
chunksize 2000Mb# size of chunk if you want big dump to be
# dumped on multiple files on holding disks
#  N Kb/Mb/Gb split images in chunks of size N
# The maximum value should be
# (MAX_FILE_SIZE - 1Mb)
#  0  same as INT_MAX bytes
}
holdingdisk hd2 {
directory /spool2/amanda/daily
use -100 Mb
chunksize 2000Mb# size of chunk if you want big dump to be
# dumped on multiple files on holding disks
#  N Kb/Mb/Gb split images in chunks of size N
# The maximum value should be
# (MAX_FILE_SIZE - 1Mb)
#  0  same as INT_MAX bytes
}

##
define dumptype dailyNormal {
BDnormal
}

define dumptype dailyNormalFast {
BDnormal
maxdumps 3
}



GLIB too old - Linux ?

2009-05-20 Thread Deb Baddorf

I've found some reports of GLIB too old errors on initial amanda compilation,
on Solaris boxes,   but I'm having the problem on a Linux box.

Any ideas?   I can send the config.log but it's 32000 lines long or so.
Deb Baddorf
Fermi National Accelerator Lab


Re: GLIB too old - Linux ?

2009-05-20 Thread Deb Baddorf

At 7:03 PM -0400 5/20/09, Dustin J. Mitchell wrote:

On Wed, May 20, 2009 at 6:55 PM, Deb Baddorf badd...@fnal.gov wrote:

 # yum info glib2
 Loading kernel-module plugin
 Installed Packages
 Name   : glib2
 Arch   : i386
 Version: 2.12.3
 Release: 4.el5_3.1


This should be fine.  If you look at the end of config.log, there are
three sections of variable dumps at the end; *before* those, you will
see the log data.  Please examine, and if necessary copy and paste the
last bit of log data into an email.

Dustin

--
Open Source Storage Engineer
http://www.zmanda.com




Holler if I've removed too much data:

| /* end confdefs.h.  */
|
|
| int
| main ()
| {
| return main ();
|   ;
|   return 0;
| }
configure:56189: result: no
configure:56208: checking for pkg-config
configure:56227: found /usr/bin/pkg-config
configure:56239: result: /usr/bin/pkg-config
configure:56395: checking pkg-config is at least version 0.7
configure:56398: result: yes
configure:56416: checking for GLIB - version >= 2.2.0
configure:56579: result: no
configure:56617: gcc -o conftest -g -O2 -fno-strict-aliasing 
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64   -fno-strict-aliasing 
-D_GNU_SOURCE   conftest.c -lnsl -lresolv5

conftest.c:253:18: error: glib.h: No such file or directory
conftest.c: In function 'main':
conftest.c:259: error: 'glib_major_version' undeclared (first use in 
this function)

conftest.c:259: error: (Each undeclared identifier is reported only once
conftest.c:259: error: for each function it appears in.)
conftest.c:259: error: 'glib_minor_version' undeclared (first use in 
this function)
conftest.c:259: error: 'glib_micro_version' undeclared (first use in 
this function)

configure:56624: $? = 1
configure: failed program was:
| /* confdefs.h.  */
| #define PACKAGE_NAME 
| #define PACKAGE_TARNAME 
| #define PACKAGE_VERSION 
| #define PACKAGE_STRING 
| #define PACKAGE_BUGREPORT 
| #define PACKAGE amanda
| #define VERSION 2.6.1p1
| #define STDC_HEADERS 1
| #define HAVE_SYS_TYPES_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_MEMORY_H 1
| #define HAVE_STRINGS_H 1
| #define HAVE_INTTYPES_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_UNISTD_H 1
| #define __EXTENSIONS__ 1
| #define _ALL_SOURCE 1
| #define _GNU_SOURCE 1
| #define _POSIX_PTHREAD_SEMANTICS 1
| #define _TANDEM_SOURCE 1
| #define HAVE_ALLOCA_H 1
| #define HAVE_ALLOCA 1
| #define HAVE_ARPA_INET_H 1
| #define HAVE_FLOAT_H 1
| #define HAVE_SYS_PARAM_H 1
| #define HAVE_SYS_VFS_H 1
| #define HAVE_SYS_SOCKET_H 1
| #define HAVE_NETDB_H 1
| #define HAVE_NETINET_IN_H 1
| #define HAVE_SYS_TIME_H 1
| #define HAVE_STDINT_H 1
| #define HAVE_WCHAR_H 1
| #define HAVE_STDIO_H 1
| #define HAVE_STDLIB_H 1
| #define HAVE_STRING_H 1
| #define HAVE_SYS_STAT_H 1
| #define HAVE_UNISTD_H 1
| #define restrict __restrict
| #define HAVE_INCLUDE_NEXT 1
| #define HAVE_IPV4 1
| #define HAVE_IPV6 1
| #define HAVE_GETOPT_H 1
| #define HAVE_GETOPT_LONG_ONLY 1
| #define HAVE_DECL_GETENV 1
| #define HAVE_GETTIMEOFDAY 1
| #define HAVE_LSTAT 1
| #define USE_POSIX_THREADS 1
| #define USE_POSIX_THREADS_WEAK 1
| #define HAVE_PTHREAD_RWLOCK 1
| #define HAVE_PTHREAD_MUTEX_RECURSIVE 1
| #define HAVE_DECL_SNPRINTF 1
| #define HAVE__BOOL 1
| #define HAVE_STDBOOL_H 1
| #define HAVE_LONG_LONG_INT 1
| #define HAVE_UNSIGNED_LONG_LONG_INT 1
| #define HAVE_DECL_STRDUP 1
| #define _FILE_OFFSET_BITS 64
| #define HAVE_WCHAR_T 1
| #define HAVE_WINT_T 1
| #define HAVE_INTTYPES_H_WITH_UINTMAX 1
| #define HAVE_STDINT_H_WITH_UINTMAX 1
| #define HAVE_INTMAX_T 1
| #define HAVE_ALLOCA 1
| #define HAVE_SYS_MOUNT_H 1
| #define STAT_STATFS2_BSIZE 1
| #define HAVE_SYS_STATFS_H 1
| #define HAVE_GETHOSTBYNAME 1
| #define HAVE_DECL_GETADDRINFO 1
| #define HAVE_DECL_FREEADDRINFO 1
| #define HAVE_DECL_GAI_STRERROR 1
| #define HAVE_DECL_GETNAMEINFO 1
| #define HAVE_STRUCT_ADDRINFO 1
| #define HAVE_INET_NTOP 1
| #define HAVE_DECL_INET_NTOP 1
| #define HAVE_MALLOC_POSIX 1
| #define HAVE_MKDTEMP 1
| #define HAVE_SYS_SYSINFO_H 1
| #define HAVE_SYS_PARAM_H 1
| #define HAVE_SYS_SYSCTL_H 1
| #define HAVE_SYSCTL 1
| #define HAVE_STDINT_H 1
| #define HAVE_SNPRINTF 1
| #define HAVE_STRDUP 1
| #define HAVE_DECL_MKDIR 1
| #define HAVE_SNPRINTF 1
| #define HAVE_WCSLEN 1
| #define HAVE_DECL__SNPRINTF 0
| #define HAVE_VISIBILITY 1
| #define HAVE_STDINT_H 1
| #define CLIENT_LOGIN operator
| #define CONFIG_DIR /usr/local/etc/amanda
| #define GNUTAR_LISTED_INCREMENTAL_DIR /usr/local/var/amanda/gnutar-lists
| #define AMANDA_TMPDIR /tmp/amanda
| #define CHECK_USERID 1
| #define BINARY_OWNER operator
| #define USE_REUSEADDR 1
| #define AMANDA_DBGDIR /tmp/amanda
| #define AMANDA_DEBUG_DAYS 4
| #define SERVICE_SUFFIX 
| #define AMANDA_SERVICE_NAME amanda
| #define KAMANDA_SERVICE_NAME kamanda
| #define WANT_SETUID_CLIENT 1
| #define DEFAULT_SERVER bdback
| #define DEFAULT_CONFIG daily
| #define DEFAULT_TAPE_SERVER bdback
| #define DEFAULT_TAPE_DEVICE /dev/nsa0
| #define DEFAULT_CHANGER_DEVICE /dev/null

Re: GLIB too old - Linux ?

2009-05-20 Thread Deb Baddorf

At 7:46 PM -0400 5/20/09, Dustin J. Mitchell wrote:

On Wed, May 20, 2009 at 7:38 PM, Deb Baddorf badd...@fnal.gov wrote:

 conftest.c:253:18: error: glib.h: No such file or directory


This suggests you may not have the glib development package (that
includes the headers) installed.  Ah, the joys of binary-only distros
:)

Dustin

--
Open Source Storage Engineer
http://www.zmanda.com





Hmmm, yes.  FIND   over the whole disk only finds  glib.so   type files.
Thanks!
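(On the Scientific Linux 5 box shown in this thread, those headers would normally come from the glib2-devel package -- so, assuming stock repos, something like   yum install glib2-devel   before re-running configure.  The package name is an inference from the glib2 runtime package already installed, not something from the original thread.)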
Deb


Re: GLIB too old - Linux ?

2009-05-20 Thread Deb Baddorf

At 6:07 PM -0400 5/20/09, Dustin J. Mitchell wrote:

On Wed, May 20, 2009 at 4:53 PM, Deb Baddorf badd...@fnal.gov wrote:

 I've found some reports of GLIB too old errors on initial amanda
 compilation,
 on Solaris boxes,   but I'm having the problem on a Linux box.


What distro?  What version of glib is installed?  Note that glib != glibc.

Dustin

--
Open Source Storage Engineer
http://www.zmanda.com


amanda-2.6.1p1

# uname -a
Linux beams-docdb1.fnal.gov 2.6.18-128.1.6.el5 #1 SMP Wed Apr 1 
07:03:59 EDT 2009 i686 i686 i386 GNU/Linux

# cat /etc/redhat-release
Scientific Linux SLF release 5.2 (Lederman)

# yum info glib2
Loading kernel-module plugin
Installed Packages
Name   : glib2
Arch   : i386
Version: 2.12.3
Release: 4.el5_3.1

]# yum install glibc
Loading kernel-module plugin
Setting up Install Process
Parsing package install arguments
Package glibc - 2.5-24.i686 is already installed.
Nothing to do


# yum info glib
Loading kernel-module plugin
Installed Packages
Name   : glib
Arch   : i386
Epoch  : 1
Version: 1.2.10

# yum info gcc
Loading kernel-module plugin
Installed Packages
Name   : gcc
Arch   : i386
Version: 4.1.2
Release: 42.el5


Re: Amanda must be run as user amandabackup when using bsdtcp authentication

2009-05-19 Thread Deb Baddorf

At 3:01 PM +0200 5/19/09, Abilio Carvalho wrote:

owner is amandabackup:disk

I can log in to the account just fine, I don't think any more logging 
is possible though I'll check. I checked the manifest for the service 
and it confirms that it is SUPPOSED to start as amandabackup.


If I do what you say, and log into amandabackup and run that, I get 
the following on /tmp/amanda/amandad/amandad.TIMESTAMP.debug:


1242737635.958239: amandad: pid 9504 ruid 6028 euid 6028 version 2.6.1: start at Tue May 19 14:53:55 2009
1242737635.989035: amandad: security_getdriver(name=bsdtcp) returns ff31c788
1242737635.992943: amandad: version 2.6.1
1242737635.992955: amandad: build: VERSION=Amanda-2.6.1
1242737635.992961: amandad:BUILT_DATE=Mon May 18 12:33:06 CEST 2009
1242737635.992967: amandad:BUILT_MACH=sparc-sun-solaris2.10 BUILT_REV=1609
1242737635.992973: amandad:BUILT_BRANCH=amanda-261 CC=/opt/SUNWspro/bin/cc
1242737635.992979: amandad: paths: bindir=/bin sbindir=/sbin libexecdir=/libexec
1242737635.992984: amandad:amlibexecdir=/libexec/amanda mandir=/share/man
1242737635.992990: amandad:AMANDA_TMPDIR=/tmp/amanda AMANDA_DBGDIR=/tmp/amanda
1242737635.992995: amandad:CONFIG_DIR=/etc/amanda DEV_PREFIX=/dev/dsk/
1242737635.993000: amandad:RDEV_PREFIX=/dev/rdsk/ DUMP=/usr/sbin/ufsdump
1242737635.993005: amandad:RESTORE=/usr/sbin/ufsrestore VDUMP=UNDEF VRESTORE=UNDEF
1242737635.993011: amandad:XFSDUMP=UNDEF XFSRESTORE=UNDEF VXDUMP=UNDEF VXRESTORE=UNDEF
1242737635.993016: amandad:SAMBA_CLIENT=/usr/sfw/bin/smbclient
1242737635.993021: amandad:GNUTAR=/usr/sfw/bin/gtar COMPRESS_PATH=/usr/bin/gzip
1242737635.993026: amandad:UNCOMPRESS_PATH=/usr/bin/gzip LPRCMD=/usr/bin/lp
1242737635.993032: amandad: MAILER=UNDEF listed_incr_dir=/var/amanda/gnutar-lists
1242737635.993037: amandad: defs:  DEFAULT_SERVER=galadhrim DEFAULT_CONFIG=DailySet1
1242737635.993042: amandad:DEFAULT_TAPE_SERVER=galadhrim DEFAULT_TAPE_DEVICE=
1242737635.993047: amandad:HAVE_MMAP NEED_STRSTR HAVE_SYSVSHM AMFLOCK_POSIX AMFLOCK_LOCKF
1242737635.993053: amandad:AMFLOCK_LNLOCK SETPGRP_VOID AMANDA_DEBUG_DAYS=4 BSD_SECURITY
1242737635.993058: amandad:USE_AMANDAHOSTS CLIENT_LOGIN=amandabackup CHECK_USERID
1242737635.993063: amandad:HAVE_GZIP COMPRESS_SUFFIX=.gz COMPRESS_FAST_OPT=--fast
1242737635.993069: amandad:COMPRESS_BEST_OPT=--best UNCOMPRESS_OPT=-dc
1242737635.997381: amandad: getpeername returned: Socket operation on non-socket
1242737635.997434: amandad: pid 9504 finish time Tue May 19 14:53:55 2009



so it does seem like as inetd problem and not amanda. I just have no 
clue as to how that's possible


These are my instructs (to myself)  for Linux machines -- but they may spark
a thought in your situation:
the client needs lines like this

add these lines to /etc/services
amanda 10080/udp # Dump server control
amidxtape 10083/tcp # Amanda tape indexing
amandaidx 10082/tcp # Amanda recovery program

add these lines to   /etc/inetd.conf   and then kill -HUP  inetd process
 (2 lines --- mine may wrap)

amanda dgram udp wait amandabackup  /usr/local/libexec/amanda/amandad amandad
amidxtape stream tcp nowait amandabackup /usr/local/libexec/amanda/amidxtaped amidxtaped
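(Side note for the bsdtcp auth this thread is about: with bsdtcp the Amanda docs replace the udp amandad line with a single tcp entry that lists the services -- a sketch, assuming the same install paths as above:

amanda stream tcp nowait amandabackup /usr/local/libexec/amanda/amandad amandad -auth=bsdtcp amdump amindexd amidxtaped

plus "amanda 10080/tcp" in /etc/services.)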






On May 19, 2009, at 2:45 PM, Jean-Louis Martineau wrote:


 Who is the owner of /tmp/amanda/amandad/amandad.20090519111556.debug

 Can you use the amandabackup account? Can you log to that account?
 Can you enable more logging in inetd? It is an inetd 
 misconfiguration if amandad is run as root.


 Log as amandabackup and run '/libexec/amanda/amandad -auth=bsdtcp amdump'


 Jean-Louis

 Abilio Carvalho wrote:

 follow-up:

 I was wrong, it wasn't syslog, it was messages. There I now see a  
 couple lines like:


 May 19 13:58:23 galadhrim inetd[24015]: [ID 317013 daemon.notice]  
 amanda[27116] from 172.22.0.23 44223
 May 19 13:58:31 galadhrim inetd[24015]: [ID 317013 daemon.notice]  
 amanda[27214] from 172.22.0.23 703
 May 19 13:59:12 galadhrim inetd[24015]: [ID 317013 daemon.notice]  
 amanda[27619] from 172.22.0.23 703



 On May 19, 2009, at 1:37 PM, Jean-Louis Martineau wrote:



 Abilio Carvalho wrote:


 the log directory on the client only has the following:

 r...@backupclient:/tmp/amanda/amandad# cat amandad.20090519111556.debug
 1242724556.328466: amandad: pid 18933 ruid 0 euid 0 version 2.6.1: start at Tue May 19 11:15:56 2009




 ruid 0 euid 0
 That's root user
 Do you have an amandabackup user on the client
 Check inet log

 Jean-Louis


 1242724556.339271: amandad: security_getdriver(name=bsdtcp) returns ff31c788
 1242724556.339369: amandad: critical (fatal): Amanda must be run as user 'amandabackup' when using 'bsdtcp' authentication


 I can't even see what user it's 

Re: [Amanda-users] amanda error

2009-03-04 Thread Deb Baddorf

On this page
   http://wiki.zmanda.com/man/amanda.conf.5.html

re  the equations in the definition of flush-threshold-dumped int
and  flush-threshold-scheduled int and   taperflush int

What is the math symbol between  t  and  d  ?
times  (multiply) is the only thing that makes sense to me,
but what I see (in 2 browsers;  the third leaves it blank)   is
"?" in a diamond box.

I.E.   I see    h + s  >=  t ? d

but I think it should mean    h + s  >=  t (times) d

Is this correct?
Deb Baddorf
Fermilab  Accelerator Controls
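(A worked reading, on the assumption that the symbol really is a multiply and that these options are percentages of the tape length: with a 140000 MB tape and flush-threshold-dumped 70, the taper would wait until at least 70/100 * 140000 MB = 98000 MB of dumped data sits on the holding disk; flush-threshold-scheduled applies the same arithmetic to h + s, dumped plus still-scheduled data.)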


Re: sendsize finishes, planner doesn't notice...

2007-10-11 Thread Deb Baddorf

This seems a bit similar to firewall issues we had a while back ---
the sendsize estimate took long enough that the connection FROM the
server was closed.  The firewall only allowed connections made by the
server, or replies back through the same connection... and needed to be
opened for the client to start a new connection back TO the server,  when
the estimate took over a certain amount of time.
   My understanding of it may be poor, but perhaps this will jog somebody's
mind
Deb



At 3:39 PM -0400 10/11/07, Paul Lussier wrote:

Jean-Louis Martineau [EMAIL PROTECTED] writes:


 It's weird.

 Do you have an amdump log file or just amdump.1?
 The only way to get this is if you killed amanda process on the
 server, maybe a server crash.
 Do you still have amanda process running on the server?


I do now. I started amanda off Tuesday night at Tue Oct  9 22:48:34 2007.

According the /var/log/amanda/amandad/amandad.20071009224834.debug file:

  amandad: time 21604.147: pid 26218 finish time Wed Oct 10 04:48:39 2007

According to sendsize.20071009224835.debug:

amanda2:/var/log/amanda/client/offsite# tail sendsize.20071009224835.debug
errmsg is /usr/local/libexec/runtar exited with status 1: see 
/var/log/amanda/client/offsite/sendsize.20071009224835.debug
sendsize[26687]: time 37138.237: done with amname /permabit/user/uz 
dirname /permabit/user spindle -1
sendsize[26379]: time 37823.330: Total bytes written: 541649408000 
(505GiB, 14MiB/s)

sendsize[26379]: time 37823.453: .
sendsize[26379]: time 37823.453: estimate time for /permabit/user/eh 
level 0: 37823.251
sendsize[26379]: time 37823.453: estimate size for /permabit/user/eh 
level 0: 528954500 KB

sendsize[26379]: time 37823.453: waiting for runtar /permabit/user/eh child
sendsize[26379]: time 37823.453: after runtar /permabit/user/eh wait
errmsg is /usr/local/libexec/runtar exited with status 1: see 
/var/log/amanda/client/offsite/sendsize.20071009224835.debug
sendsize[26379]: time 37823.537: done with amname /permabit/user/eh 
dirname /permabit/user spindle -1


So, sendsize claims to be done, yet planner doesn't think so:

  planner: time 16531.383: got partial result for host amanda2 disk \
 /permabit/user/uz: 0 - -2K, -1 - -2K, -1 - -2K
  [...]
  planner: time 16531.384: got partial result for host amanda2 disk \
 /permabit/user/eh: 0 - -2K, -1 - -2K, -1 - -2K

amdump is currently still running, amandad has finished, but we're
still waiting for estimates which will never arrive.

I also find it disturbing that the debug log I'm looking at,
sendsize.20071009224835.debug, tells me to look at the log I'm looking
at for further information:

errmsg is /usr/local/libexec/runtar exited with status 1: see \
/var/log/amanda/client/offsite/sendsize.20071009224835.debug

Any idea why amandad is dying before sending the estimate data back to
the planner?  My etimeout is currently set to:

  # grep timeout /etc/amanda/offsite/amanda.conf
  etimeout  72000  # number of seconds per filesystem for estimates.
  dtimeout  72000 # number of idle seconds before a dump is aborted.
  ctimeout30  # maximum number of seconds that amcheck waits
  amanda2:/var/log/amanda/server/offsite# su - backup -c 'amadmin 
offsite config' | grep -i timeout

  ETIMEOUT  72000
  DTIMEOUT  72000
  CTIMEOUT  30

  amanda2:/var/log/amanda/server/offsite# /usr/local/sbin/amgetconf 
offsite etimeout

72000

su - backup -c 'amadmin offsite version'
build: VERSION=Amanda-2.5.2p1
   BUILT_DATE=Tue Sep 4 15:45:27 EDT 2007
   BUILT_MACH=Linux amanda2 2.6.18-4-686 #1 SMP Mon Mar 26 
17:17:36 UTC 2007 i686 GNU/Linux

   CC=gcc-4.2
   CONFIGURE_COMMAND='./configure' '--prefix=/usr/local' 
'--enable-shared' '--sysconfdir=/etc' '--localstatedir=/var/lib' 
'--with-gnutar-listdir=/var/lib/amanda/gnutar-lists' 
'--with-index-server=localhost' '--with-user=backup' 
'--with-group=backup' '--with-bsd-security' '--with-amandahosts' 
'--with-smbclient=/usr/bin/smbclient' 
'--with-debugging=/var/log/amanda' 
'--with-dumperdir=/usr/lib/amanda/dumper.d' 
'--with-tcpportrange=5,50100' '--with-udpportrange=840,860' 
'--with-maxtapeblocksize=256' '--with-ssh-security'

paths: bindir=/usr/local/bin sbindir=/usr/local/sbin
   libexecdir=/usr/local/libexec mandir=/usr/local/man
   AMANDA_TMPDIR=/tmp/amanda
   AMANDA_DBGDIR=/var/log/amanda CONFIG_DIR=/etc/amanda
   DEV_PREFIX=/dev/ RDEV_PREFIX=/dev/ DUMP=UNDEF
   RESTORE=UNDEF VDUMP=UNDEF VRESTORE=UNDEF XFSDUMP=UNDEF
   XFSRESTORE=UNDEF VXDUMP=UNDEF VXRESTORE=UNDEF
   SAMBA_CLIENT=UNDEF GNUTAR=/bin/tar
   COMPRESS_PATH=/bin/gzip UNCOMPRESS_PATH=/bin/gzip
   LPRCMD=/usr/bin/lpr MAILER=/usr/bin/Mail
   listed_incr_dir=/var/lib/amanda/gnutar-lists
defs:  DEFAULT_SERVER=localhost DEFAULT_CONFIG=DailySet1
   DEFAULT_TAPE_SERVER=localhost HAVE_MMAP NEED_STRSTR
   HAVE_SYSVSHM LOCKING=POSIX_FCNTL SETPGRP_VOID 

amreport crash - proposed patch

2007-02-09 Thread Deb Baddorf


amdump  myconfig
  has been crashing during the report phase  -- after the dumps are
done and okay.   We are at 2.5.1p2, which was thought to fix this,
but it hasn't helped.
   After some local debugging,   my  expert source tells me
that this patch should fix the problem  (it fixed ours):



The main() function in reporter.c has the lines:

if (postscript) {
do_postscript_output();
}

such that the function do_postscript_output() will only be called if the
variable postscript is not null...

however the function do_postscript_output() subsequently calls the
function copy_template_file() which MAY under several conditions call the
function afclose(postscript), that sets the postscript variable to NULL

after the return from copy_template_file() to do_postscript_output() the
do_postscript_output() function proceeds to call fprintf() with the
assumption that the postscript variable is valid, resulting in a core
dump.

The following diff should cure that core dump.

diff -wub reporter.c.orig reporter.c
--- reporter.c.orig Thu Feb  8 14:16:40 2007
+++ reporter.c  Thu Feb  8 14:17:16 2007
@@ -2796,6 +2796,8 @@

copy_template_file(tapetype_get_lbl_templ(tp));

+   if (postscript == NULL) return;
+
/* generate a few elements */
fprintf(postscript, "(%s) DrawDate\n\n",
nicedate(run_datestamp ? run_datestamp : 0));

Please report this information back to the amanda developers.
(I'll add the patch and install on our server.)

Thanks,

 - Tim
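(To apply it -- a sketch, assuming the diff above is saved as reporter.patch and you are in the amanda source tree's server-src directory:
    patch reporter.c < reporter.patch
then rebuild and reinstall the server as usual.)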


-auth vs 2.4.4p1 clients 2.5.1p2 server

2007-01-19 Thread Deb Baddorf

What kind of configuration do I need,   now that I've upgraded my
server to  2.5.1p2? Most of the clients are 2.4.something
and are no longer able to connect to do a recover,   although dumps
work fine.

The server is itself a client.  It can do a recover now,  but if we change
something I'll need to make this work too.
Oh,  and one new client  is at 2.5.something.

All are backing up fine,   but what can I do about recover connecting?
I got mixed versions of clients,  I guess!

Deb Baddorf


amidxtape req'd on clients?

2007-01-19 Thread Deb Baddorf
My clients seem to fail at doing recovers unless I have the amidxtaped process
installed (and in inetd.conf) on the client.  Yet doing an install as client-only
doesn't install this piece.   I wind up doing server installs on most of my clients.

What am I doing wrong?   I gather that amidxtaped isn't supposed to be needed on
the client,  but I can't make do without it.

Deb Baddorf


not allowed to execute service amindexd

2006-12-18 Thread Deb Baddorf

Amanda help gurus:
My backup server NODEX is the only node (so far)  to run at 2.5.1p1.
So other client nodes aren't having this problem, but
NODEX   acting as a client (and also the server)  has this complaint:

NODEX  amrecover  daily
AMRECOVER Version 2.5.1p1. Contacting server on  NODEX ...
NAK: user root from NODEX  is not allowed to execute the service 
amindexd: Please add amindexd amidxtaped to the line in 
/home/operator/.amandahosts



So,  okay,   I changed the   .amandahosts line from
OLD: NODEX  root
to
NEW:NODEX  root  amindexd  amidxtaped


Now the error message changes, but I still can't run amrecover:
NODEX  amrecover daily
AMRECOVER Version 2.5.1p1. Contacting server on NODEX ...
NAK: amindexd: invalid service


What else does version 2.5.1p1  want me to do here?
Deb Baddorf


amtoc -a is producing tape.toc not 'tape'.toc

2006-11-10 Thread Deb Baddorf

Hiya --
   I recompiled to version 2.5.0p2  a week ago.   Previously
(probably a lower version, but I forget)
        amtoc  -a logfile
produced a file with the tape label as the name. ex:   mytape01.toc

Since then,  it is producing the literal file tape.toc
which is less helpful, cause each day overwrites the previous file!

Am I doing something wrong,  or has   amtoc   changed?

Deb Baddorf


Re: distlist order ?

2005-10-24 Thread Deb Baddorf

On Mon, Oct 24, 2005 at 12:06:51PM -0700, Mike Delaney wrote:

 On Mon, Oct 24, 2005 at 01:57:58PM -0400, Jon LaBadie wrote:
 
  I've not used them, but aren't there some disklist config options
  to specify do these at a specific time or delay these for some time
  after starting amdump?

 There's the starttime dumptype option to specify a fixed not before time
 of day, but using that for this purpose means estimating when the dumps of

  all the other systems are likely to be done.




How about specifying  "not before   cron-start-time + 1 hour"   for everything
*except* the backup node itself?   Force it to be the first one.   It is
easier to estimate how long one node will take than how long all the rest of
them will take.

Of course, that means specifying a starttime  for all nodes but one.

Deb
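(A sketch of one way to spell that in amanda.conf, with made-up dumptype names -- starttime takes hh*100+mm, so 2:00 AM is 200:

define dumptype comp-user-late {
    comp-user
    starttime 200      # hold these DLEs until 02:00, an hour after a 01:00 cron start
}

Every disklist entry except the backup server's own would then use comp-user-late.)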


Re: y didn't amanda report this as an error?

2003-09-25 Thread Deb Baddorf
At 09:39 AM 9/25/2003 -0400, Jean-Louis Martineau wrote:
Hi Deb,

Which release of amanda are you using?
server is running Amanda-2.4.3  on FreeBSD 4.7-RELEASE-p3
client is running Amanda-2.4.3b4 on FreeBSD 4.8-RELEASE i386
amanda-2.4.4p1 will report a failed dump for this kind of error and
reschedule a level 0 for the next day.
amadmin CONFIG due NODE DISK
indicated the next level 0 wasn't scheduled for 7 days yet
(I forced one)

That was fixed on 2003-04-26.

Jean-Louis

On Wed, Sep 24, 2003 at 01:54:49PM -0500, Deb Baddorf wrote:
 From a client  machine,  the admin sent me this:

 Sep 24 02:45:32 daesrv /kernel: pid 7638 (gzip), uid 2: exited on 
signal 11
 (core dumped)

 The above message shows gzip crashed on daesrv last night.  It crashed
 because there is a hardware problem on that machine, but since it was
 probably part of an amanda backup that did not work as expected, I wanted
 to be sure amanda had reported something about it to you.   -client admin

 Amanda herself had reported a strange error in her mail report:

 daesrv.fna /usr lev 0 STRANGE
 .
 | DUMP: 33.76% done, finished in 1:20
 ? sendbackup: index tee cannot write [Broken pipe]
 | DUMP: Broken pipe
 | DUMP: The ENTIRE dump is aborted.
 ? index returned 1
 ??error [/sbin/dump returned 3, compress got signal 11]? dumper: strange
 [missing size line from sendbackup]
 ? dumper: strange [missing end line from sendbackup]
 \


 But it appears that she went ahead and stored the partial data on tape
 anyway,   and considered this a good level 0 backup.   (admin config due
 shows the next level 0 is 7 days away)

 daesrv.fnal.gov /usr 0 0 3605024 -- 47:40 1260.7 12:35 4773.9


 Why doesn't amanda recognize this as a failure?
 Am I missing something that I should have noticed?
 Or am I reading it wrong (the fact that due implies a level 0 was done)?
 Deb Baddorf
 ---
 Deb Baddorf [EMAIL PROTECTED]  840-2289
 Nobody told me that living happily ever after would be such hard work ...
 S. WhiteIXOYE



--
Jean-Louis Martineau email: [EMAIL PROTECTED]
Departement IRO, Universite de Montreal
C.P. 6128, Succ. CENTRE-VILLETel: (514) 343-6111 ext. 3529
Montreal, Canada, H3C 3J7Fax: (514) 343-5834
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
Nobody told me that living happily ever after would be such hard work ...
S. WhiteIXOYE




Re: amcheck - why run it?

2003-09-25 Thread Deb Baddorf

on Thursday, 25 September 2003 at 15:57 you wrote to amanda-users:
JL Gee, someone already thought about that problem and the solution?
JL   It's in there
JL Feeling silly, sorry for the noise.
Never mind, we all come to that RTFM over and over again ;-)
And in any case ... now *I* don't have to go through the same
several steps you used to find the answer.  Thank you very much
for posting the solution to the list!!!
amverifyrun  -- who'da thunk it?  Kewl!
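(For anyone finding this later: the invocation is simply   amverifyrun CONFIG  , and it re-reads the tapes written by the most recent amdump or amflush run for that config -- going from the man page here, not from a fresh test.)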

Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
Nobody told me that living happily ever after would be such hard work ...
S. WhiteIXOYE




Re: Rewind before ejecting?

2003-09-25 Thread Deb Baddorf
At 11:57 AM 9/25/2003 -0400, M3 Freak wrote:
Hello,

I have what may seem like a silly questions, but let me assure you, I have 
absolutely no idea how tape backups work.  I've only just figured out why 
my RH9 system wasn't seeing the tape drive.  Configuring amanda and 
administering it is a whole different thing!

Anyway, my question is basic.  Last night I manually ran amdump, and it 
completed successfully.  I read the email amanda sent me, and it's now 
waiting for a new tape.  I know that I have to put in the new tape, 
label it, and then set up cron to run automatically tonight.  However, 
before I put in the new tape, should I just issue an eject command to 
the drive to spit the tape out, or do I have to rewind it before ejecting it?

I haven't used a tape backup system before, so I don't know what the 
consequences are of rewinding tapes or not before ejecting them.  I would 
very much appreciate suggestions/advice on this.

Thanks in advance.

Regards,

Kanwar
All tape drives that I know of  will rewind the tape if you merely
issue an eject  or offline command.   Only an audio cassette
unit (ok,  or a VCR)   will ever hand you a tape in a half-way state.
So ... commanding  rewind  and then  eject  is redundant.
But certainly not harmful!
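(On Linux, for example, both of these leave the tape rewound and unloaded -- assuming /dev/nst0 is your drive, which is just an illustration:
    mt -f /dev/nst0 rewind     # explicit rewind, optional
    mt -f /dev/nst0 offline    # rewinds first, then ejects/unloads
)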
Deb Baddorf

---
Deb Baddorf [EMAIL PROTECTED]  840-2289
Nobody told me that living happily ever after would be such hard work ...
S. WhiteIXOYE




y didn't amanda report this as an error?

2003-09-24 Thread Deb Baddorf
From a client  machine,  the admin sent me this:

Sep 24 02:45:32 daesrv /kernel: pid 7638 (gzip), uid 2: exited on signal 11 
(core dumped)

The above message shows gzip crashed on daesrv last night.  It crashed
because there is a hardware problem on that machine, but since it was
probably part of an amanda backup that did not work as expected, I wanted
to be sure amanda had reported something about it to you.   -client admin
Amanda herself had reported a strange error in her mail report:

daesrv.fna /usr lev 0 STRANGE
.
| DUMP: 33.76% done, finished in 1:20
? sendbackup: index tee cannot write [Broken pipe]
| DUMP: Broken pipe
| DUMP: The ENTIRE dump is aborted.
? index returned 1
??error [/sbin/dump returned 3, compress got signal 11]? dumper: strange 
[missing size line from sendbackup]
? dumper: strange [missing end line from sendbackup]
\

But it appears that she went ahead and stored the partial data on tape
anyway,   and considered this a good level 0 backup.   (admin config due
shows the next level 0 is 7 days away)
daesrv.fnal.gov /usr 0 0 3605024 -- 47:40 1260.7 12:35 4773.9

Why doesn't amanda recognize this as a failure?
Am I missing something that I should have noticed?
Or am I reading it wrong (the fact that due implies a level 0 was done)?
Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
Nobody told me that living happily ever after would be such hard work ...
S. WhiteIXOYE




Re: y didn't amanda report this as an error?

2003-09-24 Thread Deb Baddorf
At 03:36 PM 9/24/2003 -0400, Jon LaBadie wrote:
On Wed, Sep 24, 2003 at 01:54:49PM -0500, Deb Baddorf wrote:
 From a client  machine,  the admin sent me this:

 Sep 24 02:45:32 daesrv /kernel: pid 7638 (gzip), uid 2: exited on 
signal 11
 (core dumped)

 The above message shows gzip crashed on daesrv last night.  It crashed
 because there is a hardware problem on that machine, but since it was
 probably part of an amanda backup that did not work as expected, I wanted
 to be sure amanda had reported something about it to you.   -client admin

 Amanda herself had reported a strange error in her mail report:

 daesrv.fna /usr lev 0 STRANGE
 .
 | DUMP: 33.76% done, finished in 1:20
 ? sendbackup: index tee cannot write [Broken pipe]

Note the problem was in making the index, not the backup.
Well ...  but the client was doing its own compressing.   So when the
gzipper failed,  the whole backup failed.   At only 33% finished.
I just did a test amrestore  (true,  amrecover wouldn't touch it).
Got about 1/3 the amount of data that ought to be on that disk.
So I think it really did fail,   but registered it as a successful level 0
backup.  :-(

 | DUMP: Broken pipe
 | DUMP: The ENTIRE dump is aborted.
 ? index returned 1
 ??error [/sbin/dump returned 3, compress got signal 11]? dumper: strange
 [missing size line from sendbackup]
 ? dumper: strange [missing end line from sendbackup]
 \


 But it appears that she went ahead and stored the partial data on tape
 anyway,   and considered this a good level 0 backup.   (admin config due
 shows the next level 0 is 7 days away)

 daesrv.fnal.gov /usr 0 0 3605024 -- 47:40 1260.7 12:35 4773.9

 Why doesn't amanda recognize this as a failure?
 Am I missing something that I should have noticed?
 Or am I reading it wrong (the fact that due implies a level 0 was done)?
Did your report show it was taped.  If so I suspect the backup is ok,
but using amrecover with the index will be suspect/problematical.
--
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)



barcode readers and am-labels

2003-08-20 Thread Deb Baddorf
Hi all --
I have a barcode reader with my tape robot.   I've tested the robot
and got it playing nicely with amanda;  but I haven't yet told her that
it can do barcodes.
 In what fashion does amanda make use of the bar codes,   assuming
I tell her that I have a reader (havereader = 1)?
Does amanda use them while finding tapes,  or just report them to me
for my visual usage?
Does the barcode label need to match the amlabel string?
(it is kind of long for barcoding)
Or is the barcode  just *associated* with the amlabel
(in the mentioned barcode database)?
i.e.  is  UX0001  associated with  MyTapeStringDaily-034
by the barcode database
 or does it have to actually *say*   MyTapeStringDaily-034
on the barcode?
Deb Baddorf
Fermilab, Beams Division
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
Nobody told me that living happily ever after would be such hard work ...
S. WhiteIXOYE




Gene Heskett's amanda build script

2003-03-12 Thread Deb Baddorf
On Sun February 23 2003 08:23, Carsten Rezny wrote:
Thanks for your reply, Jay.

On Sat, 2003-02-22 at 21:57, Jay Lessert wrote:
 [Posted and Cc'ed]

 On Sat, Feb 22, 2003 at 08:09:16PM +0100, Carsten Rezny wrote:
  I have installed Amanda 2.4.2p2 on a SuSE 8.0 box. The machine
  is server and client.
 
  When I run amcheck I get the following result
  ERROR: /dev/nst0: no tape online
 (expecting tape maphy-d05 or a new tape)

 I assume you understand this and will fix it, right?

Right, that's not the problem.

  
  WARNING: localhost: selfcheck request timed out.  Host down?
  Client check: 1 host checked in 30.006 seconds, 1 problem
  found

 You always hate to see localhost here, so many ways that can
 go wrong.

 Change the DLE (disklist entry) from localhost to the true
 hostname, double check ~amanda/.amandahosts according to
 docs/INSTALL, and try again.

OK, I replaced localhost with the real hostname, checked
~amanda/.amandahosts and still get the same result. Here is what I
 think is the problem:

[amandad debug file]
got packet:

Amanda 2.4 REQ HANDLE 000-20B90708 SEQ 1046003786
SECURITY USER amanda
SERVICE selfcheck
OPTIONS ;
GNUTAR /mirror 0 OPTIONS

|;bsd-auth;index;exclude-list=/usr/local/lib/amanda/exclude.gtar;

GNUTAR /home 0 OPTIONS

|;bsd-auth;index;exclude-list=/usr/local/lib/amanda/exclude.gtar;



sending ack:

Amanda 2.4 ACK HANDLE 000-20B90708 SEQ 1046003786


amandad: dgram_send_addr: sendto(0.0.0.0.875) failed: Invalid
 argument ^ amandad: sending
 REP packet:

Amanda 2.4 REP HANDLE 000-20B90708 SEQ 1046003786
ERROR [addr 0.0.0.0: hostname lookup failed]
^^^


It looks like amandad doesn't know the server's IP address. Both
 client and server run on the same machine and the machine knows
 its hostname. Any more ideas?

Thanks, Carsten
First off, 2.4.2p2 is pretty ancient these days.  Second, this looks
as if it wasn't told the host machines address when it was
configured.
There have been some changes since 2.4.2p2, most notably in how
excludes are handled, and in the support for backups to disk, but
you should probably go to the amanda.org site, and down near the
bottom of the page you'll see a link to snapshots, follow it and
get 2.4.4b1-20030220.  Then follow the build it directions.  They
are basically: become the user amanda after making amanda a member of
the group disk(or some other equally high ranking group), unpack it
in /home/amanda (makes perms easier to track), cd into the
resultant directory and configure and make it.  Then become root,
and install it.
As one needs an anchor point so that a newer version can be built to
the same configuration as the older one, thereby maintaining
continuity of its characteristics, I've been using a script to do
that configuration, one that gets copied to each new incarnation of
amanda as I unpack it.
At risk of boring the rest of the list, here it is again, modify to
suit where needed of course but read the docs for the details,
always a good idea.
gh.cf, run as ./gh.cf after setting execute perms---
#!/bin/sh
# since I'm always forgetting to su amanda...
if [ `whoami` != 'amanda' ]; then
echo
echo  Warning 
echo Amanda needs to be configured and built by the user amanda,
echo but must be installed by user root.
echo
exit 1
fi
make clean
rm -f config.status config.cache
./configure --with-user=amanda \
--with-group=disk \
--with-owner=amanda \
--with-tape-device=/dev/nst0 \
--with-changer-device=/dev/sg1 \
--with-gnu-ld --prefix=/usr/local \
--with-debugging=/tmp/amanda-dbg/ \
--with-tape-server=192.168.1.3 \
--with-amandahosts \
--with-configdir=/usr/local/etc/amanda
--
By using this script, I have built and used with only a couple of
problems, nearly every snapshot released since 2.4.2p2, probably
close to 40 snapshots since I started using amanda.
If you installed from rpm's, remove all traces of the rpm's first
before installing the tarball built version.  I'd found that when
using RH's up2date, if it thought there was a fingerprint of amanda
installed, it would cheerfully proceed to update it, thoroughly
mucking up your carefully crafted local install.  I'd expect
something similar might occur with Suse's update tool if given half
a chance.
Does your distribution use inetd, or xinetd?

HTH

--
Cheers, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz  512M
99.23% setiathome rank, not too shabby for a WV hillbilly
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE




Re: Full Backup Configuration

2003-01-17 Thread Deb Baddorf
At 05:26 PM 1/17/2003 -0500, you wrote:

On Friday 17 January 2003 16:04, DK Smith wrote:
In this discussion, talking about discounting days of the week
 etc... is there an inherent assumption here that the amdump is
 invoked once per day? Or is that not a factor in the behavior of
 the system?

That's the generally accepted practice, with amanda supposedly
compensating for the case of dumpcycle = 1 week, and
runspercycle = 5.  I think this is the general target area of the
current discussion.  But then I'm not a real expert, I just play
one here, sometimes pretty foolishly... :-)


Actually though,  the situation being discussed here is:
dumpcycle = 3
backup MTWRF  not SSu
Why does a full always come due on M  rather than rotating
through the week?

There is an inherent assumption that Saturday and Sunday
really exist ... so when you say dumpcycle of ... well, any number,
amanda continues to count Saturday and Sunday.   The question
here is whether we can ask amanda to bend reality  and ignore Sat and
Sun entirely.
We're asking to define a work week   rather than a
real time  week.   Apparently the lady (amanda) only knows
real time, just because *she* is willing to work any day of
the week!

Deb Baddorf



--
Cheers, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz  512M
99.22% setiathome rank, not too shabby for a WV hillbilly


---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: Full Backup Configuration

2003-01-17 Thread Deb Baddorf
My understanding was that the OP just wanted to run 3-out-of-7, and
I have *no* idea why that should be any problem at all.

--
Jay Lessert   [EMAIL PROTECTED]
Accelerant Networks Inc.   (voice)1.503.439.3461
Beaverton OR, USA(fax)1.503.466.9472



I wasn't the one raising the question.  I was just trying to
provide an explanation so that people understood his question.

He wanted to run backups MTWRF ,  and have a 3 day dumpcycle.
I.E.   Full dumps every 3 days.   And he couldn't understand
why his ignoring Sat  Sun caused amanda to always be due for
a full dump on Monday.   I think he wanted

FULL  tues wed  FULL fri   [sat, sun skipped]
mon  FULL  tues  wed  FULL  [ sat, sun skipped]
mon tues  FULL thur fri  [sat, sun skipped]
FULL tues wed  FULL fri  [sat, sun skipped]

thinking that he'd get a full backup every third day that
he said  amdump  and that Sat, Sun would be ignored
if he didn't call amdump those days.   But since amanda
knows about a 7 day week . she wouldn't forget them.
And I'm not sure if a config can be made, that will to do what
he wanted.

(Sorry orig user --- I've forgotten your name!)
Deb Baddorf



Re: amanda 2.4.3 RESULTS MISSING

2003-01-15 Thread Deb Baddorf
Oh yes,  the "refuses to mail the report" thing.   I've had it too.
Have no idea *why*  but the only fix I've heard about
works for me, too.

Fix: remove the   lbl-templ   line from your configuration.

The report should get emailed OK now.   Tomorrow you
can probably put the line back in,  and it will be okay
for a while.

Maybe this fix  can prompt an actual developer to recognize
what is causing this?
Deb Baddorf
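(For concreteness: the line in question lives in a tapetype definition; for instance the DLT4000-IV example elsewhere in this archive has
    lbl-templ /usr/local/etc/amanda/3hole.ps
so the workaround is to comment that line out, let the reports mail again, and restore it later.)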




And what is funnier: I forgot to change the tape yesterday, so I had to
run amflush, and the report DID get the results:

---
Subject: AMFLUSH MAIL REPORT FOR January 15, 2003
Date: Wed, 15 Jan 2003 12:04:55 +0100

The dumps were flushed to tape PRMD-002.
The next tape Amanda expects to use is: a new tape.


STATISTICS:
  Total   Full  Daily
      
Estimate Time (hrs:min)0:00
Run Time (hrs:min) 0:01
Dump Time (hrs:min)0:00   0:00   0:00
Output Size (meg)   0.00.00.0
Original Size (meg) 0.00.00.0
Avg Compressed Size (%) -- -- --
Filesystems Dumped0  0  0
Avg Dump Rate (k/s) -- -- --

Tape Time (hrs:min)0:00   0:00   0:00
Tape Size (meg) 0.20.00.2
Tape Used (%)   0.00.00.0   (level:#disks ...)
Filesystems Taped 2  0  2   (1:2)
Avg Tp Write Rate (k/s)22.0--22.0


NOTES:
  taper: tape PRMD-002 kb 160 fm 2 [OK]


DUMP SUMMARY:
 DUMPER STATSTAPER STATS
HOSTNAME DISKL ORIG-KB OUT-KB COMP% MMM:SS  KB/s MMM:SS  KB/s
-- - 
prmb /etc1 260 96  36.9   N/A   N/A0:05  18.3
prmb -ias/enfrio 1  10 64 640.0   N/A   N/A0:02  31.2

(brought to you by Amanda version 2.4.3)
---

Why does amreport fail to collect the stats when it is invoked by amdump,
but collect them fine when it is invoked by amflush?

Sergio


---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






RE: swap out a tape within the rotation

2003-01-13 Thread Deb Baddorf
At 03:41 PM 1/13/2003 -0500, wab wrote:

That's a REALLY good point.  The idea is to keep the data on this tape
forever... or at least until we're sure we will never need the data again.
Would amrestore still work, though?  If that is true, then I'm less
worried about losing the index.


How about:
amadmin config  no-reuse  tapelabel
which keeps indexes etc., but doesn't ask for this tape ever again.

Then amlabel a new tape at the end of your number sequence.
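For example, with a hypothetical config called DailySet1 and its label
scheme (substitute your own names):

   amadmin DailySet1 no-reuse DailySet117   # retire the tape you want to keep forever
   amlabel DailySet1 DailySet151            # add a fresh tape at the end of the sequence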

Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: dumpcycle - amanda with a mind of its own

2003-01-10 Thread Deb Baddorf


 it seems amanda does do a level 0 more
often than 7 days on almost every server, no matter what I put for
runcycle or dumpcycle


This  triggers a memory.  Maybe you are having the problem I had --
or something like it.


I was doing something I didn't realize was a problem.
It's a scope and nesting issue.
Perhaps other may benefit from my goof:



global file:
define dumptype AAA   {}

ERRONEOUS config file:
include global file
dumpcycle 1 week
define dumptype BBB {AAA;  then more stuff}


ERRONEOUS diskfile:
node    disk    BBB   # this is fine;  uses dumpcycle 7 days = 1 week
node2   disk2   AAA   # NOT ok;  uses some default dumpcycle of 10 days


The FIX:
config file:
 include global file
 dumpcycle 1 week
 define dumptype BBB  {AAA;  more stuff}
 define dumptype CCC  {AAA}


You may use BBB or CCC   but do not ever use AAA directly in a disklist.

It's a scope and nesting issue.
AAA has only heard of the *default*  value for dumpcycle.
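In real amanda.conf syntax the fix looks roughly like this (the names
and the include path are only illustrative):

   # configMAIN.include  (the shared global file)
   define dumptype AAA {
       program GNUTAR
       compress client fast
       index yes
   }

   # amanda.conf for this particular config
   includefile /usr/local/etc/amanda/configMAIN.include
   dumpcycle 1 week
   define dumptype BBB {
       AAA           # inherit the global dumptype *after* dumpcycle is set
       maxdumps 2
   }
   define dumptype CCC {
       AAA           # thin wrapper, so the disklist never names AAA directly
   }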

Deb Baddorf


---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE





Re: Apple file names -- NameCleaner

2003-01-08 Thread Deb Baddorf


( mac strange file names)
Any commercial software that would solve the problem?

TIA
Rick


http://www.sigsoftware.com/namecleaner/index.html

Must be run on a mac  (where the cpu can still read the names).
We use it for moving files to a Windows server,  but the description
also says it is good for going to a Unix server.   Enjoy!
Not overly expensive,   as I recall.

Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: Holding disk died: how to make amanda forget the dump?

2003-01-03 Thread Deb Baddorf
At 03:53 PM 1/3/2003 +0100, Alexander JOLK wrote:

... holding disk has bad blocks and
amflush cannot flush out the dumps.  What should I do?  Can I make
amanda forget about the disk it failed to flush out so that it will get
rescheduled?  Or forget about the whole amdump run?


amadmin  YourConfig  delete  clientname  [disks*]

This will delete the backups from amanda's records.
It might  (I dunno)  try to delete them from the holding disk too,
which is probably what you want.
See man amadmin for more details.

Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: Holding disk died: how to make amanda forget the dump?

2003-01-03 Thread Deb Baddorf
At 03:53 PM 1/3/2003 +0100, Alexander JOLK wrote:

... holding disk has bad blocks and
amflush cannot flush out the dumps.  What should I do?  Can I make
amanda forget about the disk it failed to flush out so that it will get
rescheduled?  Or forget about the whole amdump run?


amadmin  YourConfig  delete  clientname  [disks*]

This will delete the backups from amanda's records.


Then again --- re-reading the help file --- this appears to delete
ALL the backups for that client, setting it up as a new node
the next time you do backups.   That's probably *NOT*
what you want!Sorry -- scratch that suggestion.
Deb Baddorf




Re: Upgrade to 2.4.3 has hiccup

2003-01-03 Thread Deb Baddorf


 And did you put a file   exclude.gtar   in there?

Yes, I did, and that got rid of the message.  But if amanda needs this
file, why doesn't it just create it?


Amanda doesn't need it --- but you told her to look for it.
It's a file that YOU fill in,   to say what to exclude from
the gnutar backup.   If you don't want to use it,   remove
the  exclude= yada yada yada /exclude.gtar  line
from your dumptype definition.
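In other words, something like this in the dumptype definition (the path
is only an example):

   define dumptype comp-user-tar-excl {
       comp-user-tar
       exclude list /usr/local/etc/amanda/exclude.gtar   # drop this line if you don't want an exclude file
   }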

Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: tape filling algorithm? (flush, or dump)

2002-11-21 Thread Deb Baddorf


On Wed, Nov 20, 2002 at 01:47:50PM -0600, Deb Baddorf wrote:
 What kind of optimization does amanda use while flushing
 dumps to tape?

There is no optimization; it's first in, first out.
That's a needed feature.

Jean-Louis



Here's a basic algorithm  (in pseudo-code).
Is this useful to somebody who is already working on this code?
Or should I attempt to fit it into the actual   taper  code myself?

Deb Baddorf

=
This *should*  handle both the FLUSH case  (all data is available from
the start)   and the on-going AMDUMP case  (data is added as we go).
I may notice more errors but I think I got them all.

To grok  the basic algorithm  (ridiculously simple)   ignore
or white out the  SPIN  paragraphs which wait for more data to
arrive.
===

stuff = collection of items to be put into knapsack.
        Dumpfiles in this case (includes tarfiles too).
Stuff is probably a doubly(?) linked list.  Operations required:
    status  = insert(item)
    status  = remove(ptr)       # or maybe remove(item)
    ptr     = get_largest()
    #boolean = is_empty()
    boolean = has_data()        # opposite sense to is_empty()
    boolean = more_coming()     # somebody elsewhere needs to tell us when
                                # no more dumps are coming down the pipe

item
    number = size(item)
    ptr    = get_next_smaller() # return item next smaller than self
                                # (maybe should be a function of the COLLECTION
                                # instead of the item, for proper OO code)
    next_ptr                    # if you use a doubly linked list
    prev_ptr                    # if you use a doubly linked list
    string = get_name(item)     # ?? for the index, or some such.  Maybe for an
                                # error message when a dump won't fit.

knapsack    # (or, tape!)
    number  = size_remaining()
    status  = add(item)         # responsible to DELETE from holding disk after
                                # success, and from the COLLECTION above
    make_full()                 # decree it full when no available items can fit
                                # in the remaining space
    boolean = has_space()       # is not yet full

bunch-o-knapsacks   # (or, how many tapes are available?)
    ptr     = gimme_one()
    boolean = is_more()
==========================================================================

PACK_ALL_STUFF      # i.e. start taping
{
    mystuff  = new stuff()              # filled somewhere else, or here, but
                                        # before the algorithm below
    allsacks = new bunch-o-knapsacks()  # number comes from the RUNTAPES parameter

    yada yada yada
    maybe twiddle thumbs till significant data arrives, if holding disk allows

    while ( allsacks.is_more()                      # stop when we run out of sacks (tapes)
            or  (not mystuff.has_data()             # or when we run out of data
                 and not mystuff.more_coming()) )   # and know that no more is coming
    {
        this_sack = allsacks.gimme_one()

        while ( not mystuff.has_data() )    # wait for some data to arrive
        {   spin
            check more_coming in case a client dies
        }

        while ( mystuff.has_data()          # there is still data waiting to be taped
                and this_sack.has_space() ) # the sack (tape) has room left
        {
            # remember, in the timed version, a bigger one may be added during any pass
            # so, we start with the biggest each time and add the first that will fit
            fill_sack (this_sack, mystuff.get_largest())

            while ( not mystuff.has_data() )    # wait for some more data to arrive
            {   spin
                check more_coming in case a client dies
            }
        }
    }
}
==========================================================================
fill_sack(sackptr, item_ptr)
{
    if (item_ptr is NULL)       # there are no more smaller items
    {
        sackptr.make_full()     # decree sack (tape) to be full, so we can move on to another
        return
    }

    isize = item_ptr.size();  ssize = sackptr.size_remaining();   # store for possible error msg
        # or assign to a variable IN the if statement, but too messy to do in pseudo-code here!

    if ( item_ptr.size() <= sackptr.size_remaining() )   # if item fits, according to sack specs,
    {                                                    #   put it in the sack
        status = sackptr.add(item_ptr)  # add MUST delete from holding disk, if it finishes successfully
                                        # add MUST also adjust the sack's size_remaining value
                                        # it writes the INDEX, and does other bookkeeping too
                                        # Also removes ITEM from the stuff list of available dumps to flush

        if ( not status )       # failed -- tape size must've been slightly wrong.  That's life.
        {
            sackptr.make_full() # we've already written to EOT so can't try another size item
            write message "Sack (tape) wasn't as big as" isize + ssize
            write message "May want to correct sack specs (tapesize)"
        }
    }
    else    # find the next smaller item and try that one
    {
        fill_sack (sackptr, item_ptr.get_next_smaller())
    }
}
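For anyone who just wants the shape of the greedy idea without the SPIN /
waiting machinery, here is a minimal runnable sketch in Python.  It is
purely illustrative -- not taper code -- and the sizes and names are made up:

   # Greedy "largest dump that still fits" packing, as a standalone illustration.
   # dump_sizes: sizes (say, in MB) of the dumps waiting on the holding disk.
   # tape_size:  usable capacity of one tape;  runtapes: how many tapes we may use.
   def pack_tapes(dump_sizes, tape_size, runtapes):
       remaining = sorted(dump_sizes, reverse=True)      # biggest first
       tapes = []
       for _ in range(runtapes):
           if not remaining:
               break
           space = tape_size
           contents = []
           for size in list(remaining):                  # walk from largest to smallest
               if size <= space:                         # next one that fits goes in
                   contents.append(size)
                   space -= size
                   remaining.remove(size)
           tapes.append(contents)                        # this tape is "decreed full"
       return tapes, remaining                           # leftovers wait for the next run

   # Example: two 20000 MB tapes and a mix of dump sizes.
   print(pack_tapes([18000, 9000, 7000, 6000, 3000, 2000, 1000], 20000, 2))

On that example it puts [18000, 2000] on the first tape and
[9000, 7000, 3000, 1000] on the second, with the 6000 MB dump left over.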





tape filling algorithm? (flush, or dump)

2002-11-20 Thread Deb Baddorf
What kind of optimization does amanda use while flushing
dumps to tape?

While doing the dump initially,  she seems to
optimize to get the small jobs done first, and thus they
go onto tape in a similar order.   But I'm currently without
a stacker.  So only the first part of the dumps make it
to tape live  (so to speak)  and the rest get flushed as I
manually mount tapes.   I'm hoping that some kind of
knapsack-packing algorithm is used ... to fit the largest
files on the tape first, but then to add smaller ones to
fill the top of the sack.
My archival "do all the level 0's" run is taking up 5 tapes,
hence the optimization question!

Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: tape filling algorithm? (flush, or dump)

2002-11-20 Thread Deb Baddorf


  What kind of optimiaztion does amanda use while flushing
  dumps to tape?
 
  ... I'm hoping that some kind of
  knapsack-packing algorithm is used . to fit the largest
  files on the tape first,   but then to add smaller ones to
  fill the top of the sack.

Isn't the general knapsack problem one of these NP-complete cases
like the traveling salesman problem? This might be challenging
even for the Amanda developers. I don't want the packing estimate
time to explode combinatorially, it takes long enough to back
up my terabytes over 100BaseT. Perhaps FNAL has enough
compute power to throw at it, but we don't. Not yet.


LOL!   Yeah,  but I could swear I remember some simple
greedy algorithm from class,  which produces a reasonable
knapsack filling  without spending too much time doing it.
I'll think about it ...
Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: wierd amandad timeout error

2002-11-19 Thread Deb Baddorf
At 12:53 PM 11/19/2002 +, you wrote:

 amcheck -c on the server says that all my clients are fine. However,
when I run amdump, one of the clients, which worked fine previously, is
now failing.

 The (rather long) debug file in /tmp/amanda seems to indicate that it was
working fine, then timed out. Any idea why this could be ? It's always
just this host.


Are you *sure*  there aren't any firewalls in between?
If the data size on this client got large enough,  it could have bumped
over to the realm of problem.
My problem went like this:

server contacts client:  "send me estimates"
client:  thinks
client:  replies inside the link created & owned by the server -- fine.

But if the client thinks longer than T, that link dies.
Then the client has to create a new link for the reply (all done by the
lower levels of software), and the firewall may not ALLOW the client
to initiate a link.

So that's where the problem arises -- when the reply takes longer
than ... some value.

You might try splitting the client's tar files into 2 pieces -- see if
that makes it work OK again.   That would strongly suggest this kind
of situation!
Deb Baddorf

---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: tar/ not missing any new directories

2002-11-05 Thread Deb Baddorf
At 10:56 AM 10/14/2002 +0200, Toralf Lund wrote:

With tar, and some sort of a guarantee that no individual file will
exceed the tape capacity, this can be done by breaking the disklist
entries up into subdirs,

Yes, that's what I'm doing. The problem with this is that something easily 
gets left out as new directories are created.

I'm late to this conversation,  but I don't think it is possible to
leave out any new directories with a scheme like this:

define dumptype  diskA-TheRest {
   comp-user-tar   #or other local globally defined TAR dumptype
exclude list /diskA/diskA.exclude
}

on client,  /diskA/diskA.exclude contains:
fred
sally
sam
tom

on server,   disklist:

client.fqdn  /diskA   diskA-TheRest #excludes fred,sally,sam,tom
client.fqdn  /diskA/fred   comp-user-tar
client.fqdn  /diskA/sally   comp-user-tar
client.fqdn  /diskA/sam   comp-user-tar
client.fqdn  /diskA/tom   comp-user-tar

(a)  when remembered, add a new directory "deb" by editing the server's
disklist (explicitly add "deb") and the client's /diskA/diskA.exclude
(explicitly exclude "deb").
(b)  when forgotten, the new directory is already included in
diskA-TheRest, because you haven't explicitly excluded it.


Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE





Re: huge filesystems timing out on calcsize.

2002-10-24 Thread Deb Baddorf
At 08:17 AM 10/24/2002 -0400, you wrote:

On Wed, 23 Oct 2002 at 4:22pm, Alan Horn wrote

 Has anyone else encountered this problem ?


Yup.



 FAILURE AND STRANGE DUMP SUMMARY:
   releng.ink /export/releng2/shipped lev 0 FAILED [Request to
 releng.inktomi.com timed out.]

 With the disks being larger then the capacity of a single backup volume
 I'm therefore backing up subdirs using gtar. The filesystems have lots and
 lots of small files. I've increased etimeout in amanda.conf to 6000s, and
 still I get the timeouts.


In my case  (and several others that I've read about)
it's a firewall in between which is timing out.   Since the connection
is started by the server,  the connection is allowed until that
initial connect times out.   If the reply takes longer than that
timeout, the reply comes back as a new request initiated by the client --
and in my case, I needed a firewall rule to permit it.

YMMV,  but do check this if you have any firewalls!
Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: due vs dumpcycle

2002-10-11 Thread Deb Baddorf

I was doing something I didn't realize was a problem.
Perhaps other may benefit from my goof:


global file:
 define dumptype AAA   {}
ERRONEOUS config file:
 dumpcycle 1 week
 define dumptype BBB {AAA;  then more stuff}

ERRONEOUS diskfile:
 node    disk    BBB
   # this is fine;  uses dumpcycle 7 days = 1 week
 node2   disk2   AAA
   # NOT ok;  uses some generic default dumpcycle of 10 days

The FIX:
config file:
  dumpcycle 1 week
  define dumptype BBB  {AAA;  more stuff}
  define dumptype CCC  {AAA}

Use BBB or CCC   but do not ever use AAA directly in a disklist.
This wasn't obvious to me!
Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






due vs dumpcycle, runspercycle, runtapes, etc

2002-10-02 Thread Deb Baddorf


Question:   amadmin config  due
doesn't give the due dates that I expect.   What am I doing wrong?
Am I mis-interpreting some of the basic params regarding
dumpcycles, etc.?


Config values are at bottom, since lengthy.

SCENARIO:
I started a new config and all new tapes on Monday.  Did an amdump on
Monday and Tuesday  (2, two!).  Each took only 1 tape,  not 2 as allowed
(but not required).
Today is Wednesday, so I expect "due" to tell me they are due in 5 days
(by next Monday), since dumpcycle = 1 week = 7 days.
Filesystem   luthor.../export/home2   (*)  behaves as expected.

One dump was promoted and done early, to start things becoming balanced.
So  luthor.../export/home1  (**)  is not due for 6 days ... also as expected.
All good, so far.

The other file systems are not doing what *I* expect.  After 1 day, they
said due in 9 days.   After 2 days, they say due in 8 days.  It almost
seems that they are starting from  (runspercycle * runtapes)  rather
than the dumpcycle  value.

Or am I just losing my mind??

Deb

==

$ amadmin daily due
Due in  8 days: bdback.FQDN:/var
Due in  6 days: luthor.FQDN:/export/home1**
Due in  5 days: luthor.FQDN:/export/home2*
Due in  8 days: luthor.FQDN:/export/home1/neuffer
Due in  8 days: luthor.FQDN:/export/home1/syphers

===


dumpcycle 1 week# the number of days in the normal dump cycle
runspercycle 5 # the number of amdump runs in dumpcycle days
runtapes 2 #  some jobs take 2 tapes so allow up to 2 every day
tapecycle 50 tapes  # the number of tapes in rotation

 <---- do comments on the line cause problems???



# these 5 dumptypes are in a global file (configMAIN.include)
# which is included in each config's   amanda.conf
#   via includefile /usr/local/etc/amanda/configMAIN.include

define dumptype BDglobal {
 comment Global definitions
 index yes
 record yes
 priority medium
 compress client fast
}
define dumptype with-tar {
 program GNUTAR
}
define dumptype BDtar {
 BDglobal
 with-tar
 maxdumps 2
}
define dumptype BDnormal {
 BDglobal
 record yes
}
define dumptype amanda_server {
 BDglobal
 priority high
 #   so the amanda server backup is FIRST/EARLY on the tape
}




# in this config's amanda.conf

define dumptype luthor-home1 {
 BDtar
 exclude list /usr/local/ap/var/amanda/home1.exclude
}
define dumptype luthor-home2 {
 BDtar
 exclude list /usr/local/ap/var/amanda/home2.exclude
}
define interface localdisk {
 comment a local disk
 use 1000 kbps
}



# disklist

bdback.FQDN /var  amanda_server -1 localdisk

luthor.FQDN  /export/home1  luthor-home1  1
luthor.FQDN  /export/home2  luthor-home2  2

luthor.FQDN  /export/home1/neuffer BDtar  1 #100059
luthor.FQDN  /export/home1/syphers BDtar  1 #95773

---  Are my comments re: size  causing problems?   But the first one
---  bdback... /varalso doesn't work right, and it doesn't have
---  a comment on the line.




Re: dumpcycle 0 not using holding space

2002-07-25 Thread Deb Baddorf

At 02:35 PM 7/25/2002 -0400, Cory Visi wrote:
I am running an Amanda configuration intended to do a full backup every 2
weeks on 2 tapes. I need 2 tapes because I know at least one of the
partitions will not fit on the tape. I intend to run amflush to get the
last partition on the second tape. My problem is that with the current
configuration, Amanda never leaves anything in the holding space! I have
plenty of room, but there is never anything left there.

Look at the comments about holding disk, in the config file.
By default,  all the space is reserved for incremental backups,
and no fulls are stored on the holding disk  (once the first
tape is full,  I mean).

Here's the section:

# If amanda cannot find a tape on which to store backups, it will run
# as many backups as it can to the holding disks.  In order to save
# space for unattended backups, by default, amanda will only perform
# incremental backups in this case, i.e., it will reserve 100% of the
# holding disk space for the so-called degraded mode backups.
# However, if you specify a different value for the `reserve'
# parameter, amanda will not degrade backups if they will fit in the
# non-reserved portion of the holding disk.

reserve 30 # percent   ##  (this is MY value)
# This means save at least 30% of the holding disk space for degraded
# mode backups.

---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: tape error???

2002-07-25 Thread Deb Baddorf

At 01:49 PM 7/25/2002 -0700, Chris Bourne wrote:
Hello,
I need help with figuring out why amanda is giving me tape errors. .
bash-2.05$ /usr/sbin/amcheck DailySet1
-

ERROR: /dev/nst0: not an amanda tape

(expecting a new tape)

amanda wants you to label each tape before use:

amlabel DailySet1  DailySet101 #for example

tape name is specified here, from your config file:
labelstr ^DailySet1[0-9][0-9]*$   # label constraint regex: all tapes must match


Deb




amdump ignoring sendsize results

2002-06-27 Thread Deb Baddorf

Hey people --
When I add more disks to the disklist of a working test configuration,
amdump stops listening to the sendsize results.

Amdump is still waiting,   with amstatus saying  getting estimate...
the client finishes the sendsize,  tries to return the data,  and
finds nobody is listening:

amandad: dgram_recv: timeout after 10 seconds
amandad: waiting for ack: timeout, retrying
amandad: dgram_recv: timeout after 10 seconds
amandad: waiting for ack: timeout, retrying
amandad: dgram_recv: timeout after 10 seconds
amandad: waiting for ack: timeout, retrying
amandad: dgram_recv: timeout after 10 seconds
amandad: waiting for ack: timeout, retrying
amandad: dgram_recv: timeout after 10 seconds
amandad: waiting for ack: timeout, giving up!
amandad: pid 10266 finish time Thu Jun 27 11:57:04 2002

So the client's  amandad  has given up.

Meanwhile,   on the server,   amdump thinks it is still waiting
and will take another hour or so to reach the etimeout 150
and actually quit.

There are 41 disk entries on remote client,
times 2.5 minutes (150 seconds)  == 1 hr 42 minutes.
(if I understand etimeout correctly)
It took the client about 30 minutes to return the estimate,
but nobody was listening by then.   *BUT*   amdump
is still waiting.

Where do I look now,   please?  Anybody?
(Mind you,  this DID work,  with a smaller number of
disk entries.  But I needed to test the tape changer's use of more than 1
tape ... so I added more entries.   This is how I eventually want
to run it.If it'll run.)

Deb Baddorf


the bottom of amdump  (which hasn't timed out yet,
though the client found nobody to ACK it):

.  top snipped .
GETTING ESTIMATES...
driver: pid 9552 executable /usr/local/libexec/amanda/driver version 2.4.2p2
driver: send-cmd time 0.002 to taper: START-TAPER 20020627
taper: pid 9553 executable taper version 2.4.2p2
changer: opening pipe to: /usr/local/libexec/amanda/chg-multi -info
dumper: dgram_bind: socket bound to 0.0.0.0.960
dumper: pid 9562 executable dumper version 2.4.2p2, using port 960
driver: started dumper0 pid 9562
driver: started dumper1 pid 9564
driver: started dumper2 pid 9565
driver: started dumper3 pid 9566
dumper: dgram_bind: socket bound to 0.0.0.0.962
dumper: pid 9564 executable dumper version 2.4.2p2, using port 962
dumper: dgram_bind: socket bound to 0.0.0.0.963
dumper: pid 9565 executable dumper version 2.4.2p2, using port 963
dumper: dgram_bind: socket bound to 0.0.0.0.964
dumper: pid 9566 executable dumper version 2.4.2p2, using port 964
changer: got exit: 0 str: 3 7 0
changer_query: changer return was 7 0
changer_query: searchable = 0
changer_find: looking for bdbkTEST5-002 changer is searchable = 0
changer: opening pipe to: /usr/local/libexec/amanda/chg-multi -slot current
changer: got exit: 0 str: 3 /dev/nsa0
taper: slot 3: date 20020627 label bdbkTEST5-001 (active tape)
changer: opening pipe to: /usr/local/libexec/amanda/chg-multi -slot next
got result for host bdback.fnal.gov disk /var: 0 - 1492K, -1 - -1K, -1 - -1K
got result for host bdback.fnal.gov disk /usr: 0 - 388289K, -1 - -1K, -1 - -1K
got result for host bdback.fnal.gov disk /: 0 - 43240K, -1 - -1K, -1 - -1K
changer: got exit: 0 str: 4 /dev/nsa0
taper: slot 4: date 20020625 label bdbkTEST5-002 (exact label match)
taper: read label `bdbkTEST5-002' date `20020625'
taper: wrote label `bdbkTEST5-002' date `20020627'

---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: amanda and large filesystems

2002-05-22 Thread Deb Baddorf


On Wed, May 22, 2002 at 02:50:05PM -0400, Andre Gauthier wrote:

  Is there a limit to the size of a filesystem that Amanda can backup? if
  so what's the limit? Does Amanda support ext3?

At 08:27 PM 5/22/2002 +0100, Niall O Broin wrote:
Amanda doesn't do the backup as such - dump or tar does that. So if dump or
tar can handle your filesystem, amanda should be able to too. It's a
question of underlying OS limits really. As to ext3, the same applies - if
tar or dump can see it, amanda can back it up.

However,  you should point out that Amanda currently won't
split dumps over multiple tapes.   So if your dump is bigger
than your tapes are,  you have to split up the filesystem using
tar.

Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






gtar - sendsize reports no size

2002-05-22 Thread Deb Baddorf

Hi --
 I've got a test config of amanda,  which works okay with
dump.   I've switched one disklist entry to use  GNUTAR,
and can't get that to work yet.

I *think*  the problem lies in this area:

sendsize.20020522155439.debug
 ...removed...
calculating for amname '/usr', dirname '/usr'
sendsize: getting size via gnutar for /usr level 0
opening /usr/local/var/amanda/gnutar-lists/bdback.fnal.gov_usr_0.new:
 No such file or directory
calculating for amname '/var', dirname '/var'

 and it has already moved on to the next disk.
One of the amandad.*.debug files shows this disk as not having a size estimate:

/ 0 SIZE 49721
/usr 0 SIZE -1
/var 0 SIZE 2875


What parameter tells gnutar where this file should go?
(It's the first time,  so of course it won't exist,  but I imagine the
directory has to exist ... and I don't  HAVE   a  /usr/local/var...  )

I do have these parameters defined:

infofile /usr/local/etc/amanda/logs/test4/curinfo # database DIRECTORY
logdir   /usr/local/etc/amanda/logs/test4 # log directory
indexdir /usr/local/etc/amanda/logs/test4/index   # index directory
tapelist /usr/local/etc/amanda/logs/test4/tapelist# list of used tapes


Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: Question in case of disaster

2002-05-14 Thread Deb Baddorf

I could be wrong here, but:
I think the part they want you to print is only an OVERVIEW --
which disk entry and what number file it is on the tape.
NOT a listing of individual files.

If your whole disk is bad,   you don't need an index very much --
you won't want to restore individual files.   You want to restore
the whole disk.   And therefore the short list   (just the disk names)
is useful.

If I'm hearing both sides aright ... But I could be wrong!
Deb Baddorf

At 06:24 PM 5/14/2002 +0300, you wrote:
On Mon, 13 May 2002, Anthony A. D. Talltree wrote:

  There's a lot to be said for printing tape labels or case inserts that
  document the contents of each tape -- or for printing each day's results
  and keeping them in a binder.

In my case this cannot be done: I have a mission critical server, with
thousand of small files and there is an ocean between my location and
server location. In case of a disaster I have to be able to restore a huge
directory tree with more than 10,000 files within minutes or hours at
most. With a paper list and tapes that I have to get a visa and fly a day
in order to touch them this is not an option.

---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






dump incomplete (but no errors about it)

2002-04-26 Thread Deb Baddorf

I realize this problem has been mentioned before:
At 02:11 PM 1/16/2002 -0500, Stephen Hillier wrote:
  inspecting the archive of tapes
that I have, not one of the dump images was complete and valid.

Is there a known cause ( hopefully fix)  for incomplete dumps
being recorded?   I wonder if it is a timing problem, between
the incoming dump,   and the gzipping and the outgoing
tape file.   ???

I am still testing an Amanda setup before putting it into
production and replacing our existing backup scheme.
While trying a full restore to the amanda server
I discovered that one of the dumps was incomplete,  yet
had produced no errors.   It almost looks like something
stepped on the last bit of the file.   Its size is sooo close
to the good dump,   but not quite.   Recover falls off the end,
aborts, and doesn't (a) have  all the data
or  (b) correctly set file ownership.

FreeBSD 4.4-RELEASE i386
amanda 2.4.2p2

Since I have a good size holding disk,  I used  dd
to copy the bad file from its tape,   and a good file
from the day before(test machine;  virtually no activity
in between except for clock ticking).

dd if=/dev/nrs0 bs=32k of=1tape.file
dd if=1tape.file bs=32k skip=1 of=2less.header
cp 2less.header 3renamed.gz
cp 3renamed.gz 4unzipped.gz  #since gzip will over-write this one
/usr/bin/gzip -d 4unzipped.gz   # yields  4unzipped  -- a dump file

=
bdback# ls -al bad
total 831698
drwxr-xr-x  2 root  wheel512 Apr 26 15:17 .
drwxr-xr-x  6 root  wheel512 Apr 26 15:17 ..
-rw-r--r--  1 root  wheel  153255936 Apr 26 14:37 1tape.file
-rw-r--r--  1 root  wheel  153223168 Apr 26 14:42 2less.header
-rw-r--r--  1 root  wheel  153223168 Apr 26 14:43 3renamed.gz
-rw-r--r--  1 root  wheel  391475200 Apr 26 14:44 4unzipped


bdback# ls -al good
total 831690
drwxr-xr-x  2 root  wheel512 Apr 26 15:17 .
drwxr-xr-x  6 root  wheel512 Apr 26 15:17 ..
-rw-r--r--  1 root  wheel  153255936 Apr 26 15:00 1tape.file
-rw-r--r--  1 root  wheel  153223168 Apr 26 15:01 2less.header
-rw-r--r--  1 root  wheel  153223168 Apr 26 15:02 3renamed.gz
-rw-r--r--  1 root  wheel  391464960 Apr 26 15:02 4unzipped


===
None of the amanda log  or debug files  or mail reports
noticed anything wrong with the dump. Nor did amverify.

Any suggestions on how to ensure that my dumps are complete
and usable?
Deb Baddorf

---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: Can you change /tmp/amanda to a more permanent directory?

2002-02-08 Thread Deb Baddorf

At 04:38 AM 2/7/2002 -0500, Joshua Baker-LePain wrote:
On Thu, 7 Feb 2002 at 10:19am, Sascha Wuestemann wrote

  I found out, that a reboot of the amandaserver causes the night 
 following backup to fail,
  because /tmp is emptied by the system after wakeup.

.

   --with-tmpdir[=/temp/dir] area Amanda can use for temp files

If a system is erasing  /tmp   (on a reboot,  or on any schedule)
does Amanda recreate its  /tmp/amanda   directory,
or expect it to already be there?  Is that where the
problem arises?

(still reading  re-reading;   wondering if this is going to cause
me probs too when I try my first configurations)
Deb Baddorf

---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: S.O.S.

2002-01-14 Thread Deb Baddorf

At 06:42 PM 1/14/2002 -0200, you wrote:
span a single disklist entry across multiple tapes

If you are saying this is what does not function,   no  it doesn't.
It isn't supposed to.   So it is not a  question of  not functioning
but of  not designed to do this.

If you want the backup of one filesystem   to go to
more than one tape,   you have to change your dumptype.
In the sample amanda.conf file, there are dumptypes
labelled xxx-tar.  In your disklist file,
make sure you use a dumptype of xxx-tar for this
disk.  That way, it should use tar to do the backup,
and not dump.
Also, you would specify a sub-directory on the filesystem,
not the top of the disk-tree.  Pick a sub-directory which
will fit onto one tape.  Make another xxx-tar entry
for the next sub-directory, and so on; see the example disklist below.
Clearer?
Deb Baddorf

P.S.   theoretical knowledge only;   still looking for time
to setup my first amanda system



Em Seg 14 Jan 2002 17:58, Joshua Baker-LePain escreveu:
  On Mon, 14 Jan 2002 at 5:49pm, Túlio Machado de Faria wrote
 
using the parameter:
  
   program GNUTAR
  
   in amanda.conf
  
   it does not function?
 
  What does not function?
 
  AMANDA will not split a single disklist entry across multiple tapes.  I
  don't know that I can make it any clearer.
 
  If you have a filesystem/disklist entry that is bigger than your tapes,
  you need to use program GNUTAR and split the filesystem in multiple
  disklist entries.

---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE






Re: FAQ-o-MATIC

2001-10-17 Thread Deb Baddorf

At 10:34 AM 10/16/2001 -0400, Jon LaBadie wrote:
OK, we are all agreed that it is broken.

The real question is who maintains the web page www.amanda.org?

FWIW:   I have printouts of several FAQ-o-MATIC pages
dated  Sept 26, 2001   ...   so it broke more recently than
that.

Deb Baddorf
---
Deb Baddorf [EMAIL PROTECTED]  840-2289
You can't help getting older, but you don't have to get old.
- George Burns  IXOYE