Re: how to specify a linux disk

2011-07-22 Thread Frank Smith
On 07/22/2011 11:53 AM, myron wrote:
 On Jul 22, 2011, at 12:47 PM, Jean-Louis Martineau wrote:
 
 On 07/22/2011 11:37 AM, myron wrote:
 On Jul 22, 2011, at 11:14 AM, Jean-Louis Martineau wrote:

 Which amanda release are you using?

 root@errol:~# dpkg -l | grep amanda
 ii  amanda-client  1:2.6.1p1-2   Advanced Maryland Automatic Network Disk Arc
 ii  amanda-common  1:2.6.1p1-2   Advanced Maryland Automatic Network Disk Arc

 Which program/application are you using to do the backup.

 I guess it's tar or dump. Whatever is on linux? How do I find out?
 In the disklist file
 If you are using tar, it is better to specify the mount point.
 
 I guess it is tar. this is what  have in the disklist
 
 errol   /dev/md0 linux-tar

Tar deals with filesystems, so you need to either change /dev/md0
to / (or whatever directory is mounted on it), or use dump, which
deals with devices. Personally, I'd stick with tar, but others may
prefer to use dump.
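For instance, assuming /dev/md0 is mounted on /home (that mount point is a guess; check with df or mount), the disklist entry would become:

```
# disklist: give tar the mount point, not the device node
errol   /home   linux-tar
```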

Frank

 


 Post all the related client debug files.

 I don't see any debug files. The only log entry I have from this  
 morning's backup is

 Then search for amandad.*.debug and sendsize.*.debug
 amadmin version version | grep AMANDA_DBGDIR

 Jean-Louis

 


-- 
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: No Joy with 3.3.0

2011-06-03 Thread Frank Smith
On 06/03/2011 02:14 PM, Steven Backus wrote:
 I installed 3.3.0 and can't connect to any of my clients:

I'm not sure if it's your issue, but the first bullet item of the Release Notes 
is:

The default auth has changed to bsdtcp; if you are using the default bsd, then 
you must add it to your configuration:
* in amanda.conf
* in amanda-client.conf
* in dumptype/disklist
* in xinetd (if no '-auth' argument to amandad)
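As a sketch of those cases, keeping the old default would look roughly like this (the exact placement depends on your setup):

```
# amanda.conf / amanda-client.conf, or inside a dumptype:
auth "bsd"

# xinetd: keep amandad started with bsd auth, e.g.
#   server_args = -auth=bsd amdump
```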


Frank

 
 WARNING: whimsy.med.utah.edu: selfcheck request failed: Connection refused
 WARNING: genepi.med.utah.edu: selfcheck request failed: Connection refused
 WARNING: episun7.med.utah.edu: selfcheck request failed: Connection refused
 WARNING: eclectic.med.utah.edu: selfcheck request failed: Connection refused
 WARNING: grandeur.med.utah.edu: selfcheck request failed: Connection refused
 WARNING: balance.med.utah.edu: selfcheck request failed: Connection refused
 WARNING: harmony.med.utah.edu: selfcheck request failed: Connection refused
 WARNING: clarity.med.utah.edu: selfcheck request failed: Connection refused
 WARNING: symmetry.med.utah.edu: selfcheck request failed: Connection refused
 WARNING: serendipity.med.utah.edu: selfcheck request failed: Connection 
 refused
 WARNING: genepi.hci.utah.edu: selfcheck request failed: Connection refused
 WARNING: episun8.med.utah.edu: selfcheck request failed: Connection refused
 Client check: 13 hosts checked in 10.440 seconds.  12 problems found.
 
 When I go back to 3.2.3 I have no problems connecting.  Here's the
 configure line I'm using:
 
 ./configure --with-user=root --with-group=root --with-config=gen 
 --with-bsd-security --with-amandahosts --prefix=/local
 
 This is on RHEL 5.1.  Did something change that I have to re-configure?
 
 Thanks,
   Steve


-- 
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: backup very hit or miss

2009-11-18 Thread Frank Smith
Brian Cuttler wrote:
 Amanda users,
 
 The backup of this particular DLE is pretty much hit or miss,
 it'll run great for a week and then fail for a week, I haven't
 been able to discern a pattern or a reason.
 
 The amanda server is old, 2.4.4 on a Solaris/Sparc system,
 the newest client is a Solaris/x86 with a current version
 2.6.1 of amanda.
 
 The client sc1 backs up fine; we back up the ldom logical
 domain separately since it lives on a ZFS partition that is not
 available to the underlying OS.
 
 Because of the size of the dorldom1z1 partition, which is a
 non-global zone running oracle, we back it up separately from
 the other non-global zones, which run other, lighter apps.
 
 The base system sc1 backs up fine; the ldom and 3 of the 4
 non-global zones back up without error. It's just this one additional
 non-global zone that has been problematic.
 
 We are in process of upgrading the amanda server, but that will
 come along with an upgrade of the box it sits on, a fire wall
 we plan to replace.
 
   thank you,
 
   Brian
 
 - Forwarded message from Amanda on Gat0 ama...@wadsworth.org -

 
 These dumps were to tape MIMOSA15.
 The next tape Amanda expects to use is: MIMOSA16.
 
 FAILURE AND STRANGE DUMP SUMMARY:
   dorldom1   /export/zones/dorldom1z1 lev 0 FAILED [mesg read: Connection 
 timed out]
 
 
 STATISTICS:
   Total   Full  Daily
       
 Estimate Time (hrs:min)0:03
 Run Time (hrs:min) 2:13
 Dump Time (hrs:min)0:00   0:00   0:00
 Output Size (meg)   0.00.00.0
 Original Size (meg) 0.00.00.0
 Avg Compressed Size (%) -- -- -- 
 Filesystems Dumped0  0  0
 Avg Dump Rate (k/s) -- -- -- 
 
 Tape Time (hrs:min)0:00   0:00   0:00
 Tape Size (meg) 0.00.00.0
 Tape Used (%)   0.00.00.0
 Filesystems Taped 0  0  0
 Avg Tp Write Rate (k/s) -- -- -- 
 
 USAGE BY TAPE:
   Label  Time  Size  %Nb
   MIMOSA15   0:00   0.00.0 0
 
 
 FAILED AND STRANGE DUMP DETAILS:
 
 /-- dorldom1   /export/zones/dorldom1z1 lev 0 FAILED [mesg read: Connection 
 timed out]
 sendbackup: start [dorldom1:/export/zones/dorldom1z1 level 0]
 sendbackup: info BACKUP=/usr/sfw/bin/gtar
 sendbackup: info RECOVER_CMD=/bin/gzip -dc |/usr/sfw/bin/gtar -xpGf - ...
 sendbackup: info COMPRESS_SUFFIX=.gz
 sendbackup: info end
 ? /usr/sfw/bin/gtar: ./root/proc: file changed as we read it
 \
 
 
 NOTES:
   planner: Forcing full dump of dorldom1:/export/zones/dorldom1z1 as directed.
   driver: WARNING: /amanda/work: 57344000 KB requested, but only 57092025 KB 
 available.
   taper: tape MIMOSA15 kb 0 fm 0 [OK]
 
 
 DUMP SUMMARY:
   DUMPER STATSTAPER STATS 
 HOSTNAME DISK   L   ORIG-KBOUT-KB COMP% MMM:SS   KB/s MMM:SS   
 KB/s
  --- -
 dorldom -es/dorldom1z1 0 FAILED --
 
 (brought to you by Amanda version 2.4.4)
 
 - End forwarded message -
 

You're probably on the edge of either the estimate or data timeout
values (etimeout or dtimeout).  You could try increasing those
values.  I'm not familiar with Solaris zones, but if ./root/proc
is really the proc filesystem, you should exclude that from
your backup as it can be quite a rabbit hole. Try using an
exclude file or exclude list to exclude ./root/proc and see if
that speeds things up.
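A hedged sketch of both suggestions (the timeout values and exclude-file path are illustrative, not tuned numbers):

```
# amanda.conf -- give estimates and data transfers more headroom:
etimeout 600      # per-DLE estimate timeout, seconds
dtimeout 3600     # data timeout, seconds

# in the DLE's dumptype -- skip the zone's proc filesystem:
exclude list "/etc/amanda/exclude.gtar"   # file contains the line: ./root/proc
```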

-- 
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Inparallel - not paralleling

2009-11-12 Thread Frank Smith
 as well as just a df). Too little
space will cause the dumps to be serially written to tape.

Frank


--
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: very slow dumper (42.7KB/s)

2009-08-31 Thread Frank Smith
Try looking on the client while the backup is running. Could be
any of a lot of things.  Network problems (check for errors on
the NIC and the switch port), lack of CPU to run the compression,
disk I/O contention, huge numbers of files (either in aggregate
or in a single directory), or possibly even impending disk failure
(lots of read retries or a degraded RAID).
   Looking at something like 'top' during the backup should give
you an idea of whether your CPU is overloaded or if you are always
waiting for disk, and if there is some other process(es) running
that may also be trying to do a lot of disk I/O.  Your system logs
should show if you are seeing disk errors, and the output of ifconfig
or similar will show the error counts on the NIC.
   If you don't see anything obvious at first, try running your
dump program (dump or tar or whatever Amanda is configured to use)
with the output directed to /dev/null and see how long that takes,
if that is also slow then it is not the network or Amanda. Then
try it without compression to see how much that speeds things up.
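A quick way to run that isolation test is sketched below; the sample tree is created on the fly so the commands run as-is, but in practice you would point tar at the slow DLE:

```shell
# Build a small sample tree; replace "$dir/data" with the real filesystem.
dir=$(mktemp -d)
mkdir -p "$dir/data"
dd if=/dev/zero of="$dir/data/blob" bs=1024 count=512 2>/dev/null

# 1. Raw read speed -- no network, no compression, no Amanda.
#    (Write the archive to stdout and discard it; GNU tar special-cases
#    "-f /dev/null" and skips reading files, which would skew the test.)
time tar -cf - -C "$dir" data > /dev/null

# 2. The same read plus compression, to see what gzip costs.
time tar -cf - -C "$dir" data | gzip > /dev/null

echo "timing test complete"
```

If step 1 is already slow, the problem is the disk or filesystem; if only step 2 is slow, it is the compression CPU.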


Frank

Tom Robinson wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Tom Robinson wrote:
 Hi,

 I'm running amanda (2.6.0p2-1) but have an older client running
 2.4.2p2-1. On that client the full backup of a 4GB disk takes a very
 long time:

 DUMP SUMMARY:
                                        DUMPER STATS           TAPER STATS
  HOSTNAME DISK  L   ORIG-KB   OUT-KB  COMP%  MMM:SS   KB/s   MMM:SS    KB/s
  -------- ----- -- --------  -------  -----  ------  -----   ------  ------
  host     /     0   4256790  1819411   42.7  637:22   47.6    26:01  1165.9

 I'm not sure where to start looking for this bottle-neck.

 Any clues would be appreciated.
 bump
 
 - --
 
 Tom Robinson
 System Administrator
 
 MoTeC
 
 121 Merrindale Drive
 Croydon South
 3136 Victoria
 Australia
 
 T: +61 3 9761 5050
 F: +61 3 9761 5051
 M: +61 4 3268 7026
 E: tom.robin...@motec.com.au
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.5 (GNU/Linux)
 Comment: Using GnuPG with CentOS - http://enigmail.mozdev.org/
 
 iD8DBQFKnFXv+brnGjUTUjARAgcaAJ9mhqo4enmr/VSkjJ9k4n6lpUsU/ACg38ZL
 EHyS6r23vri4I7azpfEk2BY=
 =6ngI
 -END PGP SIGNATURE-
 


-- 
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: [Amanda-users] Advice needed on Linux backup strategy to LTO-4 tape

2009-08-14 Thread Frank Smith
Chris Hoogendyk wrote:
 
 Rory Campbell-Lange wrote:
 Hi Chris

 On 13/08/09, Chris Hoogendyk (hoogen...@bio.umass.edu) wrote:
   
 snip
 Typically, we set up Amanda with holding disk space.
 
 snip

 If all the storage is locally attached (actually, AoE drives storage
 units connected over Ethernet), I am hoping to avoid the disk space if I
 can write to tape fast enough. I'd like to avoid paying for up to 15TB
 of fast holding disk space if I can avoid it.
 
 So, one way would be to logically divide the storage into smaller DLE's. 
 A DLE (Disk List Entry -- http://wiki.zmanda.com/man/disklist.5.html) 
 for Amanda can be a mount point or directory. Obviously, I don't know 
 how your storage is organized; but, if you can define your DLE's as 
 separate directories on the storage device, each one of which is much 
 smaller, then you could use a smaller holding disk and still benefit 
 from Amanda's parallelism. In one of the other departments here, the 
 sysadmin has successfully divided a large array this way and is driving 
 LTO4 near top speed.
 
 Compression can be done either on the client, on the server, or on
 the tape drive. Obviously, if you use software compression, you want
 to turn off the tape drive compression. I use server side
 compression, because I have a dedicated Amanda server that can
 handle it. By not using the tape drive compression, Amanda has more
 complete information on data size and tape usage for its planning.
 If your server is more constrained than your clients, you could use
 client compression. This is specified in your dumptypes in your
 amanda.conf.
 
 I don't have any clients, so this is an interesting observation. I'll be
 trying to do software compression then, I think. The Unix backup book
 (google for amanda software compression) suggests that compression can
 be used on a per-image basis; presumably I can pass the backup data
 stream through gzip or bzip2 on the way to a tape?
 
 Amanda will do the compression for you. You define it in the dumptype in 
 amanda.conf. If you have a holding disk, then it will compress the data 
 as it goes onto the holding disk. If you don't have a holding disk, then 
 you might have issues with being able to stream a backup to tape, 
 compressing it on the fly. Even with a really fast cpu, I don't know if 
 you can maintain the throughput to drive LTO4 at a good speed.

You might want to consider configuring for client compression.  Not
only will that give you more CPU for feeding your tape, it also
minimizes network bandwidth. As usual, YMMV, it all depends on where
the bottlenecks are in your environment.
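In the dumptype, that is a single directive; a minimal sketch (the dumptype name is illustrative):

```
define dumptype comp-client-tar {
    program "GNUTAR"
    compress client fast    # gzip on the client; "client best" trades CPU for size
    index yes
}
```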

Frank


-- 
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: [Amanda-users] Cloud Backup...but to my own Data Center

2009-06-03 Thread Frank Smith
Hopifan wrote:
 Thank you for response.
 To clarify, my current setup is:
 about 30 remote offices with between 2-50GB of data each. Each office has 
 Symantec BackupExec running ($700 initial cost), each server in each location 
 has a tape drive ($800 initial cost) and 10 tapes ($300 initial cost), so 
 basically to backup these 30 offices locally cost me 30x1800=$54,000 first 
 year + admin overhead and time, etc. so the question is: what can I use to 
 backup data from these 30 offices to my central DataCenter in Wisconsin? I 
 was doing some testing backing up one of the offices using BackupExec over 
 the WAN and got a 200MB/hr transfer rate, not too good. So I need some software 
 with good compression or another algorithm to pump data over the WAN. Is Amanda 
 or Zmanda the answer?
 

If your links are slow compared to the size of your data, it may be
more efficient to use something like rsync to make a central copy of
all the remote servers, and then just back up that copy locally
using Amanda or even your existing backup software.  That way you only
have to copy the unchanging parts of your data once across the WAN,
and from then on the only WAN traffic will be new or changed blocks
of data, and it won't load your WAN to have your full tape backups
run as often as you like.
   I currently use this approach with some offsite servers and it
works well; however, I'm strictly in the Linux world and don't know
how well the Windows rsync programs (such as DeltaCopy) actually
perform.  Perhaps someone else on the list can comment on that.

Frank

-- 
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amanda keeps asking for same tape

2009-06-02 Thread Frank Smith
: free 600 if local: free 1000 
 if le0: free 400
 driver: hdisk-state time 37.821 hdisk 0: free 16080392 dumpers 0
 planner: time 167.729: got partial result for host r4p17.rmtohio.com disk 
 /usr: 0 - 41959467K, 1 - 9341565K, -1 - -2K
 planner: time 167.731: got result for host r4p17.rmtohio.com disk /usr: 0 - 
 41959467K, 1 - 9341565K, -1 - -2K
 planner: time 167.731: getting estimates took 167.596 secs
 FAILED QUEUE: empty
 DONE QUEUE:
0: r4p17.rmtohio.com /usr
 
 ANALYZING ESTIMATES...
 pondering r4p17.rmtohio.com:/usr... next_level0 -4 last_level 0 (due for 
 level 0) (picking inclevel for degraded mode) 
   picklev: last night 0, so tonight level 1
 
curr level 0 nsize 41959467 csize 22418551 total size 22418647 total_lev0 
 22418551 balanced-lev0size 22418551
 INITIAL SCHEDULE (size 22418647):
r4p17.rmtohio.com /usr pri 6 lev 0 nsize 41959467 csize 22418551
 
 DELAYING DUMPS IF NEEDED, total_size 22418647, tape length 36802560 mark 0
delay: Total size now 22418647.
 
 PROMOTING DUMPS IF NEEDED, total_lev0 22418551, balanced_size 22418551...
 planner: time 167.732: analysis took 0.000 secs
 
 GENERATING SCHEDULE:
 
 DUMP r4p17.rmtohio.com feff9ffeff7f /usr 20090601 6 0 1970:1:1:0:0:0 
 41959467 22418551 10292 2178 1 
 2009:5:29:3:2:40 9341565 4670782 4561 1024
 
 driver: flush size 0
 dump of driver schedule before start degraded mode:
 
r4p17.rmtohio.com/usr  lv 0 t 10292 s 22418592 p 6
 
 dump of driver schedule after start degraded mode:
 
r4p17.rmtohio.com/usr  lv 1 t  4561 s 22418592 p 6
 
 driver: find_diskspace: time 167.734: want 22418592 K
 driver: find_diskspace: time 167.734: find diskspace: size 22418592 hf 
 16080392 df 16079880 da 16079880 ha 16080392
 find diskspace: not enough diskspace. Left with 6338712 K
 driver: find_diskspace: time 167.734: want 22418592 K
 driver: find_diskspace: time 167.734: find diskspace: size 22418592 hf 
 16080392 df 16079880 da 16079880 ha 16080392
 find diskspace: not enough diskspace. Left with 6338712 K
 driver: find_diskspace: time 167.734: want 22418592 K
 driver: find_diskspace: time 167.734: find diskspace: size 22418592 hf 
 16080392 df 16079880 da 16079880 ha 16080392
 find diskspace: not enough diskspace. Left with 6338712 K
 driver: find_diskspace: time 167.734: want 22418592 K
 driver: find_diskspace: time 167.734: find diskspace: size 22418592 hf 
 16080392 df 16079880 da 16079880 ha 16080392
 find diskspace: not enough diskspace. Left with 6338712 K
 driver: state time 167.734 free kps: 2000 space: 16080392 taper: DOWN 
 idle-dumpers: 4 qlen tapeq: 0 runq: 0 roomq: 0 
 wakeup: 0 driver-idle: no-diskspace
 driver: interface-state time 167.734 if default: free 600 if local: free 1000 
 if le0: free 400
 driver: hdisk-state time 167.734 hdisk 0: free 16080392 dumpers 0
 driver: QUITTING time 167.735 telling children to quit
 driver: send-cmd time 167.735 to dumper0: QUIT
 driver: send-cmd time 167.735 to dumper1: QUIT
 driver: send-cmd time 167.736 to dumper2: QUIT
 driver: send-cmd time 167.737 to dumper3: QUIT
 driver: send-cmd time 167.737 to taper: QUIT
 taper: DONE [idle wait: 129.916 secs]
 driver: FINISHED time 168.740
 amdump: end at Mon Jun  1 23:02:49 EDT 2009
 Scanning /usr/dumps/amanda...
 0
 0
 0
 0
 0
 0
 0
 


-- 
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Amanda must be run as user amandabackup when using bsdtcp authentication

2009-05-18 Thread Frank Smith
Abilio Carvalho wrote:
 Anyone? I'm kinda lost at a problem that seems so basic, and I could  
 really use a second pair of eyes. As far as I can see, all config  
 files are correct, permissions on /tmp and /var amanda subdirectories  
 are fine, etc. It's NOT just a case of me overlooking something that  
 can be found on the 15 mins amanda backup doc. At least, I don't think  
 so.
 
 Thanks
 
 Abilio
 
 
 On May 18, 2009, at 5:30 PM, Abilio Carvalho wrote:
 
 I'm getting this when I run amcheck -c, but I don't know what I'm
 missing.

 Here's the relevant data:

 Server works: I have 6 hosts configured, only 1 is failing. Client
 that fails is a new install I'm doing atm.

 amanda was configured with: ./configure --without-server
 --with-user=amandabackup --with-group=disk --prefix= --with-amandahosts
 --without-ipv6

 /var/lib/amanda and the .amandahosts in it is owned by  
 amandabackup:disk

 output of inetadm -l svc:/network/amanda/tcp is:

 SCOPE    NAME=VALUE
  name=amanda
  endpoint_type=stream
  proto=tcp
  isrpc=FALSE
  wait=FALSE
  exec=/libexec/amanda/amandad -auth=bsdtcp
  user=amandabackup
 default  bind_addr=
 default  bind_fail_max=-1
 default  bind_fail_interval=-1
 default  max_con_rate=-1
 default  max_copies=-1
 default  con_rate_offline=-1
 default  failrate_cnt=40
 default  failrate_interval=60
 default  inherit_env=TRUE
 default  tcp_trace=FALSE
 default  tcp_wrappers=FALSE


 Can you help me figure out what unbelievably basic thing I missed?
 Thanks


Shouldn't the 'wait' parameter be 'true'?

Frank


-- 
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Amanda and dual tape libraries.

2009-05-14 Thread Frank Smith
Chris Hoogendyk wrote:
 
 Onotsky, Steve x55328 wrote:
 I agree, but the caveat is that the planner will do its darndest to make 
 full use of the extended capacity of the LTO4 cartridge.

 In my case, our backups went from between 5 and 8 hours with LTO2 tapes to 
 well over 24 hours in some cases with the LOT4s - same DLEs.  It took some 
 fancy footwork to get it to a reasonable window (about the same length of 
 time as with the 2s but some of the larger DLEs are forced to incremental on 
 weekdays).  This is so we can get the cartridges ready for pickup by our 
 offsite storage provider.
 I don't get that at all. Doesn't make sense. Were there other things you 
 changed? Or did you previously have much more designated for backup than your 
 system and/or configuration could handle and the planner was always falling 
 back? Maybe you had a setup that would result in 800G per day if it could, 
 but you had a runtapes of 1, so Amanda was constantly forced to back up less 
 than what it should have? Then you gave it LTO4 and it began to do what it 
 should have all along?
 
 In my own case, I use much less than the capacity of my tapes. The 
 designation of a dumpcycle of 1 week and runspercycle of 7 means I basically 
 get 1 full per DLE per week, though that may vary slightly. I don't get how 
 having a larger tape is going to affect that at all.
 

Amanda gets wonky if your tape is much larger than your backups, or
at least it does with 2.5.1p2.  On one configuration with a 50GB
tape (that can hold about two weeks of daily backups), Amanda was
constantly promoting fulls trying to level out tape usage, but I
really didn't want full backups every 2 or 3 days.  If I had, I
would have set dumpcycle to 2 or 3 days, but it was set to 5.
  Since I was trying to fill tapes by leaving dumps on the holding
disk and only occasionally flushing them to tape, I really didn't
want all the incrementals constantly getting promoted to fulls, wasting
my holding disk space. Eventually I just forced it by setting
maxpromoteday to 4 or 5 days, which solved the problem for me.
  Perhaps part of the issue was the result of having a couple of
relatively larger DLEs and several smaller ones, making level
tape usage impossible, no matter how they were arranged.
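For reference, the setting mentioned above is a single amanda.conf line:

```
# amanda.conf -- don't promote fulls more than this many days early:
maxpromoteday 5
```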


Frank


-- 
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Backing up to CDR

2009-05-11 Thread Frank Smith
Dustin J. Mitchell wrote:
 On Mon, May 11, 2009 at 12:39 PM, Matt Burkhardt m...@imparisystems.com 
 wrote:
 I'm doing some pro bono work for our local Boys and Girls Club and
 helping them set up a server.  We're using Amanda for a backup and
 trying to back everything up to a CDR.  I thought there was a device for
 it or a cdrtaper utility or something.  I've tried googling it, but no
 luck and was hoping someone might know...
 
 I think you're looking for this:
   http://www.tivano.de/software/amanda/Installation.shtml
 
 I *seriously* doubt the thing works anymore, though.  I contacted the
 authors a while back to see if they were interested in writing a
 CD-RW Device and the answer was not quite NO, but I doubt there's
 been any work done.
 
 This would be a good project for someone with a bit of familiarity with C.
 
 Dustin

If that code no longer works, one option might be to back up to suitably
sized vtapes, and periodically burn your vtapes to CD.  In order to
make the restores simpler, perhaps after burning to CD (and verifying
the copy) you could make the vtape a symlink to your CD drive.
  I've never tried doing this, so it might not be as simple as I
imagine it.
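A rough sketch of what CD-sized vtapes might look like in amanda.conf (paths and the tapetype name are illustrative, and as noted above this approach is untested):

```
# amanda.conf -- vtapes small enough to burn to a 650 MB CD:
define tapetype CD-SIZED {
    length 650 mbytes
}
tapetype CD-SIZED
tapedev "file:/backup/vtapes/slot1"
```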

Frank



-- 
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: [Amanda-users] best way to organise tape file listings ?

2009-02-19 Thread Frank Smith
rory_f wrote:
 Dustin J. Mitchell wrote:
 
 how hard would it be to migrate our naming convention over to match
 our labels ? is there any way of doing it?
 

If you're talking about changing barcodes to match Amanda's tape label,
couldn't you just replace the barcodes on the tapes with ones matching
your labels (possibly with a preceding digit added to ensure uniqueness
(might have to tweak your labelstr regex to support it)? That way you
wouldn't need to make any changes to Amanda except your barcodes file
(if you configured your changer script to use one).
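For example, if the new barcodes gain a leading digit, the labelstr might be loosened like this (the DAILY prefix is illustrative):

```
# amanda.conf -- allow an optional leading digit before the label:
labelstr "^[0-9]?DAILY-[0-9][0-9]*$"
```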

Frank

-- 
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: changer problems with 2.6.1

2009-02-18 Thread Frank Smith
Quick and dirty fix is to add the directory where the Amanda libraries
are to your /etc/ld.so.conf and run ldconfig.
   Better (IMHO) option is to recompile Amanda either statically linked
or with runtime library paths.  Otherwise you risk the problem of other
programs potentially linking in the wrong libraries (although probably
not an issue if the only ones there are the Amanda libraries, since the
libam* namespace is not too likely to collide with other common libraries).
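The quick fix amounts to two steps (the library directory is an assumption; use wherever your build installed the libam* files):

```
# 1. Add the Amanda library directory to the loader search path,
#    e.g. in /etc/ld.so.conf (or a file under /etc/ld.so.conf.d/):
/usr/local/lib/amanda

# 2. Then, as root, rebuild the linker cache:
#    ldconfig
```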

Frank

stan wrote:
 On Wed, Feb 18, 2009 at 03:23:15PM -0500, Dustin J. Mitchell wrote:
 On Wed, Feb 18, 2009 at 3:06 PM, stan st...@panix.com wrote:
 OK, now we have uncovered what is probably the root cause of my problems:
 Yay!

 It exists and seems to have a set of permissions that I would think would
 work.
 Try running 'ldd' on it -- perhaps it cannot find the amanda libraries
 it links against?
 
 Seems that you are on the right track:
 
 ama...@amanda:/opt/amanda.2.6.1_02172009/sbin$ ldd
 /usr/local/share/perl/5.8.8/auto/Amanda/Types/libTypes.so
  libamglue.so => not found
    libamanda-2.6.1.so => not found
    libm.so.6 => /lib/libm.so.6 (0x2b7c9f05f000)
    libgmodule-2.0.so.0 => /usr/lib/libgmodule-2.0.so.0 (0x2b7c9f2e)
    libdl.so.2 => /lib/libdl.so.2 (0x2b7c9f4e3000)
    libgobject-2.0.so.0 => /usr/lib/libgobject-2.0.so.0 (0x2b7c9f6e8000)
    libgthread-2.0.so.0 => /usr/lib/libgthread-2.0.so.0 (0x2b7c9f92c000)
    librt.so.1 => /lib/librt.so.1 (0x2b7c9fb3)
    libglib-2.0.so.0 => /usr/lib/libglib-2.0.so.0 (0x2b7c9fd3a000)
    libnsl.so.1 => /lib/libnsl.so.1 (0x2b7ca0007000)
    /lib/libresolv.so.2 (0x2b7ca022)
    libpthread.so.0 => /lib/libpthread.so.0 (0x2b7ca0436000)
    libc.so.6 => /lib/libc.so.6 (0x2b7ca0651000)
    /lib64/ld-linux-x86-64.so.2 (0x4000)
 
 Any thoughts as to how to correct this?


-- 
Frank Smith  fsm...@hoovers.com
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Device mapper doesn't seem to be guilty, all disk majors are still 8, but...

2008-11-09 Thread Frank Smith
My Debian lenny system has tar 1.20, and tar --help shows that as
a run-time option:

  --no-check-device  do not check device numbers when creating
 incremental archives
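Since it's a run-time flag, one way to hand it to an Amanda build that only knows a fixed GNUTAR path is a small wrapper script. The wrapper below is written to a temp file so the sketch runs as-is; in practice you would install it somewhere permanent and point Amanda's GNUTAR setting at it (this assumes GNU tar 1.20+ on the client):

```shell
# Write a wrapper that always passes --no-check-device to GNU tar.
wrapper=$(mktemp)
cat > "$wrapper" <<'EOF'
#!/bin/sh
exec tar --no-check-device "$@"
EOF
chmod +x "$wrapper"

# Sanity check: the wrapper still runs tar (prints the tar version banner).
"$wrapper" --version | head -n 1
```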

Frank


Toomas Aas wrote:
 Gene Heskett wrote:
 
 I can't find anything in the latest amanda, but tar-1.20's NEWS file says it 
 now has a --no-check-device configuration option.

 Unforch:
 [EMAIL PROTECTED] tar-1.20]$ ./configure --no-check-device
 configure: error: unrecognized option: --no-check-device
 Try `./configure --help' for more information.

 Same if root.  What the heck?

 
 I haven't looked at tar 1.20 myself, but I guess --no-check-device is a 
 run-time, not configure-time option.
 
 --
 Toomas
 ... Kindred: Fear that relatives are coming to stay.


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Errors on labeling tape

2008-09-18 Thread Frank Smith
Prashant Ramhit wrote:
 Hi All,
 
 I am doing a daily backup of my system on an LTO-4 tape with a single 
 tape drive.
 When labelling the tape, I am receiving the errors described below.
 
 Someone please give me a few notes on what is wrong.
 
 Kind regards,
 Prashant
 
 
 
 
 ###  Errors ##
 
 [EMAIL PROTECTED]:/etc/amanda/fullback# amlabel fullback FULLBACK-01
 /etc/amanda/fullback/amanda.conf, line 22: configuration keyword expected
 /etc/amanda/fullback/amanda.conf, line 22: end of line is expected
 /etc/amanda/fullback/amanda.conf, line 24: configuration keyword expected
 /etc/amanda/fullback/amanda.conf, line 24: end of line is expected
 /etc/amanda/fullback/amanda.conf, line 25: configuration keyword expected
 /etc/amanda/fullback/amanda.conf, line 27: configuration keyword expected
 /etc/amanda/fullback/amanda.conf, line 27: end of line is expected
 /etc/amanda/fullback/amanda.conf, line 42: dumptype parameter expected
 /etc/amanda/fullback/amanda.conf, line 42: end of line is expected
 /etc/amanda/fullback/amanda.conf, line 43: dumptype parameter expected
 /etc/amanda/fullback/amanda.conf, line 43: end of line is expected
 /etc/amanda/fullback/amanda.conf, line 44: dumptype parameter expected
 amlabel: errors processing config file /etc/amanda/fullback/amanda.conf
 [EMAIL PROTECTED]:/etc/amanda/fullback#

What version of Amanda are you running?  Line 22 of your config, which
seems to be causing the error, is "diskdir /var/tmp/amanda", and diskdir
doesn't seem to be a valid parameter in my version (and probably not in
yours either, since it is showing an error).  Is this a new install or
a currently running one, perhaps upgraded using the old config?
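For what it's worth, diskdir/disksize look like the old holding-disk syntax; in recent releases the equivalent is a holdingdisk block, roughly (reusing the paths from the posted config):

```
# amanda.conf -- replaces "diskdir ..." and "disksize ...":
holdingdisk hd1 {
    directory "/var/tmp/amanda"
    use 2000 mbytes
}
```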

Frank

 
 
  My Config ###
 
 
 [EMAIL PROTECTED]:/etc/amanda/fullback# more amanda.conf
 org Project Data - Daily Tape
 mailto root
 dumpuser amanda
 inparallel 4
 netusage 600
 
 # a filesystem is due for a full backup once every day
 dumpcycle 0 days
 tapecycle 10 tapes
 runspercycle 1
 runtapes 1
 
 bumpsize 20 MB
 bumpdays 1
 bumpmult 4
 
 tapedev /dev/nst0
 
 tapetype LTO-4
 labelstr ^FULLBACK-[0-9][0-9]*$
 
 diskdir /var/tmp/amanda
 disksize 2000 MB
 infofile /var/lib/amanda/fullback/curinfo
 logfile /var/log/amanda/fullback/log
 
 indexdir /var/lib/amanda/fullback/index
 
 define tapetype LTO-4 {
comment HP LTO4 800Gb - Compression Off
length 802816 mbytes
filemark 0 kbytes
speed 52616 kps
 }
 
 define dumptype root-tar {
 program GNUTAR
 comment root partition dump with tar
 options index, exclude-list /etc/amanda/fullback/fullback.exclude
 priority high
 }
 
 ### Tape Status ###
 
 
 [EMAIL PROTECTED]:/etc/amanda/fullback# mt -f /dev/nst0 status
 SCSI 2 tape drive:
 File number=0, block number=0, partition=0.
 Tape block size 0 bytes. Density code 0x46 (no translation).
 Soft error count since last status=0
 General status bits on (4101):
  BOT ONLINE IM_REP_EN
 [EMAIL PROTECTED]:/etc/amanda/fullback#
 
 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Theoretical question

2008-08-09 Thread Frank Smith
Aaron J. Grier wrote:
 On Fri, Aug 08, 2008 at 03:38:25PM -0700, Steffan Vigano wrote:
 the LTO tapes only hold 100GB natively, and you can't backup a file
 system with dump that's larger then the actual capacity of the tape.
 [...]
 A) Was there any truth to his original statement?
 
 only if compression is not used, and the partitions are full.  (or if
 compression is used, and the partitions are full of non-compressible
 data.)

In my younger days I did manual backups using dump, and when it
hit EOT it would ask for a new tape and wait for you to change tapes
and hit y to continue.  Made full backups of large filesystems
a pain, as you couldn't just start it from cron and let it run (well,
actually I did: I had it kick off on Saturday morning at a time
that would let it be finishing the first tape about the time I
planned to stop by to change the tape.  Fortunately it fit on
two tapes, so I wasn't stuck there for hours).
  Originally, Amanda was not able to back up any filesystem/directory
larger than the capacity of a single tape. The ability to span
a dump image across multiple tapes was added to Amanda a while
back.  Amanda has always had the ability to use tar instead
of dump, enabling you to back up subdirectories that would fit on
a single tape.
   Personally, I feel that having a multiple tape dump image just
increases the odds of a failed restore, as well as making bare
metal recovery without Amanda more difficult. YMMV.
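For completeness, in releases that support spanning it is enabled per dumptype, roughly like this (the sizes and buffer path are illustrative):

```
# dumptype fragment -- let a dump image span tapes in fixed chunks:
tape_splitsize 5 gbytes
split_diskbuffer "/holding/split"     # buffer area for re-sending a chunk
fallback_splitsize 64 mbytes          # used if the buffer is unavailable
```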

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: problem with amflush?

2008-08-07 Thread Frank Smith
Andre Brandt wrote:
 Sure :)
 
 Logfile:
 
 DISK amflush test.mynet /data
 DISK amflush srv1.mynet /data
 DISK amflush srv1.mynet /etc
 DISK amflush srv1.mynet /var
 START amflush date 20080807082315
 START driver date 20080807082315
 STATS driver startup time 0.012
 START taper datestamp 20080807082315 label DailySet1-001 tape 0
 SUCCESS taper test.mynet /data 20080729 1 [sec 6.938 kb 2944 kps 424.3 {wr: 
 writers 92 rdwait 0.000 wrwait 0.036 filemark 6.901}]
 SUCCESS taper srv1.mynet /data 20080729 2 [sec 34.533 kb 641856 kps 18586.6 
 {wr: writers 20058 rdwait 0.743 wrwait 29.223 filemark 4.506}]
 SUCCESS taper srv1.mynet /var 20080729 2 [sec 27.319 kb 517664 kps 18948.8 
 {wr: writers 16177 rdwait 0.691 wrwait 24.682 filemark 1.895}]
 SUCCESS taper srv1.mynet /etc 20080729 1 [sec 1.909 kb 96 kps 50.3 {wr:
 writers 3 rdwait 0.000 wrwait 0.004 filemark 1.905}]
 INFO taper tape DailySet1-001 kb 1162560 fm 4 [OK]
 
 
 
 Report mail:
 
 *** THE DUMPS DID NOT FINISH PROPERLY!
 
 The dumps were flushed to tape DailySet1-001.
 The next tape Amanda expects to use is: DailySet1-002.
 
 
 STATISTICS:
   Total   Full  Incr.
       
 Estimate Time (hrs:min)0:00
 Run Time (hrs:min) 0:01
 Dump Time (hrs:min)0:00   0:00   0:00
 Output Size (meg)   0.00.00.0
 Original Size (meg) 0.00.00.0
 Avg Compressed Size (%) -- -- -- 
 Filesystems Dumped0  0  0
 Avg Dump Rate (k/s) -- -- -- 
 
 Tape Time (hrs:min)0:01   0:00   0:01
 Tape Size (meg)  1135.30.0 1135.3
 Tape Used (%)   0.60.00.6   (level:#disks ...)
 Filesystems Taped 4  0  4   (1:2 2:2)
 
 Chunks Taped  0  0  0
 Avg Tp Write Rate (k/s) 16443.8-- 16443.8
 
 USAGE BY TAPE:
   Label  Time  Size  %NbNc
   DailySet1-001   0:01  1162560k0.6 4 0
 
 
 NOTES:
   taper: tape DailySet1-001 kb 1162560 fm 4 [OK]
 
 
 DUMP SUMMARY:
DUMPER STATS   TAPER STATS 
 HOSTNAME DISKL ORIG-kB  OUT-kB  COMP%  MMM:SS   KB/s MMM:SS   KB/s
 -- - -
 test.mynet   /data   1 N/A2944-- N/AN/A0:07  424.3
 srv1.mynet   /data   2 1651560  641856   38.9N/AN/A0:35 
 18586.6
 srv1.mynet   /etc1 N/A  96-- N/AN/A0:02   50.3
 srv1.mynet   /var2  836610  517664   61.9N/AN/A0:27 
 18948.8
 
 (brought to you by Amanda version 2.5.1p1)
 
 
 Any ideas?
 

Any chance you have an incomplete dump image in your holdingdisk?
It may be telling you that all the complete dumps were flushed to
tape, but incomplete ones were not.  When that has happened  to
me, I've just had amflush offer me those dates to flush, but when
I selected them it would write nothing to tape and tell me the
flush failed.  At least in the older versions I have used, parts
of Amanda just check for the existence of subdirectories in your
holdingdisk, while other parts actually check to see if they
contain valid (complete) dumps.

Frank


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Dump Write

2008-07-09 Thread Frank Smith
Dustin J. Mitchell wrote:
 On Wed, Jul 9, 2008 at 5:25 PM, Mike Fahey [EMAIL PROTECTED] wrote:
 Can I force amanda to write the dumps it has finished while it is still
 dumping?
 
 Amanda already does that!  Note that Amanda can only write one dump to
 the device (even hard disk) at any given time.  Still, that should go
 pretty quickly.
 
 Can you give some more evidence that the taper isn't starting until
 all of the dumps are finished?

Not using a holding disk would cause that behavior, as would a setting
of 'inparallel 1'.

Frank

 
 Dustin
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: speeding up estimates

2008-07-02 Thread Frank Smith
John Heim wrote:
 I'm having trouble making backups because the estimates are taking too long. 
 I inherited an amanda backup setup and my predecessor had already increased 
 the etimeout to 1800. I doubled it to 3600 and I was still getting timeout 
 warnings at the top of the email when I did an amdump. So I doubled it again 
 and now it's been running for 3 days.
 
 I saw in the amanda.conf man page that there's an 'estimate' directive. 
 Values are client, calcsize, and server. But amdump complains when I add the 
 directive to amanda.conf, saying "configuration keyword expected". I'm 
 pretty sure the spelling is right since I cut and pasted right from the man 
 page.
 # Do faster estimates
 estimate calcsize

Did you put it in a dumptype definition where it belongs, or did you
try it as a global option where it can't work?  If you just search
for estimate in the man page it's easy to overlook that it is only
valid inside a dumptype definition.
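For reference, the directive belongs inside a dumptype definition, roughly like this (the dumptype name and the other options shown are only illustrative, not taken from the poster's config):

```
define dumptype comp-tar-fast-est {
    program "GNUTAR"
    compress client fast
    estimate calcsize    # faster, somewhat less accurate estimates
}
```

Any DLE using that dumptype will then estimate with calcsize instead of a dry-run of the dump program.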
 
 Questions:
 1. Is something wrong with my setup that it takes that long to do the 
 estimates?
Possibly, but more likely you just have busy disks and/or a filesystem
with a lot of small files.
 2. Can I speed up the estimates?
Use 'estimate calcsize'.

Good luck,
Frank


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Release of Amanda-6.2

2008-04-01 Thread Frank Smith
Jean-Francois Malouin wrote:
 * Jean-Louis Martineau [EMAIL PROTECTED] [20080401 14:33]:
 Hello,

 The Amanda core team is pleased to announce the release of Amanda 6.2.
 
 It takes courage to release on April 1st!
 And I suppose this is meant to be 2.6 :)
 
 I'd love to test but how to I get glib  2.2.0? On my production
 systems running Debian/Etch all I have is glib 2.12.4-2 and I don't
 see any backport for it. Even Sid is at 2.16 I believe.

2.12 is newer than 2.2.  Think of it as twelve vs. two, not as the
decimal fractions .12 vs. .2.
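The comparison is easy to check mechanically; for instance, GNU coreutils `sort -V` orders version strings component by component:

```shell
# GNU sort -V compares versions numerically per component,
# so 2.12 sorts after 2.2 (twelve vs. two, not 0.12 vs. 0.2).
printf '2.2.0\n2.12.4\n2.16\n' | sort -V
# prints: 2.2.0  2.12.4  2.16  (oldest to newest)
```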

Debian sarge even has libc 2.3.2, which is greater than 2.2.0.

Frank

 
 Where can I get a version that meets the requirement to compile
 amanda-2.6 so that I can start testing it in my environment?
 
 jf
 

 Source tarballs are available from

* http://www.amanda.org
* https://sourceforge.net/project/showfiles.php?group_id=120

 Binaries for many systems are available from

* http://www.zmanda.com/download-amanda.php

 Documentation can be found at

* http://wiki.zmanda.com



 Here's a list of the changes for release 6.2 (from the NEWS file):
 Look at the ReleaseNotes and ChangeLog file for more details.

* configure --disable-shared doesn't work because perl modules
  require shared libraries.  Use configure --with-static-binaries to
  build statically linked binaries.
* New --enable-kitchen-sink flag: calls the plumber when your kitchen
  sink is backed up.
* 'amverify' and 'amverifyrun' are deprecated and replaced with the
  new, more flexible 'amcheckdump'.
* 'amdd' and 'ammt' are deprecated.
* Some Amanda files are now installed in new amanda/
  subdirectories: libraries are now installed in $libdir/amanda and
  internal programs are now installed in $libexecdir/amanda. If you
  mix 2.6.0 and earlier version with rsh/ssh auth, you need to add
  an 'amandad_path' to the dumptype and to amanda-client.conf.
* The amandates file, previously at /etc/amandates, is now at
  $localstatedir/amanda/amandates.  You may want to move your
  existing /etc/amandates when you upgrade Amanda.
* New 'amcryptsimple', 'amgpgcrypt' - encryption plugins based on gpg.
* New amenigma - WWII enigma machine encryption.  Super duper secure.
* New 'amserverconfig', 'amaddclient' - initial Amanda configuration
  tools. These tools make assumptions; please see the man page.
* Many bugs fixed and code rewrite/cleanup
  o Speedup in 'amrecover find' and starting amrecover.
* glib is required to compile and run amanda.
* Device API: pluggable interface to storage devices, supporting
  tapes, vtapes, RAIT, and Amazon S3
* New perl modules link directly to Amanda, to support writing
  Amanda applications in Perl. Perl modules are installed by default
  in the perl installsitelib directory. It can be changed with
  'configure --with-amperldir'.
* New 'local' security driver supports backups of the amanda server
  without any network connection or other configuration.
* Almost 200 unit tests are available via 'make installcheck'.
* Amanda configuration file changes
  o amanda.conf changes
+ flush-threshold-dumped
+ flush-threshold-scheduled
+ taperflush
+ device_property
+ usetimestamps defaults to yes


 Jean-Louis Martineau

 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amanda server cannot back itself up

2008-03-20 Thread Frank Smith
Byarlay, Wayne A. wrote:
 I know I'm missing something simple here; but I have beat my head
 against this for hours. Here's the deal:
 
 I have a RHEL 4 machine 2.6.9-42.0.10.EL (Slightly older kernel due to
 Apple XServe SCSI Drivers). I have installed Amanda 2.5.2p1 server, it
 works; several other clients have connected  I've been happily backing
 them up for weeks. But, I have never gotten this machine to back ITSELF
 up!
 
 Here's the error message, I'm sure you've seen it a thousand times:
 Amanda Tape Server Host Check
 -
 Holding disk /data/amanda/dumps: 4538211200 KB disk space available,
 using 4538108800 KB
 slot 7: read label `DailySet1-07', date `20080225'
 NOTE: skipping tape-writable test
 Tape DailySet1-07 label ok
 NOTE: host info dir /etc/amanda/DailySet1/curinfo/myserver.mydomain does
 not exist
 NOTE: it will be created on the next run.
 Server check took 0.085 seconds
 
 Amanda Backup Client Hosts Check
 
 WARNING: myserver.mydomain: selfcheck request failed: timeout waiting
 for ACK
 Client check: 5 hosts checked in 30.097 seconds, 1 problem found
 
 
 Likewise, trying to run amrestore as root, , I get:
 
 # amrecover
 AMRECOVER Version 2.5.2p1. Contacting server on myserver.mydomain ...
 [request failed: timeout waiting for ACK]
 
 
 
 I checked /etc/hosts. Itself is there. i.e.,
 
 127.0.0.1 localhost.localdomain   localhost
 1.2.3.4   myserver.mydomain   myserver -- I know these
 are correct even though I obscured it in this e-mail. (They match the
 correct IP, anyway.)
 .
 .
 Etc.
 
 I'm guessing /etc/hosts.allow doesn't need to be touched, since my other
 server didn't.
 
 I checked /etc/services. It does have the following:
 
 #grep amanda /etc/services
 amanda  10080/tcp   # amanda backup services
 amanda  10080/udp   # amanda backup services
 kamanda 10081/tcp   # amanda backup services
 (Kerberos)
 kamanda 10081/udp   # amanda backup services
 (Kerberos)
 amandaidx   10082/tcp   # amanda backup services
 amidxtape   10083/tcp   # amanda backup services
 
 Amanda was installed on the client using the username amandabackup. 
 
 I checked /home/amandabackup/.amandahosts; it looks like this:
 
 
 Myserver.mydomain amandabackup
 amandahost amanda
 amandahost.localdomain Amanda
 
 This directly matches my OTHER Linux RHEL machine that works!
 
 Uuugh! Any ideas?

Did you remember to add the amanda services into your (x)inetd conf
and restart the (x)inetd service?

Frank

 
 WAB


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amrecover

2007-12-07 Thread Frank Smith
Krahn, Anderson wrote:
 Ahh that makes sense now, the DLE is about 
 /dev/vx/dsk/rootdg/qadb01_backup
   110G  103G  7.3G  94% /backup
 103gig the Whole DLE that is, what is the preferred method for getting a
 few files from a DLE entry?
 Anderson

One method is to split your DLE into smaller pieces such as subdirectories.
There are some postings in the group archives about how to do this
while still ensuring that newly created subdirectories will also get
backed up even if you forget to add them as a DLE.
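As a hypothetical sketch of such a split (hostname, paths, and dumptype names are invented; the exclude-based dumptype would have to be defined in amanda.conf):

```
# Instead of one huge DLE:
#   srv1.example.com  /backup                 comp-user-tar
# split it, with a catch-all entry that excludes the big subdirectory:
srv1.example.com  /backup/qadb01_backup   comp-user-tar
srv1.example.com  /backup                 comp-user-tar-no-qadb01
```

The catch-all entry picks up anything new under /backup, so a forgotten subdirectory is still backed up, just not in its own image.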
   Otherwise just hope your files are near the beginning, since then
you will have your data restored and can use it long before tar
scans through the entire DLE on tape (looking for other copies of
the same file that won't be there since Amanda doesn't append to tar
archives).
Frank



-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: IBM 3584 and mtx?

2007-11-27 Thread Frank Smith
Jean-Francois Malouin wrote:
 So from the lack of replies I gather that I'm alone on this...
 

It probably just means nobody on the list has an IBM 3584.  However,
the top link on a Google search for IBM 3584 mtx returned a page
with this link:
http://www.ibm.com/developerworks/forums/message.jspa?messageID=13745192
   I reached IBM 3584 functions through mtx prog such as
   mtx -f /dev/sg1 status (all the other options work fine
   as well). I compiled kernel 2.6.12 with sg (scsi generic
   and (scsi tape)st. It works well with amanda backup system.
   I don't use IBMtape.o.

Frank

 jf
 
 * Jean-Francois Malouin [EMAIL PROTECTED] [20071123 10:50]:
 Hi,

 Really this does not pertain to the amanda list but before I go deeper
 in my search I was wondering if anyone has any experience and
 recommendations wrt to an IBM 3584 tape library: is it possible to
 drive the robot with mtx? Comes with 9 LTO-2 FC tape drives and I
 could get it cheap...

 thanks!
 jf
 -- 
 ° 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: can amanda backup symlink?

2007-11-21 Thread Frank Smith
Chris Hoogendyk wrote:
 
 Frank Smith wrote:
 fedora wrote:
   
 Hi guys..

 I have 2 situations:

 1) default directory for MySQL is /var/lib/mysql. If this directory is link
 to /var2/db/mysql and if I put /var/lib/mysql in disklist, can amanda do
 backup? I think this should be possible. May I know which directory is the
 best to put in disklist for this case?
 
 Amanda will back up the link, but you probably want it to back up the data,
 so you should use /var2/db/mysql as your disklist entry. Or you may want
 both, so if you are rebuilding the entire server you would get the link
 as well, but in that case you might want most or all of /var and not just
 the database link.
   
 2) how about this one. In /var/lib/mysql has databases but certain databases
 in linking to /var3/mysql like:

 lrwxrwxrwx1 root root   42 May 16  2007 AngelClub -
 /var3/mysql/AngelClub
 drwx--2 mysqlmysql4096 Nov 13 11:55 BabyMobile
 drwx--2 mysqlmysql4096 Nov 13 11:55 BestClub

 If I put only /var/lib/mysql in disklist, can amanda backup for /var3/mysql?
 Or should add both /var/lib/mysql and /var3/mysql in disklist? Please
 advise.
 
 You would need both.  /var/lib/mysql would pick up the BabyMobile and
 BestClub databases, but would only record the link to AngelClub and not
 the database itself, so you would have to add /var3/mysql/AngelClub to
 get that database.
 
 
 This requires just a bit of clarification.
 
 Amanda calls on native tools to do the backups. On Solaris, people 
 typically choose to use ufsdump. On Linux, people typically choose to 
 use gnutar. So the question depends on the behavior of those tools and 
 possibly the parameters Amanda calls them with, though I doubt Amanda 
 would call gnutar with --dereference and --create.
 
 ufsdump will faithfully backup a partition. That is, it backs up links 
 as links and restores them as links. It backs up mount points as mount 
 points and doesn't follow them into other mounted partitions. It deals 
 properly with weird things such as doors. So, to paraphrase, when you 
 ask ufsdump to do a partition, you get the partition, the whole 
 partition, and nothing but the partition.
 
 I'm less familiar with all the gnuances of gnutar, and some people will 
 substitute star or a wrapper of their own. But gnutar will typically 
 backup a symlink as a symlink, though it has parameters that can be 
 tweaked to do otherwise. Gnutar also typically follows mount points into 
 other mounted partitions, though I'm going to take a guess that Amanda 
 passes it the parameter that tells it not to do that. It would seem 
 contrary to the concept of the way DLE's are configured to have gnutar 
 expanding mount points.

Yes, tar can follow links, but Amanda calls it with the --one-file-system
option so it won't cross filesystem boundaries.
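A quick sketch of that behavior, run in a scratch directory (paths here are made up for illustration): GNU tar stores the symlink itself rather than following it, and with --one-file-system it never descends into other mounted filesystems.

```shell
# Build a small tree containing a symlink that points outside it.
mkdir -p demo/data
echo hello > demo/data/file.txt
ln -s /etc demo/etc-link                  # symlink pointing out of the tree
tar --one-file-system -cf demo.tar demo
tar -tf demo.tar                          # lists demo/etc-link as a link entry,
                                          # not the contents of /etc
```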

Frank

 
 Aside from the above, I go along with Frank's response.
 
 In addition, you should read the backup reports. Check the sizes of your 
 partitions with `df -k` and then compare those with what the Amanda 
 reports as the amount of data having been backed up. See if they make 
 sense. And do trial recoveries to confirm that what you think got backed 
 up really did get backed up and that you can recover it.
 
 Also, since you are doing mySQL, be sure you are taking into account the 
 peculiarities of backing up databases.
 
 
 ---
 
 Chris Hoogendyk
 
 -
O__   Systems Administrator
   c/ /'_ --- Biology  Geology Departments
  (*) \(*) -- 140 Morrill Science Center
 ~~ - University of Massachusetts, Amherst 
 
 [EMAIL PROTECTED]
 
 --- 
 
 Erdös 4
 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: can amanda backup symlink?

2007-11-20 Thread Frank Smith
fedora wrote:
 Hi guys..
 
 I have 2 situations:
 
 1) default directory for MySQL is /var/lib/mysql. If this directory is link
 to /var2/db/mysql and if I put /var/lib/mysql in disklist, can amanda do
 backup? I think this should be possible. May I know which directory is the
 best to put in disklist for this case?

Amanda will back up the link, but you probably want it to back up the data,
so you should use /var2/db/mysql as your disklist entry. Or you may want
both, so if you are rebuilding the entire server you would get the link
as well, but in that case you might want most or all of /var and not just
the database link.
 
 2) how about this one. In /var/lib/mysql has databases but certain databases
 in linking to /var3/mysql like:
 
 lrwxrwxrwx1 root root   42 May 16  2007 AngelClub -
 /var3/mysql/AngelClub
 drwx--2 mysqlmysql4096 Nov 13 11:55 BabyMobile
 drwx--2 mysqlmysql4096 Nov 13 11:55 BestClub
 
 If I put only /var/lib/mysql in disklist, can amanda backup for /var3/mysql?
 Or should add both /var/lib/mysql and /var3/mysql in disklist? Please
 advise.

You would need both.  /var/lib/mysql would pick up the BabyMobile and
BestClub databases, but would only record the link to AngelClub and not
the database itself, so you would have to add /var3/mysql/AngelClub to
get that database.

Frank


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Changing the from address in reports

2007-11-10 Thread Frank Smith
Dan Brown wrote:
 I would like my Amanda backup reports to mail out to a non-local
 address.  Currently when I get them, they come from
 [EMAIL PROTECTED] so they don't even have the hostname of the
 backup server associated with it.  I've used Amanda for quite a while to
 backup a number of servers and am currently using 2.5.2p1 but up until
 recently I just had reports delivered to the local amanda user.
 
 [EMAIL PROTECTED] is getting caught up by our spam filter for
 obvious reasons.  Is there somewhere within the amanda config I can set
 this or is this something I have to tell the mail system to forcefully
 add a domain/hostname onto for outgoing mail?

Your mail system is already adding  @hostname.domainname to it.  The
email is just from your backup username without anything being added.
   Most likely you are running a Linux distro, as many of those by
default (incorrectly, in my opinion) set up the system name to be
localhost.localdomain, and also usually add that name to the loopback
address as well, if you don't provide the real names during install,
or do a default 'don't ask any questions' install.
   If you try sending mail from the command line as any user you will
see the same result.

Ways to fix it:
If your system uses /etc/mailname, have that contain the fully-qualified
name of your system (i.e., myhost.example.com) and that should solve
your problem.
Look for all occurrences of localhost.localdomain in your various system
configuration files and replace them with the proper name.  This can
help you in the future with other programs that need to know where they
are running in order to work correctly (mailers, backup software,
automated CM tools, SW with host-based licenses, etc.).

Frank

 
 
 
 ---
 Dan Brown
 [EMAIL PROTECTED]


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: taper signal 2 ?

2007-09-29 Thread Frank Smith
Tim Johnson J. wrote:
I am going through the logs and trying to track down my issue
 of backups not running. I finally ran across the signal 2 and am
not sure what signal two is.
Is there a listing of the signals, and is the below error
 a reference back to an inability to see the tapes?

On POSIX systems, a signal 2 is a SIGINT, interrupt from
keyboard. Try checking for man pages for signal, kill, or
related functions to see how it is defined on your system.

The POSIX.1-1990 standard signals:
   Signal     Value     Action   Comment
   ----------------------------------------------------------------
   SIGHUP       1       Term     Hangup detected on controlling terminal
                                 or death of controlling process
   SIGINT       2       Term     Interrupt from keyboard
   SIGQUIT      3       Core     Quit from keyboard
   SIGILL       4       Core     Illegal Instruction
   SIGABRT      6       Core     Abort signal from abort(3)
   SIGFPE       8       Core     Floating point exception
   SIGKILL      9       Term     Kill signal
   SIGSEGV     11       Core     Invalid memory reference
   SIGPIPE     13       Term     Broken pipe: write to pipe with no readers
   SIGALRM     14       Term     Timer signal from alarm(2)
   SIGTERM     15       Term     Termination signal
   SIGUSR1   30,10,16   Term     User-defined signal 1
   SIGUSR2   31,12,17   Term     User-defined signal 2
   SIGCHLD   20,17,18   Ign      Child stopped or terminated
   SIGCONT   19,18,25   Cont     Continue if stopped
   SIGSTOP   17,19,23   Stop     Stop process
   SIGTSTP   18,20,24   Stop     Stop typed at tty
   SIGTTIN   21,21,26   Stop     tty input for background process
   SIGTTOU   22,22,27   Stop     tty output for background process

added in POSIX.1-2001:
   Signal     Value     Action   Comment
   ----------------------------------------------------------------
   SIGBUS    10,7,10    Core     Bus error (bad memory access)
   SIGPOLL              Term     Pollable event (Sys V). Synonym of SIGIO
   SIGPROF   27,27,29   Term     Profiling timer expired
   SIGSYS    12,-,12    Core     Bad argument to routine (SVr4)
   SIGTRAP      5       Core     Trace/breakpoint trap
   SIGURG    16,23,21   Ign      Urgent condition on socket (4.2BSD)
   SIGVTALRM 26,26,28   Term     Virtual alarm clock (4.2BSD)
   SIGXCPU   24,24,30   Core     CPU time limit exceeded (4.2BSD)
   SIGXFSZ   25,25,31   Core     File size limit exceeded (4.2BSD)
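You can confirm the mapping for yourself in any POSIX shell; this tiny sketch traps signal 2 and then sends it to the running shell:

```shell
# Signal 2 is SIGINT: install a handler, then deliver the signal.
trap 'echo "caught SIGINT (signal 2)"' INT
kill -INT $$          # send signal 2 to this shell; the trap handler runs
```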


Frank
 
 
 
 
 START taper datestamp 20070929215819 label DailySet116 tape 0
 INFO taper Received signal 2
 INFO taper Received signal 2
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Self Backups Possible?

2007-08-16 Thread Frank Smith
Debaecke, David M CTR SPAWAR, MILL DET wrote:
 Hello  I'm wondering if self backups are possible with AMANDA?
 
 That is, not having any client machines, but simply installing AMANDA
 server on a linux box and using it to backup the machine its on?
 
 If not, would it be possible to setup both client AND server on the same
 box and use it in this form for self backups?
 

The server can back up  itself, but it does need both the client and
server parts of Amanda installed.
To avoid future problems, configure Amanda to back itself up by
hostname and not localhost.
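For example, the server's own disklist entries might look like this (hostname and dumptype are placeholders, not from the poster's setup):

```
# disklist on the Amanda server -- use the real hostname, not localhost
myserver.example.com  /etc   comp-user-tar
myserver.example.com  /home  comp-user-tar
```

Using the real hostname keeps the recorded backup history valid if the machine's roles or addresses change later.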

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: a client disappeared

2007-08-07 Thread Frank Smith
Glenn English wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Dustin J. Mitchell wrote:
 
 A few things to check:

  - forward or reverse DNS doesn't work for the server or the client.
 
 Hadn't thought of that. But unfortunately, it works perfectly.
 
  - firewall on the client changed
 
 It did change, but it's the same as on the other client, there's no
 record of anything being blocked, I can see packets getting there with
 tcpdump, and disabling the packet filter doesn't change anything.
 

A few more ideas to check:
Have you tried running the command in your inetd.conf, such as
/usr/local/libexec/amandad on the command line as your backup user?
You might have permissions issues or missing libraries that prevents
it from running.
If it works on the command line, try restarting inetd, and also
check the system logs to see if inetd has trouble starting it.

Frank



-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amrestore does not work NAK: amindexd: invalid service

2007-07-11 Thread Frank Smith

 amrecover -s Amandaserver3.fqdn

 Error Message:

 AMRECOVER Version 2.5.1p1. Contacting server on localhost ...
 NAK: amindexd: invalid service

 Yes i would like to recover from and to localhost.

 Because of the successful amrestore from the Centos with 2.4.4p3 I
 think that there is no
 bsd/bsdudp/bsdtcp authentication configured.

 Right?

 So I'm looking for a solution to my problem.

 Have tried the indicated solution from 
 http://wiki.zmanda.com/index.php/Configuring_bsd/bsdudp/bsdtcp_authentication
  

 but it does not work.

 I will put The configfiles to the end.

 I think there are 2 possible ways for solving this Problem:

 1. Get bsd/bsdudp/bsdtcp authentication working
 - with the problem that the Centos client can't recover the data
 itself (old amrestore version)

 2. Tell amrestore 2.5.1p1 not to use bsd/bsdudp/bsdtcp authentication

 right?

 The debian amanda package is configured with --with-bsd-security


 Configfiles:

 more /etc/xinetd.d/amanda
 service amanda
 {
socket_type = dgram
protocol= udp
wait= yes
user= backup
group   = backup
groups  = yes
server  = /usr/lib/amanda/amandad
server_args = -auth=bsd amdump amindexd amidxtaped
disable = no
 }


 -- 

 more /etc/xinetd.d/amandaidx
 #default: on
 # description: The amanda index service
 service amandaidx
 {
 disable = no
 socket_type = stream
 protocol= tcp
 wait= no
 user= backup
 group   = backup
 server  = /usr/lib/amanda/amindexd
 }

 -- 

 more /etc/xinetd.d/amidxtape
 #default: on
 # description: The amanda tape service
 service amidxtape
 {
 disable = no
 socket_type = stream
 protocol= tcp
 wait= no
 user= backup
 group   = backup
 server  = /usr/lib/amanda/amidxtaped
 }

 -- 

 :~# grep -v '#' /etc/inetd.conf








 amandaidx stream tcp nowait backup /usr/sbin/tcpd 
 /usr/lib/amanda/amindexd
 amidxtape stream tcp nowait backup /usr/sbin/tcpd 
 /usr/lib/amanda/amidxtaped
 amanda dgram udp wait backup /usr/sbin/tcpd /usr/lib/amanda/amandad 
 -auth=bsd amdump amindexd amidxtaped

 -- 

 Configure Options

 ./configure --prefix=/usr --bindir=/usr/sbin --mandir=/usr/share/man \
 --libexecdir=/usr/lib/amanda --enable-shared\
 --sysconfdir=/etc --localstatedir=/var/lib \
 --with-gnutar-listdir=/var/lib/amanda/gnutar-lists \
 --with-index-server=localhost \
 --with-user=backup --with-group=backup  \
 --with-bsd-security --with-amandahosts \
 --with-smbclient=/usr/bin/smbclient \
 --with-debugging=/var/log/amanda \
 --with-dumperdir=/usr/lib/amanda/dumper.d \
 --with-tcpportrange=5,50100 
 --with-udpportrange=840,860 \
 --with-maxtapeblocksize=256 \
 --with-ssh-security

 -- 

 snapits from netstat -tupln:

 Proto Recv-Q Send-Q Local Address   Foreign Address 
 State   PID/Program name
 tcp0  0 0.0.0.0:10082   0.0.0.0:* LISTEN 
 2246/inetd
 tcp0  0 0.0.0.0:10083   0.0.0.0:* LISTEN 
 2246/inetd
 udp0  0 0.0.0.0:10080   0.0.0.0:*2358/xinetd
 udp0  0 0.0.0.0:10080   0.0.0.0:*2358/xinetd
 udp0  0 0.0.0.0:10080   0.0.0.0:*2246/inetd

 -- 




 Hope that someone could help me.

 bye

 B²






 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: 2.5.2p1 make install oddity

2007-06-21 Thread Frank Smith
It wasn't a problem, just a comment about something that looked
odd. If the .a is not needed by the executables, then to avoid
cluttering the filesystem it shouldn't be installed.  Same thing
with the library symlinks, if they aren't used, why create them?
   It's probably just historical cruft, but it is confusing to
those of us that aren't familiar enough with the code to know
what's needed and what isn't.

Thanks for the clarification,
Frank

Jean-Louis Martineau wrote:
 The static library *.a are not needed at execution time.
 All executables link with the version-numbered dynamic library; the 
 symlink is not used by amanda.
 
 Can you give more detail of your problem?
 an `ldd` of your executable
 an `ls -l` of the library.
 
 Jean-Louis
 
 Frank Smith wrote:
 While building 2.5.2p1 to upgrade a 2.4.5 version, I ran across an
 oddity running 'make install'.  I had built both versions with the
 '--with suffixes' option.  Evidently that option doesn't work on
 the static libraries that are built, as it just creates the .a files
 without a suffix and the install happily overwrites the previous
 versions of the libraries with a new one.
   Also, the .so files are created with the version suffix, but the
 install creates symlinks pointing to the new version, which seems wrong
 since the executables are not automatically symlinked when installed,
 causing a seemingly broken installation by default, as the old .a
 libraries are no longer there to support the old version.
   Or have I missed something?

 Frank

   
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


2.5.2p1 make install oddity

2007-06-20 Thread Frank Smith
While building 2.5.2p1 to upgrade a 2.4.5 version, I ran across an
oddity running 'make install'.  I had built both versions with the
'--with suffixes' option.  Evidently that option doesn't work on
the static libraries that are built, as it just creates the .a files
without a suffix and the install happily overwrites the previous
versions of the libraries with a new one.
  Also, the .so files are created with the version suffix, but the
install creates symlinks pointing to the new version, which seems wrong
since the executables are not automatically symlinked when installed,
causing a seemingly broken installation by default, as the old .a
libraries are no longer there to support the old version.
  Or have I missed something?

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Tapes not reusing

2007-05-25 Thread Frank Smith
James Wilson wrote:
 Amanda is not reusing tapes in my library. I have them setup to overwrite 
 when it reaches the last tape but now I am getting this tape error. Any 
 ideas? Thanks in advance.
 
 
 *** A TAPE ERROR OCCURRED: [No writable valid tape found].
 Some dumps may have been left in the holding disk.
 Run amflush again to flush them to tape.
 The next 3 tapes Amanda expects to use are: a new tape, a new tape, 
 DailyTape-21.
 The next new tape already labelled is: WeeklyTape-14.
 
Perhaps your tapecycle is greater than the number of tapes you have
labeled.  Amanda won't label tapes automatically. How many tapes
have you labeled, which ones are in your library, and what do you
have tapecycle set to in your config?
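As a sketch of how to compare the two (the config name and tape label below are placeholders):

```
# In amanda.conf: tapecycle must not exceed the number of labeled tapes
# tapecycle 20 tapes

# Show which tape Amanda expects to use next:
amadmin DailySet1 tape

# Label an additional tape if you have fewer than tapecycle:
amlabel DailySet1 DailyTape-22
```

If amcheck then reports the new tape as acceptable, the shortage of labeled tapes was the problem.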

Frank


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: gtar and related issues

2007-05-22 Thread Frank Smith
Geert Uytterhoeven wrote:
 I'm using 1.16 (from Debian) since a while. Seems to work fine (no
 restore done so far, though ;-)

1.16 works fine with the newer 2.5.x releases of Amanda (I've been
using it for a while on my home network), but if you use it on a
2.4.5 system the estimates will fail and you'll get a level 0 of
the DLE on every run.

Speaking of which, now that Debian Etch is the stable release, and
it comes with tar 1.16.something, is there a stable 2.5.x release
of Amanda?  I need to upgrade from 2.4.5 at work to support the new
Etch release, but there seem to have been several postings to the
list relating to failures in the released versions, with the usual
workaround being using an SVN snapshot.
   Does the May 7th 2.5.2 release include fixes for all the major
bugs reported recently?

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amanda-client version

2007-05-10 Thread Frank Smith
Paddy Sreenivasan wrote:
 You can also use amadmin xx version

That only works on the server.  If '--without-server' was used to
build the software for some of your clients, there won't be an
amadmin command to run, so you have to check the *.debug files.

 
 See 
 http://wiki.zmanda.com/index.php/Quick_start#Collecting_information_for_configuration
 
 Paddy
 
 On 5/10/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 On Thu, May 10, 2007 at 01:35:46PM -0400, Steven Settlemyre wrote:
 How do I find out which version of amanda-client is running on my hosts? I 
 found the debian ones through the package manager, but wonder if there's a 
 command-line switch to use?
 Use
   amgetconf build.VERSION
 if the box has 'amgetconf'; otherwise you can look in your sendbackup or
 sendsize debug logs.


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: RegEx not working?

2007-05-10 Thread Frank Smith
Richard Stockton wrote:
 Hi there,
 
 I have 3 separate configs running on one server (bak13, bak14, bak15).
 In the past I have made sure that they did not use each other's tapes
 by use of the label regex.  Here is the regex for bak14;
 
 labelstr ^VOL[14][0-9] # label constraint regex: all tapes must match
 
 The other 2 are similar, replacing only the 14 with 13 or 15.
 However, when I run amcheck bak14 with the wrong tape in the drive,
 I get this message;
 ==
 Subject: BAK-05 /bak14 AMANDA PROBLEM: FIX BEFORE RUN, IF POSSIBLE
 
 Amanda Tape Server Host Check
 -
 read label `VOL132', date `20070510005900'
 label VOL132 match labelstr but it not listed in the tapelist file.
 (expecting a new tape)
 Server check took 61.588 seconds
 
 Amanda Backup Client Hosts Check
 
 Client check: 1 host checked in 0.183 seconds, 0 problems found
 
 (brought to you by Amanda 2.5.1p3)
 ==
 
 Why is the above regex matching the label for bak14 which should only
 accept VOL140 through Vol149?  This exact same regex appears to
 work properly in earlier versions of amanda.

Did it?  A regex of ^VOL[14][0-9] matches two ranges, both
VOL10 to VOL19 and VOL40 to VOL49. A [] containing a list of
characters matches any one of them unless it is two characters
separated by a -, which makes it a range, so [14] matches either
a 1 or a 4.  That makes the label VOL132 match: the 1 after VOL
satisfies [14] and the 3 satisfies [0-9].  It would likewise
match VOL12 or VOL19, as well as VOL40.

Don't you need something like:
^VOL14[0-9]  to match VOL140 to VOL149
or
^VOL14[0-9][0-9] to match VOL1400 to VOL1499 (since neither regex
is anchored at the end, either will also match longer labels that
start with VOL140 through VOL149).
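The character-class behavior is easy to check with grep -E (the sample labels below are made up):

```shell
# [14] matches ONE character, either '1' or '4', so ^VOL[14][0-9]
# accepts any label whose fourth character is 1 or 4:
printf '%s\n' VOL12 VOL40 VOL140 VOL99 | grep -E '^VOL[14][0-9]'
# prints: VOL12 VOL40 VOL140

# spelling out the literal prefix VOL14 narrows it as intended:
printf '%s\n' VOL12 VOL40 VOL140 VOL99 | grep -E '^VOL14[0-9]'
# prints: VOL140
```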

Frank

 
 TIA for any enlightenment.
   - Richard
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Slow client

2007-05-09 Thread Frank Smith
Olivier Nicole wrote:
 Hi,
 
 I noticed yesterday that one of my clients suddenly became very slow:
 
 by wall clock, seeing the file grows on the holding disk, it takes 9
 minutes for 5 MB of GNUTAR level 2. previous run (Apr. 28th) for the
 same machine, same DLE, same level was 107 KB/s for the dump according
 to Amanda report.

That works out to about 9kB/s, which is pretty slow.
Do you have a duplex mismatch or other errors on the network interface
or switch port?
Are you seeing any system messages about read errors on your disks?
Have you recently changed OS/kernel/tar versions?
What does the Amanda report show for dumper time and taper time for
the DLE?
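For reference, the quoted throughput is straightforward shell arithmetic:

```shell
# 5 MB transferred in 9 minutes, expressed in kB/s
echo $(( (5 * 1024) / (9 * 60) ))   # prints 9
```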

Frank

 
 The slowness can be seen for every DLE on that client. Other than
 that, the client is working fine, load average is close to 0, it is
 not swapping or anything, network seems to be working fine.
 
 Any idea?
 
 I will run some more diagnostics later, once the backup is finished.
 
 Best regards,
 
 Olivier
 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: HELP ! Setting up a RAID-0 holding disk on centos 4.4

2007-04-13 Thread Frank Smith
Chris Marble wrote:
 Guy Dallaire wrote:
 I really wish there was an easy way to instruct amanda to wait for all the
 client files,  before starting dumping the holding disk to tape, but there
 isn't.
 
 Just run the backup without a tape in the drive and then amflush the backup
 sometime later.

Unfortunately, that method doesn't work well if you're using a library.
While it is possible to use a wrapper script to change the devices
to something non-existent before each run and then change them back
and run amflush, it would be much more convenient to simply
have a config option that delays taper until after the dumps are
finished.

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: testing a patched tar question.

2007-04-06 Thread Frank Smith
Gene Heskett wrote:
 Greetings;
 
 I don't know if everyone has been following the trail of tears regarding 
 doing backups with the newer kernel versions, but there is a gotcha 
 coming up.  There needs to be a patch applied to the linux kernel that 
 takes the hard drive device major numbers out of the experimental area as 
 defined by a document called LANANA.  This move the ide device number 
 from the major 253 to the major 238 in the /dev/major,minor numbers 
 scheme.
 
 This *should* not have an effect on tar, but it does.  Rebooting to one of 
 these newer kernels, or back to an older kernel if the newer one has been 
 running for a few backup sessions results in the Device: hex/decimal 
 values changing when doing a 'stat' on a file.  Tar treats this as an 
 indicator that the data is brand new and must therefore be given a level 
 0 backup.  Of course our typical tape size, affordable by home users such 
 as I, doesn't approach being able to handle a single backup of nearly 
 50GB, the best we can do is raise heck with our schedule by doing 2 or 3 
 backup sessions with as large a tapetype size setting as we have combined 
 vtape and holding disk area to handle.
 
 My view is that since the inode data doesn't change, and the files 
 timestamps don't change just because I've rebooted to a different kernel, 
 that tar is miss-firing on this when asked to do a --listed-incremental 
 and the reference file furnished was built under a different kernel with 
 a different major device number for the ide/atapi drives.
 
 Ok, that's the background as I try to head off what could be an upgrade 
 disaster that I now know about, but which will scare the bajezus out of 
 someone who isn't ready for it.  As of 2.6.21-rc6, that patch has been 
 reverted while the details of how to do this with less disturbance are 
 being worked out.  I've also queried the guys listed in the tarballs 
 ChangeLog just this afternoon after it became obvious that [EMAIL PROTECTED], 
 a moderated list, wasn't being moderated in a timely manner.
 
 I've downloaded, from gnu.org, the latest tarball snapshot, 
 1.16.2-20070123, and installed a 1 line patch that removes this Device: 
 sensitivity.
 
 I also don't know but what I may have opened that proverbial can of worms 
 either.  Since nothing in the file or filesystem has actually changed, I 
 view this sensitivity to the device major numbers as a bug, but there may 
 in fact be a very good reason for it that I'm not aware of.  If there is, 
 then someone should make me aware of the reasons that this is a part of 
 tars logic.
 
 At any rate, this new tar works with my testing scripts when executed 
 insitu in its build tree, and the only fuss when doing a 'make test' in 
 the src tree is a claim that the exclude function failed.  That I could 
 tolerate I think, at least long enough to test.
 
 To that end, I modified my configure script to point at this new version 
 of tar, and did the usual install here.  That appears to have succeeded.
 
 What I want to know from all of you, is do I push to get this patch into 
 tar, or how else should we attempt to reduce the pain here? Let me 
 emphasize that once amanda has been able to make all new level 0's of 
 your data, then amanda will settle down and this will be just another 
 bump in the road.
 

I would be hesitant to push for that change.  However inconvenient it
might be as a result of a kernel upgrade, a new major number is a new
device, and ignoring the change is inviting disaster.
   You would need to look at tar's code to see how it uses the device
number; it might have an effect on the --one-file-system option.

Frank


 So lets talk, please.  In the meantime I have a test backup running for 
 effect...
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Does amrecover automatically use unflushed files on the holding disk ?

2007-03-30 Thread Frank Smith
[EMAIL PROTECTED] wrote:
 On Fri, Mar 30, 2007 at 03:19:35PM -0400, Guy Dallaire wrote:
A quick question.
Suppose you run some level 0 backup (GTAR method) with amanda and leav
the files on the holding disks until there is enough files to fill a
tape and run amflush. Say this takes 5-6 working days.
Now, if, after 3 days, I have to restore something that has been
dumped on the disk on the first day and run amrecover on the client
and setdate -MM-DD (today - 3 days)
Are the holding disk files indexed ? Will amanda use them and not
ask for a tape when comes the time to extract ?
 
 Quick answer: yep!
 

And as an added bonus, it's faster than restoring from tape.

Frank



-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: forcing a failed backup to disk to use the same slot

2007-03-24 Thread Frank Smith
René Kanters wrote:
 Hi,
 
 I have had some problems that an overnight backup 'hung up', i.e., a  
 level 0 dump started but never properly finished so backups from  
 other machines did not work either. So far the problem has been with  
 the western digital disk on my Mac amanda server, where a restart of  
 the external disk solves the problem.
 
 I catch these problems the same day they happen, so  I am wondering  
 whether it is possible to run amdump with a configuration of dumping  
 to a disk (using tpchanger chg-disk) and not have amanda use the  
 next slot, but the slot that the failed dump started on.
 
 I could not find any information on that in the man pages.
 
 Any ideas?

If you're sure that nothing you need was written to the 'tape', you
can use amrmtape and then relabel it with the same label.  That
should make it the first tape to use on the next run.
  I've often wondered why Amanda marks a tape as 'used' on a failed
backup even when nothing was written to tape.
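A sketch of that recipe; the config name, label, and slot here are hypothetical, and you should confirm nothing on the tape is still needed before running it:

```
amrmtape DailySet1 DailyTape-07            # remove the tape from Amanda's database
amlabel -f DailySet1 DailyTape-07 slot 7   # relabel it for immediate reuse
```

After relabeling, the tape looks brand new to the planner and should be picked first on the next run.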

Frank

 
 Thanks,
 René
 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: missing result

2007-03-23 Thread Frank Smith
Gene Heskett wrote:
 snippage re /etc/amandates 
 ===
 /GenesAmandaHelper-0.6 0 1174288560
 /GenesAmandaHelper-0.6 1 1174453248
 /GenesAmandaHelper-0.6 2 1174541095
 /GenesAmandaHelper-0.6 3 1174625228
 [snip another 100k of different pathlistings]
 ===
 The number after the path is the level, and I'm not sure what the last 
 number is, possibly a timestamp, but I've NDI what notation format that 
 represents.  Swahili to this old fart.

The number is a timestamp in UNIX epoch time (seconds since 1-1-1970).
Your first number, 1174288560, is Mon Mar 19 07:16:00 2007 UTC

You can do the conversion with the Perl localtime function, a C
program using the ctime(3) library call, or one of the online
JavaScript converters like http://dan.drydog.com/unixdatetime.html
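With GNU date (standard on Linux), the same conversion is a one-liner:

```shell
# convert an /etc/amandates epoch timestamp to human-readable UTC
date -u -d @1174288560
# prints: Mon Mar 19 07:16:00 UTC 2007
```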

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: dumps way too big

2007-03-23 Thread Frank Smith
[EMAIL PROTECTED] wrote:
 Le jeudi 22 mars 2007 à 20:13 -0400, Gene Heskett a écrit :
 On Thursday 22 March 2007, [EMAIL PROTECTED] wrote:
 Hello,

 One backup partly failed with :

 FAILURE AND STRANGE DUMP SUMMARY:
  k400  /mnt/d_mails  lev 1  FAILED [dumps way too big, 1025270 KB, must
 skip incremental dumps]
  k400  /home/jpp lev 1  FAILED [dumps way too big, 1116100 KB, must
 skip incremental dumps]
  k400  /etc  lev 0  STRANGE

 for some other directories and machines the backup is OK.
 What is the problem ?

 Regards

 Storm66
 Your kernel version please?
 
 Kernel 2.6.16 on the master machine, 2.6.18 and 2.6.20 on other
 machines.
 
 Frank Smith asks for the size of tape I am using : it is a virtual tape
 on a separate disk whth more than 100G avalaible.

The physical disk it's on may be big enough, but is the length of
your vtapes in your tapetype set to something larger than 1 GB?
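For comparison, a tapetype sized for dumps of that scale might look like this (the name and numbers are illustrative; keep length below your real free space per vtape):

```
define tapetype HARD-DISK {
    comment "vtapes on a 100+ GB disk"
    length 10240 mbytes   # 10 GB per vtape; must exceed the largest dump
}
```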

Frank

 
 Regards
 
 Storm66
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: dumps way too big

2007-03-22 Thread Frank Smith
[EMAIL PROTECTED] wrote:
 Hello,
 
 One backup partly failed with :
 
 FAILURE AND STRANGE DUMP SUMMARY:
   k400  /mnt/d_mails  lev 1  FAILED [dumps way too big, 1025270 KB, must
 skip incremental dumps]
   k400  /home/jpp lev 1  FAILED [dumps way too big, 1116100 KB, must
 skip incremental dumps]
   k400  /etc  lev 0  STRANGE 
 
 for some other directories and machines the backup is OK.
 What is the problem ?

Evidently your tape size is less than the 1.025 GB size of your data,
and you have no previous level 0 so Amanda can't generate an incremental.
You can either use bigger tapes, enable the tape spanning feature found
in recent versions of Amanda, or split those DLEs into smaller pieces.

The 'STRANGE' entries are usually nothing to worry about.  There will
be a section in your report for the DLEs marked STRANGE that tell you
exactly why.  Any unexpected output from your dump program (dump or tar)
is considered STRANGE.  Usually it's because a file changed while it was
being backed up (log files and mail spools often cause this).  The rest
of the DLE is backed up, but the file(s) in question may or may not be
depending on the exact circumstances.
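For the tape spanning route, 2.5.x enables it per dumptype; a minimal sketch with illustrative sizes:

```
define dumptype spanning-tar {
    program "GNUTAR"
    tape_splitsize 1 Gb          # write dumps in 1 GB chunks across tapes
    split_diskbuffer "/holding"  # buffer chunks here while taping
    fallback_splitsize 64 Mb     # in-memory buffer if the disk buffer fails
}
```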

Frank

 
 Regards
 
 Storm66
  
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: *** A TAPE ERROR OCCURRED: [new tape not found in rack].

2007-02-08 Thread Frank Smith
mario wrote:
 Hello List,
 
 i am trying to backup to disk and i am getting the error:
 
 *** A TAPE ERROR OCCURRED: [new tape not found in rack].
 
 
 i get the same error when i try to run amflush DailySet1.
 
 Any ideas or tips?
 
 Here is my amanda.conf:
 
 
 org DailySet1
 mailto root
 dumpuser backup
 inparallel 4
 netusage  2000 
 
 bumpsize 20 MB   
 bumpdays 1  
 bumpmult 4   
 
 dumpcycle 14 days
 tapecycle 14
 runtapes 1
 tpchanger chg-multi
 changerfile /etc/amanda/DailySet1/changer.conf
 
 tapetype HARD-DISK
 labelstr ^DailySet1[0-9][0-9]*$
 
 diskdir /backup/fallback/amanda  # where the holding disk is
 disksize 33290 MB   # how much space can we use on it
 
 infofile /var/lib/amanda/DailySet1/curinfo # database filename
 logfile  /var/log/amanda/DailySet1/log  # log filename
 
 # where the index files live
 indexdir /var/lib/amanda/DailySet1/index
 
 
 define tapetype HARD-DISK {
 comment Hard disk instead of tape
 length 4000 mbytes  # Simulates end of tape on hard disk (a 4 GB 
 disk here)
 }
 
 define dumptype hard-disk-dump {
 comment Back up to hard disk instead of tape - using dump
 holdingdisk no
 index yes
 priority high
 }
 
 define dumptype hard-disk-tar {
 hard-disk-dump
 comment Back up to hard disk instead of tape - using tar
 program GNUTAR
 }
 
 define dumptype comp-user-tar {
 program GNUTAR
 comment Root partitions with compression
 options compress-fast, index
 priority low
 }
 
 
 
 
 Changer: /etc/amanda/DailySet1/changer.conf
 ---
 
 multieject 0
 gravity 0
 needeject 0
 ejectdelay 0
 
 statefile /var/lib/amanda/DailySet1/changer-status
 
 firstslot 1
 lastslot 5
 
 slot 1 file:/var2/amandadumps/tape01
 slot 2 file:/var2/amandadumps/tape02
 slot 3 file:/var2/amandadumps/tape03
 slot 4 file:/var2/amandadumps/tape04
 slot 5 file:/var2/amandadumps/tape05
  
 Thanks, Mario

You have tapecycle set to 14 but only have 5 'tapes' in your
changer.  Amanda is probably looking for tape06.  Either change
tapecycle to 5 or create more tapes (and increase lastslot to match).
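Extending the changer to match tapecycle 14 would look roughly like this (paths follow the existing layout; any label you pick must still match your labelstr):

```
# changer.conf: one slot per vtape directory
lastslot 14
slot 6  file:/var2/amandadumps/tape06
slot 7  file:/var2/amandadumps/tape07
# ...continue through...
slot 14 file:/var2/amandadumps/tape14

# create each directory, then label it, e.g.:
#   amlabel DailySet1 DailySet106 slot 6
```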

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Question about gtar error 1 and scheduling

2007-02-05 Thread Frank Smith
Gene Heskett wrote:
 On Monday 05 February 2007 11:03, Mark Hennessy wrote:
 The machine has 3GB of RAM and 8GB of available swap.
 It's running FreeBSD 6.2 and AMANDA 2.51p2-20070202 with GNU tar 1.16
 from FreeBSD ports patchlevel 2.

 Humm, darn, that seems to be more than sufficient, I have a gig on this 
 board  about 3GB of swap it never uses and have never seen that error.
 
 In simpler terms, that was my best shot at it.  Has anyone else a clue?
 Does the boot log show all 3GB of that ram?
 
Is it failing on the same filesystem that 'worked' other than the 'gtar
returns 1' message in your previous post?  If so, I'm not sure why
Amanda ignoring the '1' return would cause tar to get a memory error
when it didn't before.

Any chance your Amanda user has a low ulimit set for memory size?
Can you run tar manually on the filesystem to /dev/null and watch the
memory usage (with top or whatever) to see how big it is when the
error occurs?

Frank
 --
 Mark Hennessy

 -Original Message-
 From: Gene Heskett [mailto:[EMAIL PROTECTED]
 Sent: Monday, February 05, 2007 11:00 AM
 To: amanda-users@amanda.org
 Cc: Mark Hennessy; Jean-Louis Martineau
 Subject: Re: Question about gtar error 1 and scheduling

 On Monday 05 February 2007 09:57, Mark Hennessy wrote:
 Thanks for telling me about that.  I have updated that and now it
 doesn't die on error 1 anymore on a L0 backup of that disk.

 I was originally using the 2.51p2 release, not the latest snapshot.

 Now I'm seeing another problem, this time with the
 incremental backup of

 the same disk.
 AMANDA and gtar try to do an L1 of that same disk and this is what I
 get: ? gtar: memory exhausted
 That's pretty plainly stated, how much ram is in that box?

 ? gtar: Error is not recoverable: exiting now
 sendbackup: error [dump (6349) /usr/local/bin/gtar returned 2]

 Any advice?

 --
 Mark Hennessy

 -Original Message-
 From: Jean-Louis Martineau [mailto:[EMAIL PROTECTED]
 Sent: Friday, February 02, 2007 11:50 AM
 To: Mark Hennessy
 Cc: Amanda Users
 Subject: Re: Question about gtar error 1 and scheduling

 Mark,

 Install latest amanda 2.5.1p2 snapshot from
 http://www.zmanda.com/community-builds.php,
 or wait for 2.5.1p3, it will be release next week.

 Jean-Louis

 Mark Hennessy wrote:
 I'm using AMANDA 2.51p2 and gtar 1.16.

 I want to make AMANDA ignore any error 1 response from gtar
 and react like it

 has not FAILED.

 This would make AMANDA accept the backup as successful with
 STRANGE and then

 AMANDA wouldn't have to run a second attempt in the same
 run and then retry

 the next run.

 This is the error I see:
 | Total bytes written: 129261885440 (121GiB, 2.0MiB/s)

 sendbackup: error [/usr/local/bin/gtar returned 1]

 How can I do that? Any advice would be greatly appreciated.

 I'm using GNU tar 1.16

 From NEWS for GNU tar 1.16:
 version 1.16 - Sergey Poznyakoff, 2006-10-21

 * After creating an archive, tar exits with code 1 if some
 files were

 changed while being read.  Previous versions exited with
 code 2 (fatal

 error), and only if some files were truncated while
 being archived.

 I think that due to the fact that the e-mail files are in
 constant flux I

 have a very low chance of not getting an error code 1 no
 matter when I run

 the backup.

 --
  Mark Hennessy
 --
 Cheers, Gene
 There are four boxes to be used in defense of liberty:
  soap, ballot, jury, and ammo. Please use in that order.
 -Ed Howdershelt (Author)
 Yahoo.com and AOL/TW attorneys please note, additions to the above
 message by Gene Heskett are:
 Copyright 2007 by Maurice Eugene Heskett, all rights reserved.
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Query about full backups

2007-01-22 Thread Frank Smith
Yogesh Hasabnis wrote:
 --- Frank Smith [EMAIL PROTECTED] wrote:
 
 Yogesh Hasabnis wrote:
 Hi All,

 As per the AMANDA faqs, AMANDA spreads full
 backups
 along the dumpcycle, so you wont' have any
 full-only
 or incremental-only runs. 

 What I understand from this is that if you have
 multiple filesystems on multiple hosts to be
 backed up
 in an AMANDA dumpcycle, AMANDA will take care of
 spreading the full backups of different
 areas/filesystems on different days so that it is
 not
 overloaded on weekends for doing the full backups
 (Correct me if I am wrong). However in my case, I
 have
 only one area or one filesystem on one host to be
 backed up. Data created by all users which needs
 to be
 backed up is stored by them under one directory on
 a
 central file server, which will be backed up by
 AMANDA. Now in this case, how will AMANDA spread
 the
 full and the incremental backup load (because it
 has
 only one area to be backed up in the whole
 dumpcycle)?

 Yes, if you have only one DLE Amanda obviously can't
 spread
 its full dump over several days.  You could use tar
 instead
 of dump, and make each subdirectory a separate DLE,
 then the
 fulls could be spread out.
 
 Thanks for the reply. So if I only have one DLE, will
 the full backup always take place on a particular day
 and will only incremental backups be taken on the
 remaining days? Or will it be handled in some
 different way?

dumpcycle is just the maximum time between fulls.  Amanda
often schedules them sooner if it thinks it will level out
daily tape usage.  I don't know how it handles a single
DLE, as that is not really what it was designed for.
   If you really want it to always be the same day, you
might be able to force it with a maxpromoteday setting,
but it may be easier to just use two configs: an always-full
one that runs on the preferred day, and an incremental-only
one that runs on the other days.
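Driving the two configs from cron might look like this (config names and times are hypothetical; the full config would set dumpcycle 0, the other strategy incronly in its dumptype):

```
# crontab for the amanda user:
# full backups Friday night, incrementals every other night
0 1 * * 5      /usr/sbin/amdump WeeklyFull
0 1 * * 0-4,6  /usr/sbin/amdump DailyIncr
```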

Frank

 
 Thanks
 
 Yogesh
 
 
 
  
 
 Want to start your own business?
 Learn how on Yahoo! Small Business.
 http://smallbusiness.yahoo.com/r-index


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Tape errors, hardware? (SOLVED)

2007-01-19 Thread Frank Smith
Just for the archives, evidently the inability to write a filemark
is a known failure mode on the AIT-3 drives (affecting about 1% of
the drives, according to the library support folks).  However, the
really bizarre thing is that it can be fixed with a firmware
update that can be installed via a utility downloadable from the
Sony web site.

Frank

Frank Smith wrote:
 Just trying to verify that I'm having an actual hardware error.
 Backups on one tape server (that's been in use for years) failed with
 the following:
 taper: tape archive03 kb 0 fm 0 writing filemark: Input/output error
 taper: retrying q42:/d5/backups/oracle/Dmp.0 on new tape: [writing filemark: 
 Input/output error]
 taper: tape archive04 kb 1796480 fm 1 writing file: Input/output error
 
 and everything remained on the holdingingdisk.  From using dd I can
 see that Amanda is successfully updating the date in the header block.
 I can successfully run amlabel on the tapes, and I can use tar to
 write and read a tar file to/from the tape.  However, 'mt eof' fails,
 and gives the same error in the system logs as when Amanda runs:
 
 kernel: st1: Error with sense data: 6st1: Current: sense key: Medium Error
 kernel: Additional sense: Write error
 kernel: Info fld=0x1
 
 Is it definitely a drive failure when data can be written but not
 EOF marks, or is there something else I should check before replacing
 the drive?
 
 Thanks,
 Frank
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Cant run two Linux Servers behind my firewall at the same time - only one and vice versa.

2007-01-18 Thread Frank Smith
Chuck Amadi Systems Administrator wrote:
 Hi List
 
 Sorry to nag on is there any suggestions to my post.
 
 Cheers
 
 On Thu, 2007-01-18 at 08:27 +, chuck.amadi wrote:
 Hi List I was hoping for some direction to my issue with two servers 
 behind a firewall running ipchains
 I can backup one or the other but when I uncomment both DLE I get host down.

 Thanks in advance.


 chuck.amadi wrote:

 Hi I have two Linux SuSE 9 SLES servers outside of my lan behind a 
 firewall using (I know don't laugh) IPChains.
 The first server I setup worked without problems by compiling with the 
 tcp and udp port range and changing a parameter in security.c file
 and increasing the timeout using a ipchain rule, which worked a treat 
 but I have another new server outside and behine a firewall.

 Thus when I tried following the same reciepe and compile using the 
 same tcp and udp port range and thus a separate tcp and udp port range 
 to no joy I am unable to get both to work at the same time if I 
 comment out one of the amanda clients within the disklist the other 
 doesn't work and vice versa So I know it is not the setup or configure.

 #The timeout is in seconds. If you set the timeout of TCP, TCPFIN
 #and UDP to 5 seconds, 5 seconds and 5 seconds, I think they are
 #too short.  Please try to set them to 5min, 1min
 #and 5min respectively such as 300 60 300.

 # ipchains -M -S 300 tcp 60 tcpfin 300 udp works ok.
 ipchains -M -S 7200 60 300


 I get the Warning: selfcheck request timed out. Host down!. Note that 
 when I comment out one of them amcheck works accordingly
 I am aware the it's using udp over the firewall But I haven't been 
 able to suss this out I assume that my connection is poor.
 I have checked both /tmp/amanda/amanda-date.debug and they both moan 
 about timeouts failed But are OK when only one of them
 is in use.

Any chance your firewall is doing NAT, and mapping both clients to the
same IP?

Frank


 Cheers




-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Tape errors, hardware?

2007-01-08 Thread Frank Smith
Just trying to verify that I'm having an actual hardware error.
Backups on one tape server (that's been in use for years) failed with
the following:
taper: tape archive03 kb 0 fm 0 writing filemark: Input/output error
taper: retrying q42:/d5/backups/oracle/Dmp.0 on new tape: [writing filemark: 
Input/output error]
taper: tape archive04 kb 1796480 fm 1 writing file: Input/output error

and everything remained on the holding disk.  From using dd I can
see that Amanda is successfully updating the date in the header block.
I can successfully run amlabel on the tapes, and I can use tar to
write and read a tar file to/from the tape.  However, 'mt eof' fails,
and gives the same error in the system logs as when Amanda runs:

kernel: st1: Error with sense data: 6st1: Current: sense key: Medium Error
kernel: Additional sense: Write error
kernel: Info fld=0x1

Is it definitely a drive failure when data can be written but not
EOF marks, or is there something else I should check before replacing
the drive?

Thanks,
Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Tape errors, hardware?

2007-01-08 Thread Frank Smith
Joshua Baker-LePain wrote:
 On Mon, 8 Jan 2007 at 11:39am, Frank Smith wrote
 
 Just trying to verify that I'm having an actual hardware error.
 Backups on one tape server (that's been in use for years) failed with
 the following:
 taper: tape archive03 kb 0 fm 0 writing filemark: Input/output error
 taper: retrying q42:/d5/backups/oracle/Dmp.0 on new tape: [writing filemark: 
 Input/output error]
 taper: tape archive04 kb 1796480 fm 1 writing file: Input/output error

 and everything remained on the holding disk.  From using dd I can
 see that Amanda is successfully updating the date in the header block.
 I can successfully run amlabel on the tapes, and I can use tar to
 write and read a tar file to/from the tape.  However, 'mt eof' fails,
 and gives the same error in the system logs as when Amanda runs:

 kernel: st1: Error with sense data: 6st1: Current: sense key: Medium Error
 kernel: Additional sense: Write error
 kernel: Info fld=0x1

 Is it definitely a drive failure when data can be written but not
 EOF marks, or is there something else I should check before replacing
 the drive?
 
 What sort of drive?  Does it just need a cleaning?
 
Oops, sorry I left that out:  AIT3
What is different about writing filemarks (which fails) and writing large
streams of data (which works)?

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amflush oddity

2007-01-07 Thread Frank Smith
Jean-Louis Martineau wrote:
 Frank Smith wrote:
 Debian package 2.5.1p1-2.1
 Anyway, now amflush reports
 *** THE DUMPS DID NOT FINISH PROPERLY!
 even though everything was successfully flushed to tape according
 to the dump summary.  I'm guessing that error message was triggered
 by the WARNING in the NOTES section:
 driver: WARNING: This is not the first amdump run today. Enable the
 usetimestamps option in the configuration file if you want to run
 amdump more than once per calendar day.

 I didn't run amdump more than once.  It ran early this morning
 to do the backups, and then I ran amflush this afternoon.  Does
 amflush count as an amdump run?
   
 It should not, try the attached patch.

After applying the patch and running amflush on a week's worth of
backups on the holding disk, it was successful and the report
looked like it should, so hopefully the patch will make it into
the release version.

Thanks for creating the patch.

Frank

 
 Jean-Louis
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Partial results in planner

2007-01-06 Thread Frank Smith
David Stillion wrote:
 I believe I've solved the issue with the planner: partial result. 
 While it appeared Amanda was attempting to do the backup, it really
 couldn't because the disklist was misconfigured.   My initial
 understanding on reading the docs was that I could use the device path
 ie /dev/hdaX to indicate what data I wanted to backup.  When I changed
 these to a filesystem path /, /home and so forth, amdump was more
 successful.   
 
 However, I have a new error message for which I can not find any info.
  This message is part of the amstatus output for the set: 
 
 planner: [dumps too big, 3319760 KB, but cannot incremental
 dump new disk]

'dumps too big' means Amanda thinks that the size of a full dump of
that filesystem is larger than the space available to dump it to.
If runtapes = 1 or if tape spanning is not enabled, then it means
your dump is larger than your tape length.  If runtapes > 1 and spanning
is enabled, then it means dump > (runtapes * tape length).
   What size is the filesystem and how big are your tapes (or vtapes)?
If you think it should fit, either the estimate is wrong (check your
logs for the estimated size), you are backing up more than you think
(some tar versions incorrectly cross filesystem boundaries, or your
excludes aren't what you think they are), or perhaps your config is
not what you think it is.
   The 'cannot incremental' part of the message just lets you know
that while a lack of tape space would normally cause Amanda to
delay the full and do a smaller incremental instead, it can't do that
in this case since there is no previous full dump to base the incremental
on.
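The planner's size check boils down to one comparison; here with made-up numbers matching the error above:

```shell
dump_kb=3319760                   # estimated level-0 size from the message
runtapes=2                        # hypothetical amanda.conf setting
length_kb=$((2 * 1024 * 1024))    # hypothetical 2 GB vtape length, in KB
if [ "$dump_kb" -le $((runtapes * length_kb)) ]; then
    echo fits
else
    echo "too big"
fi
# prints: fits  (3319760 KB <= 4194304 KB)
```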

Frank

 
  If you need more info let me know and I can send the entire amstatus
 output.
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: fulls when the holding disk is small?

2006-12-28 Thread Frank Smith
server_encrypt /usr/sbin/amcrypt
server_decrypt_option -d
 }
 
 define dumptype client-encrypt-nocomp {
global
program GNUTAR
comment no compression and client symmetric encryption
compress none
encrypt client
client_encrypt /usr/sbin/amcrypt
client_decrypt_option -d
 }
 
 
 # To use gpg public-key encryption, gpg does compress with zlib by default.
 # Thus, no need to specify compress
 
 #define dumptype gpg-encrypt {
 #global
 #program GNUTAR
 #comment server public-key encryption, dumped with tar
 #compress none
 #encrypt server
 #server_encrypt /usr/sbin/amgpgcrypt
 #server_decrypt_option -d
 #}
 
 
 # network interfaces
 #
 # These are referred to by the disklist file.  They define the attributes
 # of the network interface that the remote machine is accessed through.
 # Notes: - netusage above defines the attributes that are used when the
 #  disklist entry doesn't specify otherwise.
 #- the values below are only samples.
 #- specifying an interface does not force the traffic to pass
 #  through that interface.  Your OS routing tables do that.  This
 #  is just a mechanism to stop Amanda trashing your network.
 # Attributes are:
 # use - bandwidth above which amanda won't start
 #   backups using this interface.  Note that if
 #   a single backup will take more than that,
 #   amanda won't try to make it run slower!
 
 define interface local {
 comment a local disk
 use 1000 kbps
 }
 
 define interface le0 {
 comment 10 Mbps ethernet
 use 400 kbps
 }
 
 # You may include other amanda configuration files, so you can share
 # dumptypes, tapetypes and interface definitions among several
 # configurations.
 
 #includefile /etc/amanda/amanda.conf.main
 - /etc/amanda/amanda.conf.main


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: BogusMonth 0, 0 ??

2006-12-19 Thread Frank Smith
Don Murray wrote:
 
 
 Hi everyone, I just had a weird thing happen.  I had already received my 
 AMANDA MAIL REPORT for my daily backup for today.  But 5 hours after I 
 received the usual backup report, I got another email, this time from 
 user [EMAIL PROTECTED] rather than from the usual [EMAIL PROTECTED].
 
 The subject was:
 
 MYDOMAIN AMANDA MAIL REPORT FOR BogusMonth 0, 0

Perhaps some forgotten root cron (or 'at') job that was an attempt to run
monthly backups?  Amanda doesn't run itself, so perhaps you can look in
wherever your system cron logs are and track it down that way.  Probably
not a big issue since it failed, although you might want to run amcheck to
see that it didn't muck with your changer or mark a tape as used (since
it's been my experience that even a totally failed backup marks the current
tape as used).

Frank

 
 I have appended the result below.  It has all the correct DLE's reported.
 
 Ordinarily I wouldn't worry about this, I'd just let it go and see if it 
 happens again.  But since I will be leaving town for the holidays in a 
 few days I thought I would ask if anyone has seen this before and has 
 any ideas about why it would happen.  I'm just worried about impending 
 difficulties while I'm out of town.
 
 Thanks in advance,
 Don
 
 (Note: the actual domain has been changed to protect the innocent :))
 
 
 
 
 The message:
 
 *** THE DUMPS DID NOT FINISH PROPERLY!
 
 The next tape Amanda expects to use is: MYDOMAIN-04.
 
 FAILURE AND STRANGE DUMP SUMMARY:
amreport: ERROR could not open log /var/lib/amanda/mydaily/log: No 
 such file or directory
windsor/ RESULTS MISSING
gilmore/ RESULTS MISSING
gilmore/nonbackedup/work3/backups/burrard RESULTS MISSING
gilmore/nonbackedup/work3/backups/fs2 RESULTS MISSING
gilmore/nonbackedup/work3/backups/glen RESULTS MISSING
gilmore/nonbackedup/work3/backups/nootka RESULTS MISSING
gilmore/nonbackedup/work3/backups/princeedward RESULTS MISSING
gilmore/nonbackedup/work3/backups/spruce RESULTS MISSING
gilmore/nonbackedup/work3/backups/vancouver RESULTS MISSING
gilmore/backedup/project RESULTS MISSING
gilmore/backedup/home RESULTS MISSING
imap   / RESULTS MISSING
asterisk   / RESULTS MISSING
 
 
 STATISTICS:
Total   Full  Daily
        
 Estimate Time (hrs:min)0:00
 Run Time (hrs:min) 0:00
 Dump Time (hrs:min)0:00   0:00   0:00
 Output Size (meg)   0.00.00.0
 Original Size (meg) 0.00.00.0
 Avg Compressed Size (%) -- -- --
 Filesystems Dumped0  0  0
 Avg Dump Rate (k/s) -- -- --
 
 Tape Time (hrs:min)0:00   0:00   0:00
 Tape Size (meg) 0.00.00.0
 Tape Used (%)   0.00.00.0
 Filesystems Taped 0  0  0
 Avg Tp Write Rate (k/s) -- -- --
 
 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


amflush oddity

2006-12-15 Thread Frank Smith
Debian package 2.5.1p1-2.1

At home I leave the tape out of the drive most of the time and
only insert a tape and run amflush when the space used on the
holding disk approaches the capacity of my tape. This has been
working well until the latest upgrade (which overwrote the patched
version I had manually built to work around the failures caused by
tar 1.16 returning status 1 for changed files).
Anyway, now amflush reports
*** THE DUMPS DID NOT FINISH PROPERLY!
even though everything was successfully flushed to tape according
to the dump summary.  I'm guessing that error message was triggered
by the WARNING in the NOTES section:
driver: WARNING: This is not the first amdump run today. Enable the
usetimestamps option in the configuration file if you want to run
amdump more than once per calendar day.

I didn't run amdump more than once.  It ran early this morning
to do the backups, and then I ran amflush this afternoon.  Does
amflush count as an amdump run?

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: new backup server

2006-12-14 Thread Frank Smith
Mitch Collinsworth wrote:
 On Thu, 14 Dec 2006, Chris Hoogendyk wrote:
 
 I'm interested in whether anyone on the list has any experience or
 comments on my choice of tape changer, or comments on issues related to
 how it is configured and potential modes of upgrading (adding another
 tape drive, adding another changer, etc.)
 
 You didn't say which AIT drive is going in your AIT changer.  Here we
 have gone from AIT1 to AIT2 to AIT3.  Just yesterday I ordered a new
 library with LTO3.  What soured us on the AIT line is that AIT4 is not
 backwards read compatible with any earlier AIT drives.  In other words
 if I went to AIT4 I would not be able to use it to even read any of our
 large existing collection of AIT1, 2, and 3 tapes.  So at this point it
 no longer matters to us whether we stay with the AIT line or not.
 Depending on which AIT drive you're choosing, this may or may not be a
 concern for you.

AIT5 recently came out, and it can read AIT3 and AIT4 tapes, and has
400GB native capacity.  In addition, it supports WORM tapes, for the
folks that have requirements for unmodifiable backups (you can write
a tape and append to a tape, but not overwrite or erase).
Unless you frequently have a need to read old tapes, keeping an
old drive or two around just to read them isn't a big deal.
The advantage of not switching formats is that you can just replace
the drives and the tapes to upgrade a library to higher capacity.

 
 Given that it is for us, we took this as our opportunity to move to LTO,
 which is at least an industry standard with multiple vendors supplying
 drives.  (Sony can take as long as they want to come out with the next
 generation of AIT, since they're the only supplier.  We waited what
 seemed like forever for AIT3 to finally come out.  Way past its expected
 release date.  And AIT4 was promised all along to be backwards read
 compatible, but that was dropped at the very last minute.)
 
 The tape changer I'm looking at is the Sony StorStation AIT Library
 LIB-162/A4. It is a carousel rather than a robot. It holds 16 tapes
 (3.2TB native, anybody's guess compressed) and can have a second tape
 drive added. It is significantly less expensive than the expandable
 robot systems I was looking at. Also, in the expandable systems,
 adding the expansions was very expensive.
 
 Not sure what systems you looked at, but I was surprised to find that
 in the Qualstar RLS series of expandable libraries, adding more tape
 slots is not a big money proposition.  The LTO library I ordered starts
 with 12 slots and is expandable up to 44 slots in increments of 8, for
 $1000 (list) per increment.  As an .edu you may do better than that on
 price.  Also with AIT slots being smaller, they might come cheaper, too.
 I don't know.

I'll second the recommendation for Qualstar libraries.

Frank

 
 Hope this helps in some way.
 
 -Mitch


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: not allowed to execute the service noop: Please add amdump to the line in /root/.amandahosts

2006-12-08 Thread Frank Smith

mario wrote:

Hi,

On Fri, 2006-12-08 at 09:12 -0500, Jean-Louis Martineau wrote:

mario wrote:

Hello List,

i am running Amanda version 2.4.5 on Ubuntu Breezy and i get this error:

These dumps were to tape TEST-2.
The next tape Amanda expects to use is: a new tape.
The next new tape already labelled is: TEST-3.

FAILURE AND STRANGE DUMP SUMMARY:
  planner: ERROR testserverNAK :  user backup from testserver is not
allowed to execute the service noop: Please add amdump to the line
in /root/.amandahosts
  testserver  /dev/sda2 RESULTS MISSING



But amdump is already in /root/.amandahosts:

cat /root/.amandahosts
localhost root amdump

  

why localhost? why root? The message tell you it's testserver and backup.

add: testserver backup amdump



I still get the same error. Looks like localhost gets resolved to
testserver?


Check /etc/hosts.  Some distros don't separate the localhost entry
from the actual hostname (i.e., put localhost on the real IP and
not just on 127.0.0.1)

Frank



Thanks, Mario





Re: not allowed to execute the service noop: Please add amdump to the line in /root/.amandahosts

2006-12-08 Thread Frank Smith

mario wrote:

Hi,

On Fri, 2006-12-08 at 08:40 -0600, Frank Smith wrote:

mario wrote:

Hi,

On Fri, 2006-12-08 at 09:12 -0500, Jean-Louis Martineau wrote:

mario wrote:

Hello List,

i am running Amanda version 2.4.5 on Ubuntu Breezy and i get this error:

These dumps were to tape TEST-2.
The next tape Amanda expects to use is: a new tape.
The next new tape already labelled is: TEST-3.

FAILURE AND STRANGE DUMP SUMMARY:
  planner: ERROR testserverNAK :  user backup from testserver is not
allowed to execute the service noop: Please add amdump to the line
in /root/.amandahosts
  testserver  /dev/sda2 RESULTS MISSING



But amdump is already in /root/.amandahosts:

cat /root/.amandahosts
localhost root amdump

  

why localhost? why root? The message tell you it's testserver and backup.

add: testserver backup amdump


I still get the same error. Looks like localhost gets resolved to
testserver?

Check /etc/hosts.  Some distros don't separate the localhost entry
from the actual hostname (i.e., put localhost on the real IP and
not just on 127.0.0.1)


My /etc/hosts:

127.0.0.1 localhost
192.168.178.20localhost testserver

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts


That looks good to me. Did you mean it like that?


Lookups using /etc/hosts are linear, from top to bottom, scanning
each line until a match is found.  In your case, the first name
that matches 192.168.178.20 is 'localhost'.  Try either reversing
the order of the names on the 192.xx line or removing the localhost
entry from that line.  Personally, I only have localhost assigned
to the loopback address (127.0.0.1), but there may be some reason
for listing it on both lines.
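
A minimal fix along those lines, using the addresses from the earlier
post, would be:

```
# /etc/hosts -- localhost only on the loopback address, so that
# 192.168.178.20 reverse-resolves to testserver first
127.0.0.1       localhost
192.168.178.20  testserver
```

With that ordering, Amanda's lookup of the client's address yields
'testserver', which then matches the .amandahosts entry.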

Frank




Re: Different issue this time: Rate-limiting?

2006-12-04 Thread Frank Smith

Paul Haldane wrote:


On Mon, 4 Dec 2006, Ian R. Justman wrote:

I managed to get things working on limiting the files I want in a 
given set, but it hasn't solved my larger problem, the ethernet 
interface goes down during the course of the backup, but bounces back 
online.  From what I've been able to garner via Google, the machine, a 
Sun Ultra 5, appears to suffer a design defect that if the network 
interface is under heavy load, the interface will shut down, then 
shortly thereafter, it'll come back online. Our NetSaint server will 
sometimes send (false) notifications showing that the server machine 
is down then back up.


Any ideas on how to get AMANDA to take it easy when sending data to 
the backup server?  I'd rather do it at the application layer if 
possible.


I don't think there's a way of doing this with Amanda.  The network 
bandwidth settings don't control the bandwidth used by backups once 
they're started, they just stop more backups being started until 
bandwidth has been freed up.


The real solution to your problem is to fix the hardware.  We had the 
same issues with some Ultra5s being used as file servers (it's just some 
revisions of the system board).  Find a supported PCI network card and 
use that instead of the on-board interface.


A temp workaround might be to rsync the disk to another system and back
it up from there.  You might have the same interface issues with rsync
(especially on the first rsync run), but the Amanda backup would finish.
The rsyncs may or may not timeout during the link failure, but you could
wrap it in a script that would keep re-running it until it completes, and
rsync won't have to start over like Amanda would.
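
A minimal sketch of such a wrapper.  The retry delay and the rsync
invocation shown in the comment are hypothetical, not from the original
post; adjust both to taste:

```shell
#!/bin/sh
# Re-run a command until it exits 0, so a flaky link only delays the
# transfer instead of killing it.  rsync resumes where it left off.
retry() {
    until "$@"; do
        echo "command failed (status $?); retrying in ${RETRY_DELAY:-60}s" >&2
        sleep "${RETRY_DELAY:-60}"
    done
}

# Example use (hypothetical paths/host):
# retry rsync -a --partial ultra5:/export/home/ /backup/ultra5/home/
```

Amanda then backs up the local mirror, which never sees the interface
bounce.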

Frank
('vacationing' at LISA)


Re: Upgrade failure?

2006-12-04 Thread Frank Smith

Gardiner Leverett wrote:
 


-Original Message-

Hi,

how is /mnt/usbdrive mounted? Amanda automatically runs the 
native filesystem dump program. It will run xfsdump for 
XFS filesystems, vxdump for Veritas filesystems, vdump for 
AdvFS (Tru64), dump for other filesystems.

I suspect dump cannot handle /mnt/usbdrive.
Consider using gnutar instead. See 
http://wiki.zmanda.com/index.php/Backup_client#Backup_programs




The filesystem is ext3.  It worked with dump normally until 
I upgraded Amanda.  

Here's something I noticed: the amdump usually runs unattended 
over a weekend.  To test, I'll run manually and check the progress

throughout the day.  Before the upgrade, it would backup this
disk (about 122G) in about 4 hours (the server and client are 
directly connected via a cross-over cable, so there's no other
traffic the two have to deal with).  After the upgrade, when 
I run the manual test, it stops at 320M, and I get all the 
dump and index errors.  (why exactly at 320M?).  


Have you tried unmounting and fsck'ing it?  It may just be a coincidence
that it became corrupted the same day you upgraded Amanda.

Also try running dump manually and see if it completes.  That would
narrow down the possibilities of where the problem is occurring.

Frank
('vacationing' at LISA)


Re: Need help converting to Debian amanda package

2006-11-30 Thread Frank Smith
Zembower, Kevin wrote:
 I've had an Amanda system running successfully and mostly problem-free
 for the last five years. Because of some firewall issues, I compiled
 Amanda from source with some specific flags for ports used. About a year
 ago, the firewall issues were removed, and amanda continued to work
 without any problems or configuration changes. Just recently, I got a
 new server with RHEL4 instead of the Debian servers I normally run. I
 decided to try to convert my Amanda system to use the packages provided
 by the distributions, since I no longer needed the customization and, as
 one amanda-user wrote, the Amanda packages 'just seemed to work.'
 
 I either deleted the amanda-owned files manually, or moved them out of
 the way. Then, I installed the amanda-client, -server and -command
 Debian sarge packages on the tapehost. Then, I moved the ones I thought
 I needed back into place, such as /var/amanda, /etc/amanda and
 /tmp/amanda. This was all on centernet (cn2), my Amanda tapehost.
 
 In general, the system seems to be minimally working, but I get this
 error:
 cn2:/tmp/amanda-dbg# su -s /bin/bash amanda '/usr/sbin/amcheck DBackup'
 Amanda Tape Server Host Check
 -
 Holding disk /dumps/amanda2: 14596644 KB disk space available, using
 14596644 KB
 Holding disk /dumps/amanda: 14473792 KB disk space available, using
 14473792 KB
 NOTE: skipping tape-writable test
 Tape DBackup39 label ok
 Server check took 5.210 seconds
 
 Amanda Backup Client Hosts Check
 
 WARNING: centernet: selfcheck request timed out.  Host down?
 WARNING: www: selfcheck request timed out.  Host down?
 Client check: 2 hosts checked in 34.333 seconds, 2 problems found
 
 (brought to you by Amanda 2.4.4p3)
 cn2:/tmp/amanda-dbg# date
 Thu Nov 30 15:00:59 EST 2006
 cn2:/tmp/amanda-dbg#
 
 However, nothing's generated in the /tmp/amanda-dbg directory, and
 nothing of any use in the /tmp/amanda directory:
 
 cn2:/tmp/amanda-dbg# ls -ltr
 total 384
 snip
 -rw---  1 amanda disk  2054 Nov 29 18:11
 sendbackup.20061129180201.debug
 -rw---  1 amanda disk   206 Nov 29 18:21
 amtrmlog.20061129182141.debug
 -rw---  1 amanda disk   553 Nov 29 18:21
 amtrmidx.20061129182141.debug
 cn2:/tmp/amanda-dbg# cd ../amanda
 cn2:/tmp/amanda# ls -ltr
 total 32
 snip
 -rw---  1 backup backup 184 Nov 30 14:30
 amcheck.20061130142955.debug
 -rw---  1 backup backup 184 Nov 30 15:00
 amcheck.20061130150004.debug
 cn2:/tmp/amanda# cat amcheck.20061130150004.debug
 amcheck: debug 1 pid 12684 ruid 1001 euid 0: start at Thu Nov 30
 15:00:04 2006
 amcheck: dgram_bind: socket bound to 0.0.0.0.856
 amcheck: pid 12684 finish time Thu Nov 30 15:00:38 2006
 cn2:/tmp/amanda#
 
 Can anyone help me trouble-shoot this problem, starting with where to
 find the debug files that would be useful? Right now, I'm not worried
 about getting the remote host www to backup.

   Any chance the package version of Amanda is using a different
user/group than you were previously, so the files you copied back
might have incorrect user/group/perms?
   The Debian package puts some logs in /var/log/amanda/ as well
as /tmp/amanda, perhaps RedHat is doing something similar.  Maybe
use locate or find to look for filenames containing amanda to see
if there's any amandad.*.debug files?

Frank

 
 Thanks for all your advice and suggestions.
 
 -Kevin
 
 Kevin Zembower
 Internet Services Group manager
 Center for Communication Programs
 Bloomberg School of Public Health
 Johns Hopkins University
 111 Market Place, Suite 310
 Baltimore, Maryland  21202
 410-659-6139 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amandas group membership in FC6?

2006-11-25 Thread Frank Smith
Gene Heskett wrote:
 Greetings;
 
 Despite the fact that the user 'amanda' is a member of the group 'disk', 
 all compilations and new files generated by the user amanda seem to be 
 owned by amanda:amanda instead of the expected amanda:disk.
 
 The end result is that many of my backup operations are failing because 
 the amanda utility doesn't have perms to delete or write to files 
 actually owned by amanda:disk.
 
 I just went thru all the directory trees amanda needs to access and 
 chowned everything back the way its supposed to be, but then I built the 
 20061124 tarball just now, and everything is still owned by 
 
 amanda:amanda.
 
 From my  /etc/group file:
 disk:x:6:root,amanda
 
 So I blew it away, called up KUsr and verified that amanda was indeed a 
 member of the group disk.  Even deleted the user and re-added it and made 
 sure this new copy of amanda was a member of the group disk.
 
 Then as amanda, I unpacked it again and rebuilt it, but I still have the 
 same problem.  Because none of the files are owned by amanda:disk, the 
 end result is several megs of dead, can't do a thing code that I'd just 
 as well not bother with the 'make install'.
 
 Anything that amanda has touched over the last 4 days since I started 
 running it again has been converted to being owned by amanda:amanda, and 
 if the file existed, and was to be deleted as part of the housekeeping, 
 was not because the old file was owned by amanda:disk.  So my backups are 
 being slowly trashed because the indice files are not updatable.
 
 Whats the deal with FC6 and its owner:group handling?  Am I setting up the 
 user wrong or what?
 
Perhaps something changed the amanda user's primary group in
/etc/passwd?  When a file is created, it gets the owner and primary
group from the passwd entry; /etc/group only supplies supplementary
memberships, which the system consults for access checks (for example,
group write permission on a directory owned by another group).

Another possibility is that you are forcing the group amanda runs as
in xinetd to be 'amanda' and not 'disk'.
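
For illustration, the entries that control this would look roughly like
the following (the uid and the amanda gid are made-up values; the disk
line matches the one quoted above):

```
# /etc/passwd -- the fourth field (gid) is the PRIMARY group,
# the one stamped on newly created files
amanda:x:33:6:Amanda backup user:/var/lib/amanda:/bin/bash

# /etc/group -- supplementary membership; used for access checks,
# not for the group assigned to new files
disk:x:6:root,amanda
amanda:x:34:
```

If the passwd field pointed at gid 34 instead of 6, every file amanda
creates would come out amanda:amanda regardless of the group file.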


Frank


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amandas group membership in FC6?

2006-11-25 Thread Frank Smith
Gene Heskett wrote:
 On Saturday 25 November 2006 20:42, Frank Smith wrote:
 Gene Heskett wrote:
 Greetings;

 Despite the fact that the user 'amanda' is a member of the group
 'disk', all compilations and new files generated by the user amanda
 seem to be owned by amanda:amanda instead of the expected amanda:disk.

 The end result is that many of my backup operations are failing
 because the amanda utility doesn't have perms to delete or write to
 files actually owned by amanda:disk.

 I just went thru all the directory trees amanda needs to access and
 chowned everything back the way its supposed to be, but then I built
 the 20061124 tarball just now, and everything is still owned by

 amanda:amanda.

 From my  /etc/group file:
 disk:x:6:root,amanda

 So I blew it away, called up KUsr and verified that amanda was indeed
 a member of the group disk.  Even deleted the user and re-added it and
 made sure this new copy of amanda was a member of the group disk.

 Then as amanda, I unpacked it again and rebuilt it, but I still have
 the same problem.  Because none of the files are owned by amanda:disk,
 the end result is several megs of dead, can't do a thing code that I'd
 just as well not bother with the 'make install'.

 Anything that amanda has touched over the last 4 days since I started
 running it again has been converted to being owned by amanda:amanda,
 and if the file existed, and was to be deleted as part of the
 housekeeping, was not because the old file was owned by amanda:disk. 
 So my backups are being slowly trashed because the indice files are
 not updatable.

 Whats the deal with FC6 and its owner:group handling?  Am I setting up
 the user wrong or what?
 Perhaps something changed the amanda user's primary group in
 /etc/passwd?  When new files are created, the user/group set are
 the ones in passwd, and /etc/group is only consulted by the system
 if the user is not the owner of the directory, then it checks if it
 is in the same group (assuming you have group write perms).

So what group does amanda have in /etc/passwd (the number in the fourth
field)?  See what that number maps to in /etc/group.  I'm betting it
goes to an 'amanda' group and not the 'disk' group.  It's also possible
that you have two amanda lines in your passwd file, or two groups in
/etc/group that map to the same number (or the same group name twice
with two different numbers).  In those cases the first match is what
the system uses, but it can certainly be confusing to debug if you
don't notice the other one.
   Your system is finding an amanda group for the amanda user somewhere;
it's just a matter of finding out where it is getting it from.  I would
suggest looking into whether you might have compiled it in, but I know
you always use your same build script, so I'll just mention it as a
possibility for future readers of the archives.

 Another possibility is that you are forcing the group amanda runs as
 in xinetd to be 'amanda' and not 'disk'.

 I hadn't thought of that, but the amanda file in the xinetd.d dir is the 
 same one I used for FC2:
 
 # default = off
 #
 # description: Part of the Amanda server package
 # This is the list of daemons & such it needs
 service amanda
 {
   only_from   = coyote.coyote.den # gene.coyote.den
   disable = no
   socket_type = dgram
   protocol= udp
   wait= yes
   user= amanda
   group   = disk
   groups  = yes
   server  = /usr/local/libexec/amandad
   server_args = -auth=bsd amdump amindexd amidxtaped
 }
 service amandaidx
 {
   disable = no
 socket_type = stream
 protocol= tcp
 wait= no
 user= amanda
 group   = disk
 groups  = yes
 server  = /usr/local/libexec/amindexd
 }
 service amidxtape
 {
   disable = no
 socket_type = stream
 protocol= tcp
 wait= no
 user= amanda
 group   = disk
 groups  = yes
 server  = /usr/local/libexec/amidxtaped
 }
 
 According to the amanda coders, this is correct usage.  Is it not now?

That looks correct to me, so I'd look more into the passwd/group
files mentioned above.

Frank


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: More Debian Amanda package upgrade issues (this time with amdump)

2006-11-16 Thread Frank Smith
Frank Smith wrote:
 Jean-Louis Martineau wrote:
 NOTES:
   planner: Last full dump of clientA:/home/user1 on tape  overwritten in 1 
 run.
 ...
 Bug, your last full is on holding disk, it should not report this 
 message, try the overwrite.diff attached patch.
 ...

The patch fixed that issue, or at least no longer reported the warning. I'm
not sure how to test if it will still report an actual occurrence other
than deleting all my previous level 0s.

 * After creating an archive, tar exits with code 1 if some files were
   changed while being read.  Previous versions exited with code 2 (fatal
   error), and only if some files were truncated while being archived.

 Does this mean that many backups will now be marked as FAILED instead
 of STRANGE?
   
 Yes
 Could you try the amanda-tar-1.1.6.diff attached patch?
 Since older releases never returned 1, I think it is safe to ignore 
 return status of 1.

I didn't happen to have any 'file changed' messages last night, so
I'll have to wait and see what happens when it does occur.

Thanks,
Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: More Debian Amanda package upgrade issues (this time with amdump)

2006-11-15 Thread Frank Smith
Jean-Louis Martineau wrote:

 It's a tar bug that it cross filesystem boundaries, you will have to use 
 exclude as workaround.

OK, I've added to my excludes the secondary drives and NFS mounts, but I
see this causing problems in the future when Debian flips the testing
distribution to stable next month, and the new tar becomes the default
install for many new (and upgrading) users.

 NOTES:
   planner: Last full dump of clientA:/home/user1 on tape  overwritten in 1 
 run.
...
 Bug, your last full is on holding disk, it should not report this 
 message, try the overwrite.diff attached patch.
...
 * After creating an archive, tar exits with code 1 if some files were
   changed while being read.  Previous versions exited with code 2 (fatal
   error), and only if some files were truncated while being archived.

 Does this mean that many backups will now be marked as FAILED instead
 of STRANGE?
   
 Yes
 Could you try the amanda-tar-1.1.6.diff attached patch?
 Since older releases never returned 1, I think it is safe to ignore 
 return status of 1.

Alright, I downloaded the source, applied the patches, and built
it (thank goodness the Amanda package was built with debug options,
so all the configure parameters were written into the debug logs).
I'll see how it goes tonight and report back tomorrow.

Thanks for the help,
Frank


 
 Jean-Louis
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: gtar dumper errors

2006-11-14 Thread Frank Smith
Gardiner Leverett wrote:
 I'm having a similar problem someone reported earlier. 
 I have a subdirectory I'm trying to archive with gtar from
 the client arnold.  It failed, so I subdivided it into 
 smaller pieces.  For example, I have about 10 user directories, 
 so I have a config for the directories a-b, c-d, e-g, h-m, and n-z.  
 Here's what I get:
 
 
 -Original Message-
  
 FAILURE AND STRANGE DUMP SUMMARY:
 arnold  /z/workareas-eg  lev 0 FAILED [mesg read: Connection reset by
 peer]
 arnold  /z/workareas-cd  lev 0 FAILED [cannot read header: got 0 instead
 of 32768]
 arnold  /z/workareas-ab  lev 0 FAILED [cannot read header: got 0 instead
 of 32768]
 arnold  /z/workareas-eg  lev 0 FAILED [too many dumper retry]
 arnold  /z/workareas-eg  lev 0 FAILED [cannot read header:  got 0 instead
 of 32768]
 arnold  /z/workareas-cd  lev 0 FAILED [too many dumper retry]
 arnold  /z/workareas-cd  lev 0 FAILED [cannot read header:  got 0 instead
 of 32768]
 arnold  /z/workareas-ab  lev 0 FAILED [too many dumper retry]
 arnold  /z/workareas-ab  lev 0 FAILED [cannot read header:  got 0 instead
 of 32768]
 
 And in the dump summary, I get this:
  
 DUMP SUMMARY:
 arnold   -rkareas-ab 0  FAILED
 arnold   -rkareas-cd 0  FAILED
 arnold   -rkareas-eg 0     N/A   242     --    N/A    N/A   N/A      N/A  PARTIAL
 arnold   -rkareas-hm 0     242    32   13.3  25:58  158.9  0:03  11598.7
 arnold   -rkareas-nz 0     106    22   21.1  10:58  164.6  0:02  11269.6
 
 (brought to you by Amanda version 2.5.0b1-20051205)
 
 Any idea why I get two of the sub sections, a partial of one, and nothing
 from the other two?
 

What version of tar are you using?  It might be related to what I
saw in my recent posting, where tar 1.16 now returns an error code
of 1 when a file changes during the run.  Previous versions returned
a 2 in that case, which I think just caused Amanda to flag it as
STRANGE, but a 1 is interpreted as failed, so Amanda retries from the
beginning until either it works or you hit 'too many dumper retries',
whatever number that is.  I haven't seen any replies on what 'PARTIAL'
means.

Frank


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


More Debian Amanda package upgrade issues (this time with amdump)

2006-11-13 Thread Frank Smith
 
DUMP SUMMARY:
                                          DUMPER STATS            TAPER STATS
HOSTNAME DISK          L   ORIG-kB   OUT-kB  COMP%  MMM:SS    KB/s  MMM:SS  KB/s
-------------------------- ---------------------------------------- -----------
clientA  /             0       N/A  2775998     --     N/A     N/A     N/A   N/A  PARTIAL
clientA  /home/user1   1    123420    85756   69.5    0:13  6369.7     N/A   N/A
clientA  /home/user8   1        10        1   10.0    0:00    34.5     N/A   N/A
clientA  /home/user2   1      3710      107    2.9    0:03    42.5     N/A   N/A
clientA  /home/user3   1     25280     8023   31.7    0:02  5163.1     N/A   N/A
clientA  /home/user4   1       190       18    9.5    0:00   329.4     N/A   N/A
clientA  /home/user5   1       110        5    4.5    0:00   164.4     N/A   N/A
clientA  /home/user9   1     17160     8328   48.5    0:02  5390.3     N/A   N/A
clientA  /home/user6   1        10        1   10.0    0:00    46.0     N/A   N/A
clientA  /home/user7   1        10        1   10.0    0:00    36.8     N/A   N/A
server   /             2    601820   249679   41.5    1:30  2763.1     N/A   N/A
clientB  /             1     10680     3375   31.6    0:47   227.1     N/A   N/A

(brought to you by Amanda version 2.5.1p1)

What does 'PARTIAL' mean?  ClientA / was reported as about 4.1GB
during both its failed runs above, so is the 2.7GB reported in the
dump summary the compressed size (the dumptype uses compress-fast)
or was it in reality a partial dump?

I've really been surprised by the number of issues I've encountered
making a minor version change (2.5.0-2.5.1), especially since the
upgrade from 2.4.2 to 2.5 went fairly smoothly.  I'm sure part of
the problems are a result of tar 1.16.  One relevant changelog entry
for tar (1.15.92) is:
* After creating an archive, tar exits with code 1 if some files were
  changed while being read.  Previous versions exited with code 2 (fatal
  error), and only if some files were truncated while being archived.

Does this mean that many backups will now be marked as FAILED instead
of STRANGE?

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Debian Amanda package upgrade issues

2006-11-12 Thread Frank Smith
Since I play further on the edge with my home backups than the ones
for work, I updated my home Amanda server and clients (Debian etch
packages) from 2.5.0p2-2.1 to 2.5.1p1-2, and tar from 1.15.91-2 to
1.16-1.
  The first ones upgraded were a couple of the clients yesterday,
and that appeared to go well, as last nights backups were successful.
   Today I upgraded the server (and the client on it as well), and
ran into some issues concerning my amanda.conf, which hadn't changed.
amcheck errored out on the following in my amanda.conf:
/etc/amanda/daily/amanda.conf, line 48: configuration keyword expected
/etc/amanda/daily/amanda.conf, line 48: end of line is expected
/etc/amanda/daily/amanda.conf, line 122: dumptype parameter expected
/etc/amanda/daily/amanda.conf, line 122: end of line is expected
/etc/amanda/daily/amanda.conf, line 126: dumptype parameter expected
/etc/amanda/daily/amanda.conf, line 126: end of line is expected
/etc/amanda/daily/amanda.conf, line 134: dumptype parameter expected
/etc/amanda/daily/amanda.conf, line 134: end of line is expected
/etc/amanda/daily/amanda.conf, line 143: dumptype parameter expected
/etc/amanda/daily/amanda.conf, line 143: end of line is expected

line 48:
logfile  /var/log/amanda/daily/log
which worked before today.  Evidently 'logfile' is no longer a valid
parameter, I guess I just use 'logdir' instead and can no longer
specify the filename.

line 122:
maxcycle 0
I'm not sure where I got this from or what it was supposed to do, it
was in an always-full dumptype that I didn't use, but amanda never
complained about it before.  I just deleted it.

lines 126, 134, 143:
options no-compress
options compress-fast
options srvcompress
Evidently you can no longer use the word 'options' but just use
the option name by itself.
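Based on the parser behavior described above, the fix is mechanical; here is a hypothetical before/after dumptype (the dumptype name is made up):

```
# 2.5.0 and earlier:
define dumptype example-comp {
    options compress-fast
}

# 2.5.1: drop the word 'options' and use the option name by itself:
define dumptype example-comp {
    compress-fast
}
```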

Once I fixed all the above to please the parser, amcheck completed
successfully, although it warned of what I think is a serious
change:

WARNING: holding disk /hdb/holdingdisk/daily: use nothing because
'use' is set to 0

According to the man page, and how it worked in the past,
use  int
Default: 0 Gb.  Amount of space that can be used in this holding
disk  area.   If  the  value is zero, all available space on the
file system is used.  If the value is negative, Amanda will  use
all available space minus that value.

Since the disk my holdingdisk is on is only used for Amanda, and
since I let a week's worth of dumps collect there before flushing
to tape (it will hold 2 week's worth), 'use 0' seemed like a
good value. I changed it to 'use -1mb', but either the code or the
documentation needs to change so they match.
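In other words, the workaround looks like this (directory path taken from the warning above; treat it as a sketch, not a vetted config):

```
holdingdisk daily {
    directory "/hdb/holdingdisk/daily"
    # 'use 0' used to mean "all available space"; the 2.5.1 parser
    # warns on it.  A negative value means "all available space
    # minus this much":
    use -1 mb
}
```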
   I don't know if these are Amanda developer issues or Debian
package maintainer issues, but I thought I would give a heads-up
to the list in case others get surprised by the changes.

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Question on bandwidth limits

2006-10-27 Thread Frank Smith
Joel Coltoff wrote:
 A recent thread on Loong backups got me to look at my configuration. It
 always seemed to me that it should run a bit faster than it does. I wasn't too
 concerned given that it started at 9:00 PM and was done by 2:00 AM. It never
 got in the way of things. We are moving to a much larger server and I'd like
 to resolve this for the flood of new files we'll have. We are running 2.4.4p2.
 
 I've been trying different numbers in my interface setup and they don't seem
 to have any effect. This is what I have in my amanda.conf file. Most of the
 DLEs are on the host that runs amanda. I don't have a setting for netusage
 so I get the default. When the dump is done amstatus reports.
 
  network free kps: 10700
 
 define interface local {
 comment a local disk
 use 1 kbps
 }
 
 define interface ethernet {
 comment 10 Mbps ethernet
 use 400 kbps

You're limiting yourself quite a bit here.  Amanda won't
start another dumper if the current dumpers are using more
than 4% of your bandwidth, so you'll probably never see
two remote dumps happening simultaneously.  I would suggest
setting it much higher (like maybe 7 or 8 Mbps), and if
other apps are impacted too much during the dumps, lower it
a little bit each day until the other apps are no longer
too affected.

}
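A less restrictive definition along the lines suggested might look like this (the number is the suggested starting point, to be lowered if other traffic suffers):

```
define interface ethernet {
    comment "10 Mbps ethernet"
    # Allow dumps to use up to ~8 Mbps before Amanda stops
    # starting new dumpers on this interface:
    use 8000 kbps
}
```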
 
 My disklist looks like this
 
 phoenix.wmi.com/export/cdburn project-files   2   local
 phoenix.wmi.com/export/cac  project-files   2   local
 phoenix.wmi.com/export/opt   project-files   2   local
 phoenix.wmi.com/export/project project-files   2   local
 
 goliath.wmi.com /users /users {
 user-files
 exclude ./jc517/lightvr/*/*.o
 exclude append ./jc517/lightvr/*/*.bz2
 } 1 ethernet
 
 goliath.wmi.com /export /export {
 user-files
 include  ./wumpus ./plover ./uclibc ./vendor
 } 1 ethernet
 
 Finally, here is the tail of amstatus
 
 network free kps:  6700
 holding space   :  10119040k ( 99.92%)
 dumper0 busy   :  4:20:38  ( 94.20%)
 dumper1 busy   :  0:02:16  (  0.82%)
taper busy   :  3:31:28  ( 76.43%)
 0 dumpers busy :  0:14:44  (  5.33%)no-diskspace:  0:14:44  (100.00%)
 1 dumper busy  :  4:19:40  ( 93.85%)no-bandwidth:  2:45:21  ( 63.68%)
  not-idle:  1:34:19  ( 36.32%)
 2 dumpers busy :  0:02:16  (  0.82%)no-bandwidth:  0:02:16  (100.00%)
 
 
 If I run amstatus I'll see no-bandwidth associated with 1 dumper busy more
 often than not.
 What's a reasonable number to use so that I have more than 1 dumper running
 at a time? I guess the real question is should a single dump saturate
 connections to the localhost?

Unless you have other things running on your network while the
backups are running, and they can't tolerate the slowdown (such
as VOIP traffic), you should raise the bandwidth limit fairly high.
A single dump may not be able to stream more than a few Mb/sec,
due to factors such as disk speed, CPU, and network architecture.
As for your 'local' speed (where the client and server are the same
host), that interface has practically infinite bandwidth (not really,
but from a backup perspective it does), your limiting factor is bus,
disk controller, and disk I/O, and you're better off using a
combination of spindle numbers and inparallel to control the local
backups.
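A sketch of that approach, using the poster's DLEs (the spindle assignments here are hypothetical; they should reflect which DLEs actually share a physical disk):

```
# disklist: host  disk  dumptype  [spindle]  [interface]
# DLEs with the same spindle number are never dumped concurrently;
# inparallel (in amanda.conf) caps total simultaneous dumpers.
phoenix.wmi.com  /export/cdburn   project-files  1  local
phoenix.wmi.com  /export/cac      project-files  1  local   # same disk as above
phoenix.wmi.com  /export/project  project-files  2  local   # separate disk
```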

Frank

 
 Thanks
   - Joel
 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Looooooooong backup

2006-10-18 Thread Frank Smith
Steven Settlemyre wrote:
 I have a monthly (full) backup running for about 22 hrs now. Do you 
 think there is a problem, or is it possible it's just taking a long 
 time? about 150G of data.
 
 Steve

It depends on the speed of your network, disks, tape drive, and how
busy the servers involved are with other things, but I would say
it probably shouldn't take that long.
What does amstatus say?

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Setting up AMANDA to use private network.

2006-10-11 Thread Frank Smith
Lengyel, Florian wrote:
 Sure: have the DLE (disk list entry) refer to the name
 of the interface you want to use (you should, of course,
 give them separate names). I have machines with multiple
 interfaces (external inet connection, internal backup 
 interface, etc) and I make sure the backup is through
 the designated backup interface. I don't want any
 amanda traffic on my web servers going through the
 internet interfaces: the traffic is confined to the
 interface for backup. Just use the hostname (FQDN or alias) 
 of the interface you want in the disk list entry on
 the amanda server.

Also make sure the client knows the server by the name that
uses the right interface, as some of the connections are
initiated by the client.

Frank

 
 -FL
 
 -Original Message-
 From: [EMAIL PROTECTED] on behalf of Alan Jedlow
 Sent: Wed 10/11/2006 4:32 PM
 To: amanda-users@amanda.org
 Subject: Setting up AMANDA to use private network.
  
 Greetings,
 
 Is possible to configure AMANDA to use a specific
 interface for data transfer?
 
 My backup server and clients have two NICs configured:
 eth0 is on a private 1000BaseT network and eth1 is on
 a public 100BaseT network.  I'm hoping to restrict
 backups to eth0, the private high speed network.
 
 thanks,
   alan
 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amrecover fails in rel 2.5.1 with NAK: amindexd: invalid service

2006-09-21 Thread Frank Smith
David Trusty wrote:
 Hi,
 
 I just installed release 2.5.1 on a SUSE 10.1 machine.  I was able
 to do a backup fine, but when I try to run amrecover, I get this error:
 
 # amrecover Monthly
 AMRECOVER Version 2.5.1. Contacting server on localhost ...
 NAK: amindexd: invalid service

Do you have the amindexd service configured in (x)inetd?
Does it show up in a 'netstat -l' ?
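For reference, the restore services would need inetd entries along these lines (paths and user are assumptions about the install; adjust to match yours):

```
# /etc/inetd.conf sketch -- amindexd serves the index to amrecover,
# amidxtaped streams data back from tape:
amandaidx stream tcp nowait amanda /usr/local/libexec/amindexd   amindexd
amidxtape stream tcp nowait amanda /usr/local/libexec/amidxtaped amidxtaped
```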

 The amcheck command shows no errors.

amcheck only checks to see if backups would work, not restores,
so it won't tell you if index or tape services are available.

 
 Any ideas?

See above.  Also, not related to your current problem, you
should look in the list archives for threads on how using
'localhost' in your disklist can cause problems later on.

Frank

 
 Thanks,
 
 David
 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: changes to amanda.conf for amanda 2.5.1?

2006-09-14 Thread Frank Smith
Chan, Victor wrote:
 To all,
 
  
 
 I've changed up to Amanda 2.5.1.  For some reason, these three
 parameters that used to work in Amanda.conf didn't work anymore. 
 

What version were you using?
  
 
 diskdir var/tmp:

I don't see that in my 2.4.5 or my 2.5 configs.   I vaguely recall that
was what the 'holdingdisk' parameter used to be called.  Also, you're
missing the closing quote so the parser would choke anyway.

 
 disksize 25000 MB

Same issue as above.  You need something like:
 holdingdisk name {
     use 200 Gb
     ...
 }
 
 logfile /usr/adm/Amanda/test/log

The log filenames are compiled in, but there is a logdir option to
specify the directory the logfiles get written into.  You might want
to read through the entire amanda.conf man page, as there are many
new options worth considering even if your old config does work.

Frank

 
  
 
 When I ran amcheck it tells me on those lines that 
 
  
 
  
 
 /usr/local/etc/amanda/test/amanda.conf, line 19: configuration keyword
 expected
 
 /usr/local/etc/amanda/test/amanda.conf, line 19: end of line is
 expected
 
  
 
 When I commented out those three parameters, amcheck works just fine.
 
  
 
 Is there a change in the parameter, or the syntax, that I did not know
 about?
 
  
 
 Victor Chan
 
 Network Administrator
 
 CSI Financial Services
 
 3636 Nobel Drive #215
 
 San Diego, CA 92122
 
 (858)200-9211
 
  
 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: selfcheck request timed out WAS Re: Running amanda client only

2006-09-13 Thread Frank Smith
[EMAIL PROTECTED] wrote:
 Replying to my own post (sorry, but I'm getting desperate here), I took
 the advice of the FAQ-O-Matic and installed lsof and ran this command:
 
 su-2.05b# lsof -uamanda
 
 There was no output.  Should there have been?  Does this mean that amandad
 is not running?  

No, amandad should not be running except during a backup.  When the server
connects to the amanda port on the client inetd starts amandad.

 This is what I have in my /etc/inetd.conf:
 
 amanda   dgram  udp  wait  amanda /usr/local/libexec/amandad  amandad
 
 inetd has been killed and restarted, the client even
 rebooted...anything?

Have you tried running /usr/local/libexec/amandad from the command line
on the client (it should just sit there and eventually time out and
return your prompt, or immediately exit if you hit a key)?  Perhaps
you're missing a library (or need to run ldconfig or whatever is
needed to update the dynamic library cache on your platform).
Does the system log inetd even running amandad?
Does amandad run and create anything in /tmp/amanda?
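The checks above can be scripted roughly like this (the paths are the common defaults mentioned in the thread and are assumptions about the install):

```shell
# Client-side sanity checks: is amandad where inetd.conf says it is,
# and did any run leave debug files behind?
AMANDAD=/usr/local/libexec/amandad
if [ -x "$AMANDAD" ]; then
    echo "amandad found at $AMANDAD"
else
    echo "amandad not found at $AMANDAD"
fi
ls /tmp/amanda 2>/dev/null || echo "no /tmp/amanda debug files"
```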

Frank

 
 On Wed, 13 Sep 2006 [EMAIL PROTECTED] wrote:
 
 On Wed, 13 Sep 2006, Toomas Aas wrote:

 [EMAIL PROTECTED] wrote:

 The server is reporting that client down, so I checked and noticed that
 the FBSD port for the amanda-client did not install amindexd or
 amidxtaped, although it did install amandad.  Are all 3 needed for a
 client?
 No, for client only amandad is needed.
 Then I cannot figure out why I'm getting selfcheck request timed out
 from that client.  The path in inetd.conf is correct, as is the user
 (amanda) and /tmp/amanda is owned by amanda and has debug files there
 (just config info).  .amandaclients has localhost.fqdn as well as
 hostname.fqdn.  That client IS running alot of IP addresses on it, but
 I've done that before with no trouble.

 Here is amcheck -c output:

 su-2.05b$ amcheck -c weekly

 Amanda Backup Client Hosts Check
 
 ERROR: old.amanda.client: [host someIP.comcastbiz.net: hostname lookup
 failed]
 WARNING: new.amanda.client: selfcheck request timed out.  Host down?
 Client check: 2 hosts checked in 30.153 seconds, 2 problems found

 I understand the first error from the old client...there is no forward DNS
 on that IP (BTW, is there a way around that?  Just have it look at IP
 address?).  I can't figure out the cause of the second one...I went
 through everything on the FAQ-O-Matic about it...

 James Smallacombe  PlantageNet, Inc. CEO and Janitor
 [EMAIL PROTECTED]
 http://3.am
 =


 
 James Smallacombe   PlantageNet, Inc. CEO and Janitor
 [EMAIL PROTECTED] 
 http://3.am
 =
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Why didn't my backup work (the way I thought it should)?

2006-09-13 Thread Frank Smith
 impossible, and the sooner you are
aware of it the better.

Frank

 
 Thanks for your advice and suggestions.
 
 -Kevin
 
 Kevin Zembower
 Internet Services Group manager
 Center for Communication Programs
 Bloomberg School of Public Health
 Johns Hopkins University
 111 Market Place, Suite 310
 Baltimore, Maryland  21202
 410-659-6139 
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: selfcheck request timed out WAS Re: Running amanda client only

2006-09-13 Thread Frank Smith
[EMAIL PROTECTED] wrote:
 On Wed, 13 Sep 2006, Frank Smith wrote:
 
 [EMAIL PROTECTED] wrote:
 Replying to my own post (sorry, but I'm getting desperate here), I took
 the advice of the FAQ-O-Matic and installed lsof and ran this command:

 su-2.05b# lsof -uamanda

 There was no output.  Should there have been?  Does this mean that amandad
 is not running?
 No, amandad should not be running except during a backup.  When the server
 connects to the amanda port on the client inetd starts amandad.

 This is what I have in my /etc/inetd.conf:

 amanda   dgram  udp  wait  amanda /usr/local/libexec/amandad  amandad

 inetd has been killed and restarted, the client even
 rebooted...anything?
 Have you tried running /usr/local/libexec/amandad from the command line
 on the client (it should just sit there and eventually time out and
 return your prompt, or immediately exit if you hit a key)?  Perhaps
 you're missing a library (or need to run ldconfig or whatever is
 needed to update the dynamic library cache on your platform).
 Does the system log inetd even running amandad?
 Does amandad run and create anything in /tmp/amanda?
 
 Thanks for your reply.  I did run it from the command line and it
 eventually times out, although it does not exit immediately if I type
 something.

2.4.5 will exit if you hit enter, 2.5 doesn't.

 debug files are created in /tmp/amanda, but not with any
 useful info...just how it was compiled.  There is absolutely nothing in
 /var/log/messages about amanda(d).

Inetd doesn't normally log unless you start it with a debug option.
In your case, since  a debug file is created then inetd is working
so you may not need to check further on that.
Post the debug file on the client and maybe we can see at what stage
of the connection it stops.  Are you running a firewall on or between
the client?

 I thought ldconfig was run upon
 boot...in any case, if I'm missing some kind of lib, wouldn't amanda
 complain about it when trying to build it?

The ldconfig suggestion is only relevant if running amandad from the
command line returns a missing library error, sorry I wasn't clear
about that.  And yes it would complain on build, I was thinking more
of cases where amanda lives on a shared filesystem or are copied to
multiple machines.

Frank

 
 Thanks again for any help...
 
 James Smallacombe   PlantageNet, Inc. CEO and Janitor
 [EMAIL PROTECTED] 
 http://3.am
 =
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Amanda 2.4, not using work area

2006-09-08 Thread Frank Smith
Brian Cuttler wrote:
 Hello amanda users,
 
 I'm running an old version of amanda 2.4 on a Solaris 9 system
 with a Solaris 8 client.
 
 Its come to my attention that the two larger client partitions
 are not being moved through the work area but are being written
 directly to tape.
 
 The work area is a 70 Gig partition, the client DLEs are on 35 Gig
 partitions. I'd expect to use the work area even if they did so
 sequentially, the partitions however are only about 70% occupied,
 aprox 24 Gig each, so ideally I'd have liked to have seen some
 parallelism.
 
 From the daily reports I see that the smaller client partitions
 on both the Solaris 8 and 9 machine (the amanda server does have
 itself as a client) do utilize the work area.
 
 I do not know what is preventing the work area from being used.
 I would add more work area if I thought it would help, but I don't
 see anything screaming work area capacity issue.

If the direct-to-tape DLEs are level 0s, look at the 'reserve' option.
It tells amanda what percentage of your holdingdisk to save for use by
incrementals, so in case of tape problems you can run longer because
you don't fill it up with fulls. I don't remember what it defaults to
if not specified, but I think it is most of the space.

 
 Here is a question, I assume chunksize appeared around the same
 time (if not actually with) the ability to split a single DLE
 across multiple work areas. I see it back in the docs into '98
 or more but I'm not sure when it first appeared. Is there a list
 of what version which features where added, other than the changelog
 installation file ?

Chunksize was a workaround for writing dumps to disk larger than the
system's max file size (which was 2GB on many machines at the time).
I think support for multiple holding disks was added later.
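For example (directory path hypothetical), a chunksize keeps each holding-disk file under the old per-file size limit:

```
holdingdisk hd1 {
    directory "/dumps/amanda"
    chunksize 1 Gb   # no holding-disk file grows past 1 GB
}
```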

Frank

 
 Anyway it doesn't look like a work area capacity issue. What, other
 than adding chunksize to my amanda.conf and perhaps adding additional
 work area can I do to investigate this issue.
 
 There does not seem to be any output in the /tmp/amanda/* files
 showing which DLEs will be work area and which will not, where else
 can I look for an explanation/solution to this issue ?
 
   thank you,
 
   Brian
 ---
Brian R Cuttler [EMAIL PROTECTED]
Computer Systems Support(v) 518 486-1697
Wadsworth Center(f) 518 473-6384
NYS Department of HealthHelp Desk 518 473-0773
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: NFS strangeness

2006-09-08 Thread Frank Smith
Steffan Vigano wrote:
 Hello all,
 
 Just wondering if anyone might be able to shed some light on my 
 problem... we recently got a SnapServer on our network and would like to 
 use our existing Amanda install to back it up via NFS but so far have 
 been unsuccessful.
 
  - FreeBSD 4.7-RELEASE
  - Amanda v2.4.4b1
  - GNU tar v1.13.25
 
 I have tried mounting the NFS shares onto the main Amanda backup host as 
 well as other servers being backed up via Amanda.  I can manually, from 
 the command line, read/write and tar up the attached NFS shares without 
 issue, both as root and user Amanda.  Yet when Amanda runs through its 
 backup process, nothing gets backed up from that share.  I've checked 
 the logs, and there are no errors showing.  Running an amrecover shows 
 the root directory that the share is mounted on, but there are no files 
 or directories within to recover.  I've tried changing the dump type in 
 amanda.conf to both DUMP and GNUTAR for the partition in question.  No 
 change.  I've also tried an NFS mount from an alternate FreeBSD machine 
 to rule out the SnapServer.  Same issue.
 
 Here are the runtar and sendsize debug files:
 
 runtar: debug 1 pid 49585 ruid 1002 euid 0: start at Fri Sep  8 
 13:41:26 2006
 gtar: version 2.4.4b1
 running: /usr/bin/tar: gtar --create --file - --directory /nfs 
 --one-file-system --listed-incremental 
 /usr/local/var/amanda/gnutar-lists/phatb.boothcreek.net_dev_aacd0s2h_1.new 
 --sparse --ignore-failed-read --totals --exclude-from 
 /tmp/amanda/sendbackup._dev_aacd0s2h.20060908134126.exclude . 
 
 sendsize: debug 1 pid 50252 ruid 1002 euid 1002: start at Fri Sep  8 
 13:59:38 2006
 sendsize: version 2.4.4b1
 sendsize[50252]: time 0.005: waiting for any estimate child
 sendsize[50257]: time 0.005: calculating for amname '/dev/aacd0s2h', 
 dirname '/nfs', spindle -1
 sendsize[50257]: time 0.006: getting size via gnutar for /dev/aacd0s2h 
 level 0
 sendsize[50257]: time 0.008: spawning /usr/local/libexec/runtar in 
 pipeline
 sendsize[50257]: argument list: /usr/bin/tar --create --file /dev/null 
 --directory /nfs --one-file-system --listed-incremental 
 /usr/local/var/amanda/gnutar-lists/phatb.boothcreek.net_dev_aacd0s2h_0.new 
 --sparse --ignore-failed-read --totals --exclude-from 
 /tmp/amanda/sendsize._dev_aacd0s2h.20060908135938.exclude .
 sendsize[50257]: time 0.052: Total bytes written: 102400 (100kB, 3.1MB/s)
 sendsize[50257]: time 0.053: .
 sendsize[50257]: estimate time for /dev/aacd0s2h level 0: 0.045
 sendsize[50257]: estimate size for /dev/aacd0s2h level 0: 100 KB
 sendsize[50257]: time 0.053: waiting for /usr/bin/tar /dev/aacd0s2h 
 child
 sendsize[50257]: time 0.053: after /usr/bin/tar /dev/aacd0s2h wait
 sendsize[50257]: time 0.054: done with amname '/dev/aacd0s2h', dirname 
 '/nfs', spindle -1
 sendsize[50252]: time 0.054: child 50257 terminated normally
 sendsize: time 0.054: pid 50252 finish time Fri Sep  8 13:59:38 2006
 
 as you can see... it only sees the 100k of real test files that I have 
 in the /nfs directory, and not the actual mounts that live under /nfs.  ???
 
 Am I missing something easy?   Anything special I need to add to 
 'amanda.conf'? or 'disklist'?  I do just mount the NFS share to an 
 existing entry in my disklist, right?  Rather then adding the NFS mount 
 as a separate entry? 

Dump works on devices, not filesystems, so it won't work on an NFS mount.
Tar works on filesystems, but Amanda calls it with the option to not
cross filesystem boundaries, so a backup of /nfs in your case will
just give you the local files in /nfs and not mounts under /nfs.
  If you are trying to backup /nfs/remotedir, add /nfs/remotedir
to your disklist and it will do what you want.  I backup our legacy
NetApps via an NFS mount this way and it works fine.
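Concretely, with the poster's layout the disklist distinction looks like this (the hostname and dumptype name are stand-ins):

```
# tar is run with --one-file-system, so a DLE for /nfs captures only
# local files; each NFS mount point needs its own DLE:
backuphost  /nfs            user-tar   # local contents of /nfs only
backuphost  /nfs/remotedir  user-tar   # the NFS-mounted filesystem
```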

Frank

 
 I've check all the permissions on the share (no_root_squash, etc)...  
 I'm at a loss.  Digging around the net, I've been unable to find any 
 clear docs on what the proper setup might need to look like.  Anyone 
 have any more suggestions about next steps?
 
 Thanks a bunch,
 -Steffan


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amdump frozen

2006-08-25 Thread Frank Smith
Jeff Portwine wrote:
 This morning first thing I looked in my email to see the amanda backup
 report to see how it ran last night there was no report there.So I
 checked the server and doing a ps showed:
 
 backup   17511  0.0  0.2  2272 1064 ?Ss   Aug24   0:00 /bin/sh 
 /usr/local/sbin/amdump DailySet1
 backup   17523  0.0  0.2  2604 1080 ?SAug24   0:00 
 /usr/local/libexec/driver DailySet1
 backup   17525  0.0  0.1  3268 1008 ?SAug24   0:00 taper DailySet1
 backup   17526  0.0  0.1  2592  952 ?SAug24   0:00 dumper0 
 DailySet1
 backup   17527  0.0  0.1  2592  952 ?SAug24   0:00 dumper1 
 DailySet1
 backup   17528  0.0  0.1  2592  952 ?SAug24   0:00 dumper2 
 DailySet1
 backup   17529  0.0  0.1  2592  952 ?SAug24   0:00 dumper3 
 DailySet1
 backup   17530  0.0  0.1  2592  952 ?SAug24   0:00 dumper4 
 DailySet1
 backup   17532  0.0  0.2  3260 1108 ?SAug24   0:00 taper DailySet1
 backup   17542  0.0  0.0 00 ?ZAug24   0:00 [chunker] 
 defunct
 
 It's been stuck like this since I don't know what time last night.  
 If I kill off these processes, what will Amanda do the next time it runs?
 Will it try again with the current tape, overwriting whatever it managed
 to get on the holding disk before it froze last night?I'm not really
 sure what (if anything) got backed up last night or how I should proceed.
 
 Thanks for any suggestions,
 Jeff
 
You can kill all the processes and then run amcleanup.  If you have
dumps in the holdingdisk then you can run amflush to write them to
tape (or if you have autoflush enabled in your config they will get
flushed on the next run).  On your next run amanda will use the next
tape, even if nothing actually was written to the one last night.
   Since your taper process is hung, I suspect either it was in the
middle of a direct to tape dump of a client that went away, or you
have some disk or tape I/O errors (SCSI bus hang, etc.) that might
need to be checked out.
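The recovery sequence, sketched as a guarded script (the config name comes from the original post; the guard just keeps the sketch safe to paste on a machine without Amanda installed):

```shell
# After killing the hung processes: amcleanup reaps the aborted run,
# amflush writes held dumps from the holding disk to tape.
CONFIG=DailySet1
for cmd in amcleanup amflush; do
    if command -v "$cmd" >/dev/null 2>&1; then
        echo "run: $cmd $CONFIG"
    else
        echo "$cmd not installed on this host"
    fi
done
```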

Frank


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amcheck failure did not resolve

2006-08-25 Thread Frank Smith
Chris Dennis wrote:
 Hello Amanda People
 
 I can't find any reference on the web to this error message which
 started occurring a couple of days ago:
 
WARNING: cherry2000: selfcheck request failed:
 cherry2000.mydomain.org.uk: did not resolve to cherry2000.mydomain.org.uk
 
 Running 'host cherry2000.mydomain.org.uk' gives the expected result:
 
cherry2000.mydomain.org.uk has address 192.168.1.2

but does 192.168.1.2 come back as cherry2000.mydomain.org.uk ?
 
 and I haven't (knowingly) changed anything related to domain names.
 
 Since I started getting that message, all backups have been failing with 
 messages such as
 
cherry2000  /RESULTS MISSING
 
 I'm running Amanda version 2.5.0p2 on Debian with a 2.6.17 kernel.
 
 Any suggestions as to where to start looking?

I would start with checking forward/reverse name resolution on the
client.  Adding it to the client's hosts file might be sufficient if its
nsswitch.conf lists 'files' before 'dns' on the hosts line.
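Concretely (address and names taken from the thread), that means something like:

```
# /etc/nsswitch.conf -- consult /etc/hosts before DNS:
hosts:  files dns

# /etc/hosts on the client -- make forward and reverse lookups agree:
192.168.1.2   cherry2000.mydomain.org.uk   cherry2000
```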


Frank

 
 regards
 
 Chris


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amanda client inetd problem

2006-08-11 Thread Frank Smith
Jon LaBadie wrote:
 
 From a few things I'm guessing that both client and server are
 running on linux systems.  Out of curiosity, which distros
 still use inetd rather than xinetd?
 
Debian still uses inetd by default, although xinetd and several
other variants are available as optional packages.

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amanda client inetd problem

2006-08-11 Thread Frank Smith
Gene Heskett wrote:

 The debian camp and its offspring ubuntu, hasn't made the switch yet that 
 I'm aware of.  I just installed kubuntu-6.06 on my milling machines box so 
 I could stay reasonably well synched with the emc2 cvs, and was amazed 
 that the default install was still using inetd, or at least the 
 whole /etc/xinetd.d thing seemed to be missing.  I installed it, but the 
 basic install contains only:
 
 [EMAIL PROTECTED]:~$ ls /etc/xinetd.d
 chargen  daytime  echo  time
 
 So its not as if the system would die if I did an rm -fR /etc/xinetd.d.
 
 The added advantages of xinetd over inetd would seem to make it imperative 
 to switch, but then we all know the debian camp moves at glacial speed for 
 the core stuff.
 
 Maybe that's an unfair remark Jon, I just did a cat of /etc/inetd.conf and 
 found it only contains:
 #off# netbios-ssn stream  tcp nowait  
 root/usr/sbin/tcpd  /usr/sbin/smbd
 
 so what the heck *are* they doing to control daemon launching?  Me wanders 
 off, scratching head in wonderment.
 
They add init scripts and run them as daemons, naturally.  There is
considerable delay in starting a program of any size, so leaving
it running gives better response time.  Back in the old days, there
were memory constraints so many services were only started when needed
via inetd, trading off response time for memory space.
   Any service called with any frequency should be run as a daemon.
Amanda is one of those one-offs in that it usually only gets invoked
once a day.

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Next Tapes are Offsite

2006-08-08 Thread Frank Smith
Jon LaBadie wrote:
 On Mon, Aug 07, 2006 at 02:37:17PM -0400, Ian Turner wrote:
 Marilyn,

 Amanda's tape selection policy is as follows.

 Consider the set of tapes T. We can partition the set into two disjoint 
 subsets A (the set of active tapes) and I (the set of inactive tapes). 
 Assuming I is nonempty, there exists a subset P of I, called the set of 
 preferred tapes. Note that T = A + I, and P is a weak subset of I.

 Amanda will only use tapes from I; active tapes are not considered for 
 overwriting. Also, tapes from P are preferred to other tapes in I; a tape 
 not 
 in P (but in I) will be used only if no tapes in P are available. If no 
 tapes 
 from I are available, then no tapes are used and Amanda will go into 
 degraded 
 mode.

 Tapes are assigned to each of the two sets as follows:
 -- Any labeled but unused tapes are in I and P. This includes unlabeled 
 tapes 
 if the label_new_tapes option is set.
 -- The most recently used tapecycle number of tapes is in A.
 -- Any remaining tapes are in I. The single least recently used of these is 
 also in P.

 This algorithm is applied from scratch any time a new tape is needed during 
 a 
 backup run. You can run the algorithm without running Amanda by doing 
 'amtape 
 taper'.

 What all of this means from a slightly less mathematical perspective is that 
 Amanda will not consider overwriting the tapecycle most recent tapes. If you 
 want to relax this restruction, just reduce tapecycle, and Amanda will 
 countenance the use of newer (more recently used) tapes.

 Alternatively, if you have a specific tape that you want Amanda to reuse, 
 just 
 relabel it, and it will be treated as a new tape.

 --Ian

 On Monday 07 August 2006 13:10, HUGHES Marilyn F wrote:
 We have a situation where the next  Amanda tapes that it is asking for
 are currently offsite.  It costs $75 for them to be retrieved so we
 don't want to do that.


 Besides we have available tapes here onsite.  Is there a way (a command
 or other way?) to force Amanda to select one of the tapes that we have
 onsite?  Does Amanda select from the top of the tapelist on down?
 
 
 Ian,
 
 Thanks for your description.  I was thinking of trying to put together
 a description of the tape selection algorithm myself.  But I didn't
 know some(most?) of the detail.  Certainly not in mathematical sets.
 
 One thing still up in the air (to me anyway) is final tape selection
 from within the tapelist and physical tape changer.  Your description
 gets to which tapes are eligible to be selected, but not which tape
 (or runtape number of tapes) among that set is ultimately chosen.
 
 Let me try to use my own terminology to describe your algorithm so
 that I'm sure I understand it.  I'd appreciate corrections to any
 mis-statements, in fact or in timing.  And, perhaps you could extend
 it by describing the final tape selection.
 
 From the entire tapelist, those marked no-reuse are eliminated
 from further consideration.  Only those marked reuse are considered.
 (is there any other tapelist classification?)
 
 The reuseable list is divided into previously used and never used
 (have a valid date stamp or have a 0 date stamp respectively).
 
 The previously used tapes are date-sorted and the most recently used
 tapecycle-1 of those are eliminated from further consideration.  This
 would be your active set that are reusable but can't be overwritten
 at this time as they fall within a tapecycle's number of tapes.
 
 Any remaining, previously used tapes, plus the labeled but never used
 tapes (if any), constitute the set of tapes eligible to be used for
 the next run.
 
 If what I've described is reasonably accurate, none of it is dependent
 on tapelist order nor availability in the tape changer.  So the physical
 device must be accessed before the final selection of a tape to use.
 
 Does amanda at this point look specifically for the next tape based on
 the order in the tapelist file?  (i.e. the last tape in the tapelist
 file that is on the eligible list)  Will it scan the entire changer
 looking for that specific tape?  Or will it start to scan the changer
 looking for any tape from the eligible list?  Or something else?
 
 Thanks.
 
 jl
If Amanda's behavior hasn't changed (I'm still using 2.4.5), it will
use the first non-active tape it finds in the changer.  It appears
to just load the next tape until it finds a non-active one, not look
for an unused one or find the oldest.
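
The partitioning Ian describes above can be sketched in a few lines of Python. This is a rough illustration of the description, not Amanda's actual code; the tape labels and the representation of tapes as (label, date) pairs are assumptions for the example.

```python
# Rough sketch of the A/I/P partitioning described above -- not Amanda's
# actual implementation.  A tape is (label, last_use_date); a date of 0
# means "labeled but never used".

def partition_tapes(tapes, tapecycle):
    used = sorted((t for t in tapes if t[1] != 0), key=lambda t: t[1])
    unused = [t for t in tapes if t[1] == 0]
    active = used[-tapecycle:] if tapecycle else []      # A: tapecycle most recent
    older = used[:-tapecycle] if tapecycle else used[:]  # remaining used tapes
    preferred = list(unused)                             # P: new tapes...
    if older:
        preferred.append(older[0])                       # ...plus least recently used
    inactive = unused + older                            # I: everything not in A
    return active, inactive, preferred

# Example: three used tapes, one labeled-but-unused, tapecycle 2.
tapes = [("DAILY-1", 20060801), ("DAILY-2", 20060802),
         ("DAILY-3", 20060803), ("DAILY-4", 0)]
a, i, p = partition_tapes(tapes, 2)
print([t[0] for t in a])  # ['DAILY-2', 'DAILY-3']  -- protected from overwrite
print([t[0] for t in p])  # ['DAILY-4', 'DAILY-1']  -- preferred for the next run
```
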

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Next Tapes are Offsite

2006-08-07 Thread Frank Smith
HUGHES Marilyn F wrote:
 We have a situation where the next  Amanda tapes that it is asking for
 are currently offsite.  It costs $75 for them to be retrieved so we
 don't want to do that. 
  
  
 Besides we have available tapes here onsite.  Is there a way (a command
 or other way?) to force Amanda to select one of the tapes that we have
 onsite?  Does Amanda select from the top of the tapelist on down?

If you have more tapes in your tapelist than your tapecycle,
Amanda will use the oldest available one.  You also always
have the option of labeling a new tape,and it will use that.

Frank

  
  
 Thanks for your help,
  
 Marilyn Hughes
 Multnomah County
 Technical Services
 4747 E. Burnside Street
 (503) 988-4635
  
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: notes

2006-08-02 Thread Frank Smith
Glenn English wrote:
 
 The backup works and verifies, but the report says:
 
   planner: disk zbox.slsware.lan:/usr/bin, estimate of level 2 failed.
   planner: disk zbox.slsware.lan:/var, estimate of level 1 failed.
   planner: disk zbox.slsware.lan:/home, estimate of level 1 failed.
   planner: disk zbox.slsware.lan:/boot, estimate of level 1 failed.
   planner: disk zbox.slsware.lan:/, estimate of level 1 failed.
 
 for every DLE on this host. It does level 0s, so things get backed up.
 And it just started; I didn't change anything. Any explanations?

Yes, you updated your packages and got tar 1.15.91, which changed
something related to --listed-incremental.  I believe there is a
current snapshot of Amanda that addresses that issue, but you might
want to just revert to a previous version of tar, as 1.15.91 also
has an issue with using --one-file-system in conjunction with the
--listed-incremental option that causes it to leak out of the
base filesystem.

Frank
 
 Debian Linux, testing; VERSION=Amanda-2.5.0p2; installed by apt-get.
 This is the Amanda host; these DLEs are local disks. The hosts on the
 nets are fine.
 
 Does that first entry mean that the level 1 worked but the 2 didn't?
 
 - --
 Glenn English
 [EMAIL PROTECTED]
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amrecover question

2006-07-28 Thread Frank Smith
Jeff Allison wrote:
 I have just upgraded my client using the binarys at zmanda.com and when 
 I try a recover I get
 
 [EMAIL PROTECTED] xinetd.d]# amrecover homes -s dalston.blackshaw.dyn.dhs.org
 AMRECOVER Version 2.5.1b1. Contacting server on 
 dalston.blackshaw.dyn.dhs.org ...
 NAK: amindexd: invalid service

Did you add the amindexd service into xinetd?  And if so, did you
remember to restart xinetd after you added it (I think a kill -HUP
doesn't work on xinetd, it needs a -USR2 signal on the boxes I've
used, but a stop/start will always work)?

Frank

 
 any ideas
 
 TIA
 
 Jeff


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Incorrect bahaviour causes backup loss - further update!!

2006-07-28 Thread Frank Smith

 Alan Pearson wrote:
 
 Now I'm faced with more difficulties because of this.

 Amanda is only dumping at level2 the machine it overwrote the only full
 backup of.

 How can I force a level 0 dump ?

 Cheers,

Pavel gave a response on 'how', but I'm confused as to the 'why'.
If the last full dump was overwritten, shouldn't Amanda be trying
to schedule a level 0 anyway, since there is no level 0 to base
an incremental on?

Frank


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Many thanks and more questions (amrestore/amfetchdump)

2006-07-28 Thread Frank Smith
Ronald Vincent Vazquez wrote:
 The next questions are, how will amanda back up a single volume exceeding
 5 TB (10-14 TB) and how long will it take to write it to tape?  Should we
 be asking an average of how long does amanda takes to collect and write 1
 TB of data to tape and multiply by X?  Ok, let say that our volume grows
 to 10 TB, how long would the cycle have to be to perform a level 0 of this
 bad boy?

The backup time for 10TB will probably be close to 10x the time for
1TB, assuming all the data is similar in number of files per gig (for
tar, for dump I don't think it matters) and compressibility.
   The real question is going to be the length of your backup window
in relation to your backup speed.  For data that size you may need
to run multiple drives in parallel with the RAIT driver, and even
that might not help if you can't read the disk fast enough to keep
up, depending on the data layout on the disk, plus all the usual
bus bottlenecks, etc..
   I would personally be leery of backing it up as a single entity,
as you lose Amanda's ability to keep daily tape usage balanced
by spreading the level 0s around to different days.   Also, what
happens when you hit an error 9TB into your 10TB backup, do you
end up with nothing (or at least nothing easy to recover from)?
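
As back-of-the-envelope arithmetic for the window problem: assuming linear scaling and, say, the ~100 GB/hr SDLT 600 rate quoted elsewhere in this thread (both are assumptions, real throughput depends on file counts, compressibility, and bus bottlenecks):

```python
# Back-of-the-envelope backup window -- assumes throughput scales linearly,
# which ignores seek-heavy filesystems, bus bottlenecks, etc.

def backup_hours(total_gb, gb_per_hour=100.0):
    return total_gb / gb_per_hour

for size_tb in (1, 5, 10):
    print(f"{size_tb:>2} TB at 100 GB/hr: {backup_hours(size_tb * 1000):.0f} hours")
# A 10 TB level 0 at that rate is ~100 hours, i.e. over four days.
```
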

Frank

 
 Have a great weekend everyone,
 
 /
 Ronald Vincent Vazquez
 Senior Unix Systems Administrator
 Senior Network Manager
 Christ Tabernacle Church Ministries
 http://www.ctcministries.org
 (301) 540-9394 Home
 (240) 401-9192 Cell
 
 For web hosting solutions, please visit:
 http://www.spherenix.com/
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Tape Spanning - 105 hour backup of 1.3 Tb

2006-07-21 Thread Frank Smith
 mismatch
   driver: dumping megawatt:/vol06 directly to tape
    driver: send-cmd time 1601.080 to taper: PORT-WRITE 00-2 megawatt feff9ffe07 /vol06 0 20060703 2097152 /amdump4 1048576
 
 And in the log file:
 
    INFO taper mmap failed (Not enough space): using fallback split size of 1048576kb to buffer megawatt:/vol06.0 in-memory
 
 The second DLE (/vol06) *WAS* dumped to tape in 1Gb chunks as per the
 log message. Similarly, DLE's 3-5 were backed up in 1Gb chunks.
 
 DLE 6 also fell back using a 1Gb memory buffer, but then failed with the
 following message in the log file:
 
    FATAL taper [EMAIL PROTECTED]: memory allocation failed (1073741824 bytes requested)
 
 TRY #3:
 ---
 We were a bit puzzled by the memory allocation failure above since the
 machine has lots of unused memory, but decided to go ahead with another
 test using:
 
 tape_splitsize 2 Gb
 fallback_splitsize 10Mb (i.e. the default value)
 
 Again, the first volume dumped using the disk buffer of 2Gb, but subsequent
 DLE's were split up into 10Mb chunks (about 150 thousand in all) over 4
 tapes. The entire backup took 105 hours or about 4.4 days to complete, which
 is about 10 times slower (in terms of Gb/hr not total time) than our normal
 backups (we normally get ~100Gb/hr from our SDLT 600, which is fairly close
 to the manufacturer's advertised transfer rate).
 
  Note that this (poor) performance is similar to that described by
  Paul Graf in his post of 29Jun2006 titled "77 hour backup of 850Gb".
 
 TRY #4:
 ---
 Given the completely unacceptable performance using a 10Mb splitsize, we
 decided to try compiling amanda 64-bit based on the assumptions that:
 
   - 32-bit limitations might have been the cause of both large
 disk and memory buffers failing
   - lack of a proper buffer was the root cause of the performance
 issue
 
 To compile 64-bit, we used the following gcc flags:
 
  setenv CFLAGS "-m64 -mcpu=ultrasparc3"
 
 along with buffer sizes as follows:
 
 tape_splitsize 2 Gb
 fallback_splitsize 1Gb
 
 Although the 64-bit code compiled and ran, it seemed to get confused over the
 buffer sizes. For example, the amdump PORT-WRITE record was:
 
    driver: send-cmd time 3.512 to taper: PORT-WRITE 00-1 megawatt feff9ffe07 /vol07 0 20060713 9007199254740992 /holdvol01/MEGABAK2_DISKBUFFER 4503599627380736
 
 This *seems* to indicate that the requested diskbuffer size is 9 exabytes
 (9 * 10^18 bytes) and that the requested fallback size is 4 exabytes despite
 the fact that we specified 2G/1G. I'm guessing that these numbers are because
 some portions of the code aren't 64-bit clean.
 
 Since these buffer sizes were not available, the backup didn't seem to be
 splitting any of the DLE's so we terminated it.
 
 MISC:
 -
 Following these attempts, we had a look at the disk buffering code
 and noted that:
 
   - the disk buffer seems to be re-created for every DLE
   - after creation, the entire buffer is zeroed out by writing 1024 byte
 blocks of zeroes to it
 
 Assuming that:
 
    - a single SCSI disk can maintain a throughput of ~50Mbyte/s
   - a reasonable split buffer size would be ~50Gb
 
 then this means that just zeroing the diskbuffer would take about half an
 hour per DLE. In our case (~40DLEs), this means that over 20 hours would
 be spent just zeroing the buffer file before even doing any real work!
 
 
 =
 Sean Walmsley [EMAIL PROTECTED]
 Nuclear Safety Solutions Ltd.  416-592-4608 (V)  416-592-5528 (F)
 700 University Ave M/S H04 J19, Toronto, Ontario, M5G 1X6, CANADA
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Amrecover help needed

2006-07-20 Thread Frank Smith
Anne Wilson wrote:
 On Thursday 20 July 2006 20:07, Joshua Baker-LePain wrote:
 'man amrecover' and specifically look at the 'setdisk' command.  Depending
 on what your DLE looks like, you probably need to 'setdisk /home'.
 
 Right - /home is the root directory of one DLE, and the required file is 
 in /home/anne/Documents/Spreadsheets.  According to the examples in 
 man:amrecover I should now be able to ls the file, but
 
 200 Disk set to /home.
  amrecover> ls -l bankrec.xls
 2006-07-20 nigel/
 2006-07-20 micky/
 2006-07-20 lost+found/
 2006-07-20 gillian/
 2006-07-20 david/
 2006-07-20 anne/
 2006-07-20 andy/
 2006-07-20 amanda/
 2006-07-17 amanda.exclude~
 2006-07-17 amanda.exclude
 2006-07-20 .Trash-0/
 2006-07-20 .
 Invalid command - syntax error

Not sure where this error is coming from, do you have a filename containing
special characters in that directory?

  amrecover> setdisk /home/anne/Documents
  501 Disk borg:/home/anne/Documents is not in your disklist.
  amrecover> setdisk /home
  200 Disk set to /home.
  amrecover> 
 
 What is the next step?
setdate (if you want to recover from a date previous to the last run)
cd anne/Documents/Spreadsheets
ls  (if you want to see what is there)
add yourfilename
(other add, cd, and ls as needed to select additional files)
extract   (make sure you are in the directory you want it restored to,
   you can first do an lcd to change destination directory)

Keep in mind that if you restore a directory it will clear the dest
dir before the restore, so newer files will disappear.  It's safest
to always restore to a scratch dir and move the files afterwards.

Frank

 
 Anne


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Amrecover help needed

2006-07-20 Thread Frank Smith
Joshua Baker-LePain wrote:
 On Thu, 20 Jul 2006 at 9:59pm, Anne Wilson wrote
 
 On Thursday 20 July 2006 21:44, Frank Smith wrote:
 Keep in mind that if you restore a directory it will clear the dest
 dir before the restore, so newer files will disappear.  It's safest
 to always restore to a scratch dir and move the files afterwards.

 That's what I want to do.  How do I tell it where to put the recovered file?
 
 It goes into $CWD from which you launched amrecover.

If you forget to change to your desired destination directory before
starting amrecover, you can use the lcd command to change to another
destination directory within amrecover before you give the extract
command.  Sometimes useful if you have spent some time picking and
choosing files before remembering you're not where you want them
restored to and don't want to start over.

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: A script to test GNU tar exclude patterns

2006-07-19 Thread Frank Smith
Gene Heskett wrote:
 On Wednesday 19 July 2006 01:20, Frank Smith wrote:
 Olivier Nicole wrote:
 Hi,

  I have put together the following script
 http://www.cs.ait.ac.th/~on/testgtar to help testing exclude
 patterns in GNU tar.

 This is a simple Perl script, that should run on any installation;
 but I'd like to receive comments about the way I could improve it
 (this not working being first candidate for improvement :)
  I haven't had a chance to test the script yet, but since I ran across
  this thread from the Debian developers list about changes in GNU tar
  http://lists.debian.org/debian-devel/2006/06/msg01108.html (the default
  pattern matching action changed in tar 1.19), it is important for everyone to
  check to be sure your includes/excludes still do what they used to.

 Frank
 
  tar-1.19?  What happened to 1.16 thru 1.18?  It's normally about 3 years 
  per whole point increment...

Not sure what I was thinking while I was typing, that should be
tar 1.15.91

Frank


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: Restoring from tape when Amanda server failed

2006-07-19 Thread Frank Smith
 Joshua Baker-LePain [EMAIL PROTECTED] wrote:
  On Wed, 19 Jul 2006 at 8:12am, gil naveh wrote

  I have to restore from our tape drive, but our Amanda server  that 
 runs on Solaris 9 has failed.
  Any suggestion/ideas on how to recover files from tapes are mostly  
 welcome.

 http://www.amanda.org/docs/restore.html
 man amrestore

 This is very well explained in multiple (obvious) places.


gil naveh wrote:
 Thanks for the help.
   I am familiar with the Amrestore command.
  But the problem I am facing is that the Amanda server, which also holds 
 other applications, crashed. So I have to restore data from another server - 
 I have Solaris 9 and/or Solaris 10 servers that I can connect to the 
 tape drive...
   I also saved the configuration files of the amanda server.
   
   Is there a way to directly connect to the tape drive and use unix commands 
 to restore data from it?
   Or any other suggestions...
   

As was previously posted, http://www.amanda.org/docs/restore.html
Look at item 6.iii

Also look at http://www.amanda.org/docs/using.html#restoring_without_amanda

Frank


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: A script to test GNU tar exclude patterns

2006-07-18 Thread Frank Smith
Olivier Nicole wrote:
 Hi,
 
  I have put together the following script
 http://www.cs.ait.ac.th/~on/testgtar to help testing exclude patterns
 in GNU tar.
 
 This is a simple Perl script, that should run on any installation; but
 I'd like to receive comments about the way I could improve it (this
 not working being first candidate for improvement :)
 
I haven't had a chance to test the script yet, but since I ran across
this thread from the Debian developers list about changes in GNU tar
http://lists.debian.org/debian-devel/2006/06/msg01108.html (the default
pattern matching action changed in tar 1.19), it is important for everyone to
check to be sure your includes/excludes still do what they used to.

Frank

-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


planner: disk xxx:/, estimate of level N failed

2006-07-12 Thread Frank Smith
In the last few days I've started seeing a couple of messages like
the above in the NOTES section of my daily reports.  The DLEs in
question do get backed up, but it is a level 0 each night. Originally
it was just one server, now it is two.
   Both are running 2.5.0p2-1 Debian etch packages, one is x86, the
other is arm.   A third amd64 machine is still working fine so far.
The appearance of the errors may have coincided with package updates,
but I'm not sure (although they have been 2.5.x for awhile).
   Since I've been running Amanda for years, my first thought was
just to check the /tmp/amanda/*debug files, but was surprised to
find that those files don't exist on any of the 2.5.0 machines,
all that is in /tmp/amanda are selfcheck*exclude, sendbackup*exclude,
and sendsize*exclude files.
   It's not critical, these are my home machines and I have the
space to handle nightly level 0s, but I had been considering an
upgrade of my work machines to 2.5 from 2.4.5 and am now concerned
that this might be a bug that needs fixing before I upgrade. The
estimate error may be just a Debian package thing, either in the
Amanda package or possibly in tar, but evidently the debug files have
been missing longer than the estimate error.

Any debug suggestions?

Frank


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amrecover failures

2006-07-10 Thread Frank Smith
Jerlique Bahn wrote:
 Hello,
 
 I have recently configured Amanda to backup our data.  I am now trying to
 restore some of the data that I have backed up, and I am having problems
 doing so.  The client and server are the same machine. I have chg-manual
 configured so that the first and last slot are both 1. I am running
  amrecover as root. I am guessing that it's because the changer thinks that it
  needs a different tape, but I cannot work out why.
 
 Any suggestions?
 
 amrecover [cut]
  amrecover> extract
 
 Extracting files using tape drive /dev/nsa0 on host 10.0.0.2.
 The following tapes are needed: backup-1
 backup-5
 
 Restoring files into directory /usr/recover
 Continue [?/Y/n]? Y
 
 Extracting files using tape drive /dev/nsa0 on host 10.0.0.2.
 Load tape backup-1 now
 Continue [?/Y/n/s/t]? Y
 EOF, check amidxtaped.timestamp.debug file on 10.0.0.2.
 amrecover: short block 0 bytes
 UNKNOWN file
 amrecover: Can't read file header
 extract_list - child returned non-zero status: 1
 Continue [?/Y/n/r]?
 
 # ammt -f /dev/nsa0 status
 /dev/nsa0 status: ONLINE ds == 0x0001 er == 0x
 
 # amtape backup reset
 changer: got exit: 0 str: 1 /dev/nsa0
 amtape: changer is reset, slot 1 is loaded.
 
 
 ## this is from /tmp/Amanda/amidxtaped
 Looking for tape backup-1...
 changer: got exit: 0 str: 0 1 1
 changer_query: changer return was 1 1
 changer_query: searchable = 0
 changer_find: looking for backup-1 changer is searchable = 0
 changer: got exit: 2 str: /usr/local/amanda/libexec/chg-manual: cannot
 create /dev/tty: Device not configured

I'm not very familiar with the chg-manual script, but as a wild guess
you might have the wrong console device specified for your OS.

 amidxtaped: could not load slot /usr/local/amanda/libexec/chg-manual::
 cannot create /dev/tty: Device not configured
 amidxtaped: time 0.186: could not load slot
 /usr/local/amanda/libexec/chg-manual:: cannot create /dev/tty: Device not
 configu
 red
 amidxtaped: time 0.186: pid 5948 finish time Mon Jul 10 23:01:39 2006
 
 
 ## This is from the changer.debug
 MT - /usr/local/amanda/sbin/ammt -f
 DD - /usr/local/amanda/sbin/amdd
 Args - -slot next
  - rewind /dev/nsa0
 /dev/nsa0 rewind failed: Permission denied

Looks like this is your main problem, you don't have permissions to
the tape drive.  Odd, since you say you are running amrecover as
root.  Is the tape actually loaded in the drive?  Can you (from the
command line) run 'mt -f /dev/nsa0 rewind' successfully?

Frank

  - status /dev/nsa0
 /dev/nsa0 status failed: Permission denied
  - loaded 
 
 
 
 JB
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: port NNNN not secure(newbie)

2006-07-10 Thread Frank Smith
Mike Allen wrote:
 Jon LaBadie wrote:
 On Mon, Jul 10, 2006 at 02:38:31PM -0700, Mike Allen wrote:
   
 Jon LaBadie wrote:
 
 On Mon, Jul 10, 2006 at 10:33:27AM -0700, Mike Allen wrote:
  
   
 I'm using Amanda 2.4.5 with FreeBSD 5.4.

 1.  Backups on the NATed side of our firewall work fine.

 2.  Our tape server is on the NATed side of firewall.

 3.  Backups through the firewall fail when I run AMCHECK.
The error message is port  not secure.

 I have attempted to research the environment variables 'tcpportrange', 
 'udpportrange' and
 'portrange' and configure our firewall appropriately  but with no success.

 Please tell me what I am doing wrong.  Any help would be appreciated.


 
 There are some user-supplied responses to that question in the FAQ-o-matic:

http://amanda.sourceforge.net/fom-serve/cache/14.html

 See if any of them apply.

 Other info might be available at zmanda.com

  
   
 Jon:

  Thanks for your reply. I had previously checked out Faq-O-Matic and found 
  nothing new or useful to me for this problem. The problem may be in my 
  perceptions about this problem.

 Regarding the file 'site.config'. I would like to know the difference 
 between the
 'portrange' parameter and the 'tcpportrange' parameter. Am I supposed to 
 use both?
 Does 'tcpportrange' override 'portrange' or what?

 I am afraid that much 'handholding' may be required to get me over this 
 problem!

 
 IIRC (I've not used port specification) portrange is the old synonym
 for tcpportrange.  Assuming I'm correct, don't use portrange.

 But do use BOTH udpportrange and tcpportrange.

 Again, suggested by someone who has not used them.

Mine is built with:
--with-tcpportrange=4,40030 --with-udpportrange=920,940
I believe the tcp ports must be >1024 and the udp ports <1024
or you will get various errors and/or warnings.  You might want
to read the PORT.USAGE file in the docs directory to get an idea
of how big a range you will need.


 jl
   
 Assuming I have configured the TAPEHOST correctly using both 
 'tcpportrange' and 'udpportrange',
 do I do the identical configuration on the CLIENT with respect to 
 'tcpportrange' and 'udpportrange'?

Yes, they both need to be configured the same or they won't talk
to each other.

 I cannot find anything about this in the documentation
 
 I had another thought, I am using a Netgear model FSV338 for my 
 firewall. (I use the appropriate
 settings for port-forwarding.) Could this be the problem?

A potential problem, depending on how its configured.  Keep in
mind that unless the firewall is Amanda aware it may block the
reverse connections from the client that go to different ports
on the server than the ones the server initially used.  That is
the purpose of the portrange configuration, to limit the number
of ports you need to open on your firewall between client and
server.  iptables (netfilter) has a module that is Amanda-aware
and can allow the reverse connections as a 'related' match, even
if you don't use portrange.

 
 How can I check this out?

Besides looking in the Amanda logs for timeouts on client and server,
tcpdump or other packet capturing tools will show what is happening
where.

Frank

 
 Mike
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: amrecover failures

2006-07-10 Thread Frank Smith
Jerlique Bahn wrote:
 Jerlique Bahn wrote:
  I have recently configured Amanda to back up our data.  I am now trying
  to restore some of the data that I have backed up, and I am having
  problems doing so.  The client and server are the same machine. I have
  chg-manual configured so that the first and last slot are both 1. I am
  running amrecover as root. I am guessing that it's because the changer
  thinks that it needs a different tape, but I cannot work out why.
 # ammt -f /dev/nsa0 status
 /dev/nsa0 status: ONLINE ds == 0x0001 er == 0x

 # amtape backup reset
 changer: got exit: 0 str: 1 /dev/nsa0
 amtape: changer is reset, slot 1 is loaded.

 ## this is from /tmp/Amanda/amidxtaped
 Looking for tape backup-1...
 changer: got exit: 0 str: 0 1 1
 changer_query: changer return was 1 1
 changer_query: searchable = 0
 changer_find: looking for backup-1 changer is searchable = 0
 changer: got exit: 2 str: /usr/local/amanda/libexec/chg-manual: cannot
 create /dev/tty: Device not configured
 I'm not very familiar with the chg-manual script, but as a wild guess
 you might have the wrong console device specified for your OS.
 I don't think so, because the commands that are failing from within
 chg-manual work if I type them in eg:

  SRV# echo insert tape into slot $1 and press return > /dev/tty
  insert tape into slot  and press return
  SRV# read ANSWER < /dev/tty

 SRV#

 amidxtaped: could not load slot /usr/local/amanda/libexec/chg-manual::
 cannot create /dev/tty: Device not configured
 amidxtaped: time 0.186: could not load slot
 /usr/local/amanda/libexec/chg-manual:: cannot create /dev/tty: Device
 not
 configu
 red
 amidxtaped: time 0.186: pid 5948 finish time Mon Jul 10 23:01:39 2006


 ## This is from the changer.debug
 MT - /usr/local/amanda/sbin/ammt -f
 DD - /usr/local/amanda/sbin/amdd
 Args - -slot next
  - rewind /dev/nsa0
 /dev/nsa0 rewind failed: Permission denied
 Looks like this is your main problem, you don't have permissions to
 the tape drive.  Odd, since you say you are running amrecover as
 root.  Is the tape actually loaded in the drive?  Can you (from the
 command line) run 'mt -f /dev/nsa0 rewind' successfully?
 I agree, however from the command line I have full access:
 On poking around my installation, the changer scripts are all
 owned by my Amanda user, not root.  Can you 'su -' to your Amanda
 user and still run these commands?
 
 Ahh very good point!! I've been running it as root, and this is what is in
 the Amanda.conf file but I compiled with user=backup.  In inetd I'm running
 the services as user root.
 
  SRV# su backup -c "/usr/local/amanda/sbin/ammt -f /dev/nsa0 rewind"
  /dev/nsa0 rewind failed: Permission denied 
 
  SRV# la /usr/local/amanda/sbin/ammt
  -rwxr-xr-x  1 backup  backup  14857 Jul 10 14:13 /usr/local/amanda/sbin/ammt 
 
  SRV# la /usr/local/amanda/sbin/amdd
  -rwxr-xr-x  1 backup  backup  13153 Jul 10 14:13 /usr/local/amanda/sbin/amdd 
 
 What do you think I should do?

Easiest fix is probably add your user 'backup' to whatever group has
write permissions to your tape drive.  I believe the docs recommend
making your backup user a member of whatever group owns the devices
on your OS to avoid problems such as this.  You may also run into
permissions problems on your backups as well.  The tar wrapper is
suid root so it won't care, but if you use dump it needs access to
the disk device and can fail on permissions.
   Not sure which fix I would recommend. If adding backup to the
group owning the tape device solves your problem, then go with that,
but if you foresee  future issues with dump (I'm assuming the backup
group doesn't own your disks) you might want to go through the pain
of changing backup's group and chgrp-ing all Amanda's files.

Frank
 
 JB
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501


Re: ipv6 question

2006-07-03 Thread Frank Smith
Rodrigo Ventura wrote:
 Hello,
 
 I have a simple question: does amanda support dumping/restoring over IPv6? 
 
 I have a server behind a NATbox that I need to backup periodically. IPv6 
 would 
 be the neat solution. Otherwise, are there IPv4 solutions?

I don't know about the IPv6 question.  It depends on how the socket
code is written.
   NAT certainly works.  I back up quite a few remote servers which
only have private IPs, and my Amanda server appears to those clients
as a private IP that is different from its real IP, NATted through
a VPN.  Works fine as long as names resolve properly based on who's
asking (i.e., my local clients resolve the server to its real IP,
while the remote clients resolve it to its NATted IP).
 
 I read somewhere about iptables having a module for amanda. How does it work?

It just sees the reply packets (that occur on different ports) as
related to the original request and allows them to pass through. It's
similar to the way the FTP module works.

Frank

 
 Cheers,
 
 Rodrigo
 


-- 
Frank Smith  [EMAIL PROTECTED]
Sr. Systems Administrator   Voice: 512-374-4673
Hoover's Online   Fax: 512-374-4501

