Re: reconstructing file hierarchy from indices

2018-01-24 Thread Michael Stauffer
>
> On Fri, Jan 19, 2018 at 1:07 PM, Jean-Louis Martineau <
> jmartin...@carbonite.com> wrote:
>
>> On 19/01/18 12:51 PM, Michael Stauffer wrote:
>>
>> 
>
>> > When I look at the index file for a DLE from the incremental backup, I
>> > see a lot of directory names with no filenames listed. It seems like
>> > the index may hold all directories within the DLE, even if no files
>> > were backed up from that dir during the run? Is that right?
>>
>
>
> yes
>>
>
>
>
>> > If so, I can recreate the directory hierarchy at the time of the
>> > incremental backup just from the index files from the incremental
>> backup?
>> >
>> Yes, do a mkdir for each directory in the index file (the entries that
>> end with /).
>>
>
I'm wondering now if I can also create a full file hierarchy listing from
the incremental dump indices? Amanda must somehow know which files from a
level 0 dump were deleted at the time of a subsequent incremental dump. I'd
like to check my restored volume against a list of files that were present
at the time of the most recent incremental dump.
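
As a first cut, here's a minimal shell sketch of both ideas: rebuilding the
directory tree and dumping everything a single index file names. The index
path and layout are assumptions (config/host/DLE/date_level.gz); adjust for
your setup:

  IDX=/etc/amanda/jet1/index/myhost/mydle/20180118_1.gz   # hypothetical path
  # Directory entries in the index end with '/'; recreate them.
  zcat "$IDX" | grep '/$' | while IFS= read -r d; do
      mkdir -p "/restore/root$d"
  done
  # Full sorted listing (dirs and files) recorded in that dump:
  zcat "$IDX" | sort > /tmp/files-at-dump.txt

For the file-level question: an incremental's index only lists files actually
dumped in that run, and as far as I know deletions aren't recorded in the
index itself, so merging the level 0 index with later incrementals would
overstate what still existed at the time of the last incremental.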

-M


Re: reconstructing file hierarchy from indices

2018-01-19 Thread Michael Stauffer
On Fri, Jan 19, 2018 at 1:07 PM, Jean-Louis Martineau <
jmartin...@carbonite.com> wrote:

> On 19/01/18 12:51 PM, Michael Stauffer wrote:
>
> 

> > When I look at the index file for a DLE from the incremental backup, I
> > see a lot of directory names with no filenames listed. It seems like
> > the index may hold all directories within the DLE, even if no files
> > were backed up from that dir during the run? Is that right?
>


yes
>



> > If so, I can recreate the directory hierarchy at the time of the
> > incremental backup just from the index files from the incremental backup?
> >
> Yes, do a mkdir for each directory in the index file (the entries that
> end with /).
>
> Jean-Louis
>

Great! Thanks Jean-Louis.

-M



reconstructing file hierarchy from indices

2018-01-19 Thread Michael Stauffer
amanda 3.3.9

Hi,

I've got a large xfs filesystem that had trouble and has undergone repair.
Some inodes were disconnected from filenames. I've mostly been able to
identify what's what from contents of the dir trees now named as inodes in
lost+found. But I'd like to check the full file hierarchy to see what may
be missing so I can do only targeted restore operations if needed.

I have an incremental backup performed just before the fs repair operation
that moved some dirs into lost+found. I have a full backup from a couple
months before that.

When I look at the index file for a DLE from the incremental backup, I see
a lot of directory names with no filenames listed. It seems like the index
may hold all directories within the DLE, even if no files were backed up
from that dir during the run? Is that right? If so, I can recreate the
directory hierarchy at the time of the incremental backup just from the
index files from the incremental backup?

-M


Re: moving to amanda 3.3.7p1

2015-02-26 Thread Michael Stauffer
On Thu, Feb 26, 2015 at 8:37 AM, Toomas Aas toomas@raad.tartu.ee
wrote:

 Wed, 25 Feb 2015 kirjutas Michael Stauffer mgsta...@gmail.com:


 Directories:
   Application: /usr/local/libexec/amanda/application
   Configuration: /usr/local/etc/amanda
   GNU Tar lists: /usr/local/var/amanda/gnutar-lists
   Perl modules (amperldir): /usr/local/share/perl5
   Template and example data files (amdatadir): /usr/local/share/amanda
   Temporary: /tmp/amanda
 WARNINGS:
   no user specified (--with-user) -- using 'amanda'
   no group specified (--with-group) -- using 'backup'


 These are different dir locations than the amanda 3.3.4 install I've been
 using until now. Is this because I've built from source, or is that a
 change affecting the RPMs too? (Not that it seems likely they'd differ, I
 figure.)


 As you deduced, this is probably because of different installation method
 (source vs rpm). Whoever supplied the RPMs that you previously used, had
 them set up with different configuration settings.


 The default user and group is now amanda:backup instead of
 amandabackup:disk. If I want to use this server with existing clients
 running 3.3.4, is that OK? Or should I rebuild with user amandabackup:disk?
 Or upgrade clients to 3.3.7?


 The user and group used on the server do not need to be the same as on
 clients. You can continue using the existing clients.


Thanks everyone for the replies. I'll probably just go with what I have
since it doesn't seem to matter.

-M


moving to amanda 3.3.7p1

2015-02-25 Thread Michael Stauffer
Hi,

I just built amanda 3.3.7p1 on a CentOS 7 box, to be used as our new backup
server with a new tape robot. The config step output this:

Directories:
  Application: /usr/local/libexec/amanda/application
  Configuration: /usr/local/etc/amanda
  GNU Tar lists: /usr/local/var/amanda/gnutar-lists
  Perl modules (amperldir): /usr/local/share/perl5
  Template and example data files (amdatadir): /usr/local/share/amanda
  Temporary: /tmp/amanda
WARNINGS:
  no user specified (--with-user) -- using 'amanda'
  no group specified (--with-group) -- using 'backup'

Two things:

These are different dir locations than the amanda 3.3.4 install I've been
using until now. Is this because I've built from source, or is that a change
affecting the RPMs too? (Not that it seems likely they'd differ, I figure.)

The default user and group is now amanda:backup instead of
amandabackup:disk. If I want to use this server with existing clients
running 3.3.4, is that OK? Or should I rebuild with user amandabackup:disk?
Or upgrade clients to 3.3.7?
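
If rebuilding to match the old rpm's user and group ever becomes necessary,
the warnings above suggest something like the following (the --with-user and
--with-group flags come straight from the configure output; the path flags
are just illustrative assumptions):

  ./configure --with-user=amandabackup --with-group=disk \
              --prefix=/usr --sysconfdir=/etc
  make
  sudo make install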

Thanks.

-M


amanda 3.3.7 home dir

2015-02-25 Thread Michael Stauffer
Hi again,

As in my previous email, I built 3.3.7 from source on Centos 7. The config
script choked on the missing group 'backup', so I manually created user amanda
and group backup. Now the amanda home dir is /home/amanda. Does that
matter? Where should it go otherwise? I'm all confused now with these new
dir locations. :)
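
For anyone else doing this by hand, a sketch of creating them with a
system-style home dir instead of /home/amanda (the /var/lib/amanda location
is an assumption, borrowed from where the Zmanda rpms put the backup user's
home):

  sudo groupadd backup
  sudo useradd -m -d /var/lib/amanda -g backup -s /bin/bash amanda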

-M


Re: CentOS 7 support?

2015-01-13 Thread Michael Stauffer
OK thanks. I'll build from source and run 3.3.6. Looking at a few amanda
sites there was no mention of CentOS/RHEL 7 among the supported systems, so I
thought maybe there was some issue with that.

-M

On Tue, Jan 13, 2015 at 9:57 AM, Markus Iturriaga Woelfel 
mitur...@eecs.utk.edu wrote:

 Also as an aside, we like to run the latest Amanda and have had no
 problems simply rebuilding the source RPMs from
 http://www.zmanda.com/download-amanda.php for use in our internal RHEL7
 repo. RHEL7 is currently providing 3.3.3 and we’re running 3.3.5 on our
 RHEL7 systems (with an upgrade to 3.3.6 planned).
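
 For reference, that rebuild is roughly the following (the .src.rpm filename
 is hypothetical; use whatever the download page currently offers):

   rpmbuild --rebuild amanda-3.3.6-1.rhel7.src.rpm
   # the binary rpms then land under ~/rpmbuild/RPMS/<arch>/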

 Markus
  On Jan 12, 2015, at 5:50 PM, Jason L Tibbitts III ti...@math.uh.edu
 wrote:
 
  MS == Michael Stauffer mgsta...@gmail.com writes:
 
  MS Hi all, I'm setting up a new amanda server. Am I right in seeing
  MS that RHEL/Centos 7 is not yet supported? If so I'll just install
  MS Centos 6 as before. Thanks.
 
  Why do you think it wouldn't be supported?  Just yum install
  amanda-server.
 
  - J

 ---
 Markus A. Iturriaga Woelfel, IT Administrator
 Department of Electrical Engineering and Computer Science
 University of Tennessee
 Min H. Kao Building, Suite 424 / 1520 Middle Drive
 Knoxville, TN 37996-2250
 mitur...@eecs.utk.edu / (865) 974-3837
 http://twitter.com/UTKEECSIT


CentOS 7 support?

2015-01-12 Thread Michael Stauffer
Hi all,

I'm setting up a new amanda server. Am I right in seeing that RHEL/Centos 7
is not yet supported? If so I'll just install Centos 6 as before. Thanks.

-M


Re: can't connect to client

2014-12-22 Thread Michael Stauffer
Thanks for the reply Joi. Nothing you suggested seemed to be an issue, but
I did just restart xinetd on the client and it's working now! Very strange,
because I'd swear I tried that before. But who knows. It's working now
which is great. Thanks again.
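
For anyone who lands here later: the xinetd side is worth eyeballing after a
UID change. A typical client entry looks roughly like this (a sketch; the
service name and paths vary by install):

  # /etc/xinetd.d/amanda -- client side, bsdtcp auth
  service amanda
  {
      disable       = no
      socket_type   = stream
      protocol      = tcp
      wait          = no
      user          = amandabackup
      group         = disk
      server        = /usr/libexec/amanda/amandad
      server_args   = -auth=bsdtcp amdump
  }

After editing it, 'service xinetd restart' -- which is effectively what fixed
things here.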

-M

On Wed, Dec 17, 2014 at 6:40 PM, Joi L. Ellis jlel...@pavlovmedia.com
wrote:

  With the NIS change… did that involve changing the hostname or IPs the
 server appears to be using as seen from the client?  Is your xinetd using
 tcpwrappers? Do you need to update /etc/hosts.allow, hosts.deny, and/or
 ~amanda/.amandahosts?  Is the new Amanda UID reflected properly in
 xinetd.conf?  Perhaps bouncing xinetd may suffice to solve the issue, if
 you haven’t rebooted the whole host since the NIS changes.



 Is there an iptables  firewall on the client that needs updating?
 AppArmor or SELinux updates?





 *From:* owner-amanda-us...@amanda.org [mailto:
 owner-amanda-us...@amanda.org] *On Behalf Of *Michael Stauffer
 *Sent:* Wednesday, December 17, 2014 16:31
 *To:* amanda-users@amanda.org
 *Subject:* can't connect to client



 Amanda 3.3.4

 CentOS 6.5 (server and client)

 Hi,



 I'm having trouble connecting to a client. It used to work. The only thing
 I can think of that's changed since the last time it worked is that the NIS
 server changed. But that seems to be working fine and the amandabackup user
 is available on the client (the server is not using NIS).



 Note that the server works with other clients just fine.



 amcheck on the client shows this:



 WARNING: tesla: selfcheck request failed: recv error: Connection reset by
 peer



 The amandabackup uid changed with the NIS server change, so I went through
 these dirs on the client to make sure files were owned by amandabackup:disk
 (except where they should be owned by root):



 /usr/sbin

 /etc/amanda

 /usr/libexec/amanda

 /var/lib/amanda



 xinetd is running on the client.



 'netstat -a | grep amanda' shows:

 tcp        0      0 *:amanda                *:*                     LISTEN



 Here's the tail of /var/log/amanda/server/jet1/amcheck.date.debug from
 the server:



 ...

 Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: make_socket
 opening socket with family 2

 Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: connect_port:
 Try  port 571: available - Success

 Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: connected to
 170.212.169.49:10080

 Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: our side is
 0.0.0.0:571

 Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: try_socksize:
 send buffer size is 65536

 Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: try_socksize:
 receive buffer size is 65536

 Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: tcpm_send_token:
 data is still flowing

 Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients:
 security_stream_seterr(0x1eeb050, recv error: Connection reset by peer)

 Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients:
 security_seterror(handle=0x1ee6e50, driver=0x3efe86fce0 (BSDTCP) error=recv
 error: Connection reset by peer)

 Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients:
 security_close(handle=0x1ee6e50, driver=0x3efe86fce0 (BSDTCP))

 Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients:
 security_stream_close(0x1eeb050)

 Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck: pid 1157 finish time Wed
 Dec 17 17:27:50 2014



 Can anyone help me figure this out? Thanks!



 -M



can't connect to client

2014-12-17 Thread Michael Stauffer
Amanda 3.3.4
CentOS 6.5 (server and client)
Hi,

I'm having trouble connecting to a client. It used to work. The only thing
I can think of that's changed since the last time it worked is that the NIS
server changed. But that seems to be working fine and the amandabackup user
is available on the client (the server is not using NIS).

Note that the server works with other clients just fine.

amcheck on the client shows this:

WARNING: tesla: selfcheck request failed: recv error: Connection reset by
peer

The amandabackup uid changed with the NIS server change, so I went through
these dirs on the client to make sure files were owned by amandabackup:disk
(except where they should be owned by root):

/usr/sbin
/etc/amanda
/usr/libexec/amanda
/var/lib/amanda
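
Something like this does that sweep in one pass (the old numeric UID is a
hypothetical placeholder):

  OLD_UID=501   # hypothetical: the pre-NIS amandabackup uid
  find /etc/amanda /usr/libexec/amanda /var/lib/amanda /var/log/amanda \
      -uid "$OLD_UID" -exec chown amandabackup:disk {} +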

xinetd is running on the client.

'netstat -a | grep amanda' shows:
tcp        0      0 *:amanda                *:*                     LISTEN

Here's the tail of /var/log/amanda/server/jet1/amcheck.date.debug from
the server:

...
Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: make_socket
opening socket with family 2
Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: connect_port: Try
 port 571: available - Success
Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: connected to
170.212.169.49:10080
Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: our side is
0.0.0.0:571
Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: try_socksize:
send buffer size is 65536
Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: try_socksize:
receive buffer size is 65536
Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients: tcpm_send_token:
data is still flowing
Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients:
security_stream_seterr(0x1eeb050, recv error: Connection reset by peer)
Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients:
security_seterror(handle=0x1ee6e50, driver=0x3efe86fce0 (BSDTCP) error=recv
error: Connection reset by peer)
Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients:
security_close(handle=0x1ee6e50, driver=0x3efe86fce0 (BSDTCP))
Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck-clients:
security_stream_close(0x1eeb050)
Wed Dec 17 17:27:50 2014: thd-0x1a0e4b0: amcheck: pid 1157 finish time Wed
Dec 17 17:27:50 2014

Can anyone help me figure this out? Thanks!

-M


Re: amrecover - auto-reply to the load tape prompt?

2014-11-06 Thread Michael Stauffer
Thanks Chris! I'll try this out.

So it seems with '-d changer', 'changer' is a keyword that indicates to
amrecover that the tape device is a changer, rather than being the name of
the defined tape device?
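
So in full, the invocation would be something like this (the config and
server names are the ones from my setup; adjust to taste):

  amrecover jet1 -s cback -t cback -d changer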

-M

On Wed, Nov 5, 2014 at 12:28 PM, Chris Ritson chris.rit...@newcastle.ac.uk
wrote:

 I use amrecover -s ... -t ... -d changer and then so long as all the
 tapes identified later are in the changer (and in the right order if
 gravity fed) then I don't get prompted. This does mean starting the
 amrecover run and then leaving it waiting for input while I fetch and load the
 necessary tapes. Changer is the critical word.


 Chris Ritson, Newcastle University IT and School Safety Officer

 Room:   707, Claremont Tower
 Phone:  8175
 Mail:   chris.rit...@ncl.ac.uk

  -Original Message-
  From: owner-amanda-us...@amanda.org [mailto:owner-amanda-
  us...@amanda.org] On Behalf Of Michael Stauffer
  Sent: 05 November 2014 16:56
  To: amanda-users@amanda.org
  Subject: amrecover - auto-reply to the load tape prompt?
 
  Amanda 3.3.4
 
  Hi,
 
  I ran an amrecover operation yesterday that required 3 tapes. I have a
 tape
  changer, but still was queried each time the next tape was required, i.e.
 
Load tape DMP014 now
Continue? [Y/n/t]:
 
  Is there a way to automatically reply to this, so the recover operation
 can
  proceed on its own? This would be even more useful when doing a large
  recovery.
 
  Thanks
 
  -M



amdump doing level 0 unexpectedly, and weird level 1 estimates

2014-11-06 Thread Michael Stauffer
amanda 3.3.4

Hi,

I'm getting weird amdump estimates for DLE's. I'm using amanda to create
periodic archive backups that I take offsite, so the dump period is 90
days. Recently for the picsl-cluster host I've also wanted some level 1
backups before I make major changes to the system. Trying to run the next
dump, I'm getting the DLE's showing up as level 0 dumps even though the
'due' command shows there's still 63 days before the next level 0. Below is
various info - can anyone help me understand what's going on? Chances are
I'm not understanding something properly. Thanks!

[amandabackup@cback ~]$ amadmin jet1 due picsl-cluster
Due in 63 days: picsl-cluster:home-0-a
Due in 63 days: picsl-cluster:home-apouch
Due in 63 days: picsl-cluster:home-avants
Due in 63 days: picsl-cluster:home-b
Due in 63 days: picsl-cluster:home-c
...

run amdump:

picsl-cluster:home-0-a  0   206g estimate done
picsl-cluster:home-apouch   0   404g estimate done
picsl-cluster:home-avants   0   386g estimate done
picsl-cluster:home-b0   327g estimate done
picsl-cluster:home-c0   502g estimate done
...

Make sure there wasn't a force outstanding:

[amandabackup@cback ~]$ amadmin jet1 unforce picsl-cluster
amadmin: no force command outstanding for picsl-cluster:home-0-a, unchanged.
amadmin: no force command outstanding for picsl-cluster:home-apouch,
unchanged.
amadmin: no force command outstanding for picsl-cluster:home-avants,
unchanged.
amadmin: no force command outstanding for picsl-cluster:home-b, unchanged.
amadmin: no force command outstanding for picsl-cluster:home-c, unchanged.
...

Run a force-level-1, then amdump. This shows the same estimates!

picsl-cluster:home-0-a  1   206g estimate done
picsl-cluster:home-apouch   1   404g estimate done
picsl-cluster:home-avants   1   386g estimate done
picsl-cluster:home-b1   327g estimate done
picsl-cluster:home-c1   502g estimate done
...

[amandabackup@cback ~]$ amadmin jet1 info picsl-cluster home-b

Current info for picsl-cluster home-b:
  Stats: dump rates (kps), Full:  37127.1,  -1.0,  -1.0
Incremental:  357.8,  -1.0,  -1.0
  compressed size, Full: -100.0%,-100.0%,-100.0%
Incremental: -100.0%,-100.0%,-100.0%
  Dumps: lev datestmp  tape file   origK   compK secs
  0  20140930  L50003-jet1  1 343685280 343685280 9257
  1  20141004  000439-jet1  18 59400 59400 166

This info shows a level 1 dump on DLE home-b 10/04 that was appropriately
small. Checking with 'find', there are only a few small files that have
changed since 10/04.
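
That check was roughly the following, where the mount point is an
illustrative placeholder:

  find /path/to/home-b -type f -newermt '2014-10-04' -print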

-M


amrecover - auto-reply to the load tape prompt?

2014-11-05 Thread Michael Stauffer
Amanda 3.3.4

Hi,

I ran an amrecover operation yesterday that required 3 tapes. I have a tape
changer, but still was queried each time the next tape was required, i.e.

  Load tape DMP014 now
  Continue? [Y/n/t]:

Is there a way to automatically reply to this, so the recover operation can
proceed on its own? This would be even more useful when doing a large
recovery.

Thanks

-M


amanda: empty disklist

2014-09-29 Thread Michael Stauffer
Amanda 3.3.4

Hi,

I've got another strange problem. It's probably user-error again, but I
can't figure it out. I added another disklist file to my disklist (see
below), and now amcheck and amdump don't seem to read anything.

amcheck reports 0 hosts checked.

When I run amdump, I get this:

FAILURE DUMP SUMMARY:
  planner: FATAL empty disklist /etc/amanda/jet1/disklist
  driver: FATAL Did not get DATE line from planner

/etc/amanda/jet1/disklist looks like this:

#includefile disklist.cfile-jet
#includefile disklist.cfile-jag
#includefile disklist.picsl-cluster-home

cfile-local jet-a /jet {
   gui-base
   include ./[a]*
   exclude ./aguirre
   }

Even if I just have it like this it doesn't work. This is what it was like
before today and it was working:

includefile disklist.cfile-jet
includefile disklist.cfile-jag

Permissions are like this, I haven't changed them:

[amandabackup@cback ~]$ ll /etc/amanda/jet1/disklist
-rw---. 1 amandabackup disk 1679 Sep 29 17:22 /etc/amanda/jet1/disklist


Thanks for any suggestions!

-M


trouble with DLE include/exclude

2014-09-26 Thread Michael Stauffer
Amanda 3.3.4

Hi,

I'm having some inconsistencies with include/exclude in DLE lists.

For example, these are some of the entries in my disklist file:

cfile-local jag-0-e /jag {
   gui-base
   # Also get everything starting with numerals
   include ./*
   exclude ./[f-zF-Z]*
   exclude ./bbcp
   exclude ./cnds
   }
cfile-local jag-f-j /jag {
   gui-base
   include ./[f-jF-J]*
   exclude ./jonghoyi
   }

The DLE jag-f-j turns out fine, but the DLE jag-0-e ends up including most
everything under /jag instead of stopping after /jag/[eE]*, and it does not
exclude /jag/bbcp even though that's explicitly excluded. Interestingly,
/jag/cnds *is* excluded.

I should mention in case this is somehow related, I have a disklist file
that includes other disklist files like so:

includefile disklist.cfile-jet
includefile disklist.cfile-jag

It recognizes all the DLE's defined in both files, not sure if this method
somehow screws with the file inclusion/exclusion.

Is there a way to get amanda to run estimates on DLE's without running
amdump? That would be helpful.
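
One partial answer for future readers: 'amadmin CONF estimate' (covered
elsewhere in this archive) prints a per-DLE server estimate, though it only
reflects data accumulated from earlier runs:

  amadmin jet1 estimate cfile-local jag-0-e
  amadmin jet1 estimate cfile-local jag-f-j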

Any suggestions? Thanks!

-M


planner error with sendsize permissions

2014-09-16 Thread Michael Stauffer
Amanda 3.3.4

Hi,

I made some changes to an amanda client. I created a new DLE that pointed
to a dir without world-read/exec permissions, and then amcheck gave this
error:

ERROR: cfile-local: service selfcheck: selfcheck: Failed to
chdir(/jag/cnds): Permission denied

I googled a bit and saw that amcheck runs as user amandabackup so it
couldn't read /jag/cnds. But amdump runs (or its client-side daemon runs)
as root so it can read this dir regardless.

So pushing fwd and running amdump, I got errors like this:

planner: ERROR cfile-local:  service sendsize: sendsize: Failed to
chdir(/jag/cnds): Permission denied

So on the client (cfile-local) I tried adding amandabackup to relevant
group to allow read/exec permission on /jag/cnds dir. But that wouldn't
work, I'm not sure why, possibly because the client is an NIS client. So I
removed the amandabackup user, created amandabackup user on the NIS server,
and changed the uid on the client for all amandabackup files to match the
new uid from the NIS server.
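
An alternative that avoids touching group membership at all would be a POSIX
ACL granting just the amanda user traversal/read rights (a sketch; requires
acl support on the filesystem):

  setfacl -m u:amandabackup:rx /jag/cnds
  # or, recursively plus a default ACL for new subdirs:
  setfacl -R -m u:amandabackup:rX -m d:u:amandabackup:rX /jag/cnds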

Now when I run amcheck/amdump I get errors that it can't create log files
on the client:

WARNING: cfile-local: selfcheck request failed: tcpm_recv_token: invalid
size: amandad: critical (fatal): Cannot create debug file
\/var/log/amanda/amandad/amandad.20140916160902.debug\: Permission
denied\namandad: Cannot create debug file
\/var/log/amanda/amandad/amandad.2014091
Client check: 1 host checked in 0.285 seconds.  1 problem found.

Logging on to the cfile-local client, I can read and create files in
/var/log/amanda/amandad/ without problems, as root or amandabackup.

Anyone have any suggestions? I've obviously done something to screw things
up, but can't figure out what. Thanks.

-M


Re: planner error with sendsize permissions

2014-09-16 Thread Michael Stauffer
Thanks for the reply.
I presume that the errors are saying the trouble is writing the log file on
the *client* side, correct?

On the client, it looks like this:

drwx--.  2 amandabackup disk 4096 Sep 16 16:11 /var/log/amanda/amandad/

And when logged in as user amandabackup I can write/delete files in there.

Not sure if this matters:
I'm using 'bsdtcp' for authentication. And if I try to use ssh keys for
amandabackup login from server to client it's not working for some reason.
But I think that shouldn't matter if I'm using bsdtcp, and also the error
looks like the server is getting to the client alright?

-M

On Tue, Sep 16, 2014 at 4:12 PM, Michael Stauffer mgsta...@gmail.com
wrote:

 Amanda 3.3.4

 Hi,

 I made some changes to an amanda client. I created a new DLE that pointed
 to a dir without world-read/exec permissions, and then amcheck gave this
 error:

 ERROR: cfile-local: service selfcheck: selfcheck: Failed to
 chdir(/jag/cnds): Permission denied

 I googled a bit and saw that amcheck runs as user amandabackup so it
 couldn't read /jag/cnds. But amdump runs (or its client-side daemon runs)
 as root so it can read this dir regardless.

 So pushing fwd and running amdump, I got errors like this:

 planner: ERROR cfile-local:  service sendsize: sendsize: Failed to
 chdir(/jag/cnds): Permission denied

 So on the client (cfile-local) I tried adding amandabackup to relevant
 group to allow read/exec permission on /jag/cnds dir. But that wouldn't
 work, I'm not sure why, possibly because the client is an NIS client. So I
 removed the amandabackup user, created amandabackup user on the NIS server,
 and changed the uid on the client for all amandabackup files to match the
 new uid from the NIS server.

 Now when I run amcheck/amdump I get errors that it can't create log files
 on the client:

 WARNING: cfile-local: selfcheck request failed: tcpm_recv_token: invalid
 size: amandad: critical (fatal): Cannot create debug file
 \/var/log/amanda/amandad/amandad.20140916160902.debug\: Permission
 denied\namandad: Cannot create debug file
 \/var/log/amanda/amandad/amandad.2014091
 Client check: 1 host checked in 0.285 seconds.  1 problem found.

 Logging on to the cfile-local client, I can read and create files in
 /var/log/amanda/amandad/ without problems, as root or amandabackup.

 Anyone have any suggestions? I've obviously done something to screw things
 up, but can't figure out what. Thanks.

 -M



Re: planner error with sendsize permissions

2014-09-16 Thread Michael Stauffer
The client-side full tree for logs looks like this:

[amandabackup@cfile ~]$ ls -ld /var /var/log/ /var/log/amanda/
/var/log/amanda/amandad/
drwxr-xr-x. 23 root root 4096 Nov 12  2013 /var
drwxr-xr-x. 13 root root 4096 Sep 14 05:00 /var/log/
drwxr-x---.  4 amandabackup disk 4096 Sep 16 14:37 /var/log/amanda/
drwx--.  2 amandabackup disk 4096 Sep 16 16:11 /var/log/amanda/amandad/

On the server side, it's this:

[amandabackup@cback jet1]$ ls -ld /var /var/log/ /var/log/amanda/
/var/log/amanda/amandad/
drwxr-xr-x. 23 root root 4096 Jun  5 17:58 /var
drwxr-xr-x. 13 root root 4096 Sep 14 03:35 /var/log/
drwxr-x---.  6 amandabackup disk 4096 Jan 31  2014 /var/log/amanda/
drwx--.  2 amandabackup disk 4096 Sep 16 15:11 /var/log/amanda/amandad/

I can write to /var/log/amanda/amandad on the server side as well, both as
amandabackup and root.
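
One check worth doing after a UID change like this: 'ls -l' resolves names
through NIS, so a listing can look right while the files on disk still carry
the old numeric uid. Comparing the numbers makes that visible:

  id -u amandabackup                                # uid amandad will run as
  ls -lnd /var/log/amanda /var/log/amanda/amandad   # numeric owner on disk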

Thanks again.

-M


On Tue, Sep 16, 2014 at 5:10 PM, Debra S Baddorf badd...@fnal.gov wrote:

 Well, there IS a log on both the client and the server side.  Best to
 check both.
 Those permissions from your client match what mine has, so they seem okay.
 Oh, do check them a level higher too, at your /var/log/amanda level,
 since that's the top of the logfile tree.

 I use bsdtcp (on some clients)   and use a  “.amandahosts”  file on every
 client.  I haven’t used ssh keys.
 Anybody else have an opinion here?

 Deb Baddorf
 Fermilab

 On Sep 16, 2014, at 3:45 PM, Michael Stauffer mgsta...@gmail.com wrote:

  Thanks for the reply.
  I presume that the errors are saying the trouble is writing the log file
 on the *client* side, correct?
 
  On the client, it looks like this:
 
  drwx--.  2 amandabackup disk 4096 Sep 16 16:11
 /var/log/amanda/amandad/
 
  And when logged in as user amandabackup I can write/delete files in
 there.
 
  Not sure if this matters:
  I'm using 'bsdtcp' for authentication. And if I try to use ssh keys for
 amandabackup login from server to client it's not working for some reason.
 But I think that shouldn't matter if I'm using bsdtcp, and also the error
 looks like the server is getting to the client alright?
 
  -M
 
  On Tue, Sep 16, 2014 at 4:12 PM, Michael Stauffer mgsta...@gmail.com
 wrote:
  Amanda 3.3.4
 
  Hi,
 
  I made some changes to an amanda client. I created a new DLE that
 pointed to a dir without world-read/exec permissions, and then amcheck gave
 this error:
 
  ERROR: cfile-local: service selfcheck: selfcheck: Failed to
 chdir(/jag/cnds): Permission denied
 
  I googled a bit and saw that amcheck runs as user amandabackup so it
 couldn't read /jag/cnds. But amdump runs (or its client-side daemon runs)
 as root so it can read this dir regardless.
 
  So pushing fwd and running amdump, I got errors like this:
 
  planner: ERROR cfile-local:  service sendsize: sendsize: Failed to
 chdir(/jag/cnds): Permission denied
 
  So on the client (cfile-local) I tried adding amandabackup to relevant
 group to allow read/exec permission on /jag/cnds dir. But that wouldn't
 work, I'm not sure why, possibly because the client is an NIS client. So I
 removed the amandabackup user, created amandabackup user on the NIS server,
 and changed the uid on the client for all amandabackup files to match the
 new uid from the NIS server.
 
  Now when I run amcheck/amdump I get errors that it can't create log
 files on the client:
 
  WARNING: cfile-local: selfcheck request failed: tcpm_recv_token: invalid
 size: amandad: critical (fatal): Cannot create debug file
 \/var/log/amanda/amandad/amandad.20140916160902.debug\: Permission
 denied\namandad: Cannot create debug file
 \/var/log/amanda/amandad/amandad.2014091
  Client check: 1 host checked in 0.285 seconds.  1 problem found.
 
  Logging on to the cfile-local client, I can read and create files in
 /var/log/amanda/amandad/ without problems, as root or amandabackup.
 
  Anyone have any suggestions? I've obviously done something to screw
 things up, but can't figure out what. Thanks.
 
  -M
 




Re: disklist and glob patterns

2014-03-18 Thread Michael Stauffer
For the sake of anyone following this thread in the future, some replies to
my last post:


On Tue, Mar 11, 2014 at 4:12 PM, Michael Stauffer mgsta...@gmail.com wrote:

 Thanks Jean-Louis.

 In my dumptype I have

   program GNUTAR

 but I don't know if this always means 'gtar' or if it's defined somewhere.
 I can't find a define for it.

 However, other 'include' expressions seem to be globbing correctly, even
 though the amgtar docs say gtar won't normally accept them, e.g. this
 expression is working:

  include ./[f-i]*

 So maybe GNUTAR is defined to use amgtar?


I haven't figured this out for sure, but based on the behavior of amdump
with the dumptype using 'program GNUTAR', yes, this means amdump is using
amgtar.


 Assuming I'm using amgtar, then it seems since it only manually globs
 expressions with a single forward slash, I should change my DLE to this,
 which includes the sub-dir in the DLE diskdevice:

 cfile.uphs.upenn.edu jet-grosspeople-h-z /jet/grosspeople {
gui-base
include ./[h-zH-Z]*
exclude ./Volumetric
}

 Does that seem right?


Yes, this works, i.e. when the DLE path itself references the sub-dir I'm
trying to break into smaller DLE's. This is consistent with the amgtar
documentation's explanation of processing only 'include' directives with a
single forward-slash.


 Regarding the related globbing issue:

 As an aside (or possibly related?) the case-sensitivity of globbing on my
 client is not behaving how I'd expect. e.g. 'echo [a-c]*' includes files
 that start with capital A-B, which I don't expect. Files starting with C
 are *not* listed. My shell option nocaseglob is off, and I've tried setting
 and unsetting it just to test. Nothing changes. I'll post about this last
 bit to another list too.


 It seems that with the shift to Unicode years ago, the sorting order
 doesn't follow ASCII order by default anymore.

 If I add 'export LC_COLLATE=C' to my shell, then 'echo [a-c]*' behaves as
 expected. Assuming that amgtar uses shell globbing to do its manual
 globbing of 'include' expressions, and since gtar seems to use shell
 globbing on its own for 'exclude' expressions, I figure I should add
 'export LC_COLLATE=C' to my amandabackup profiles. It seems also that for
 'exclude' expressions, I could use  '[[:lower:]]' to indicate lower case,
 e.g. But then that wouldn't work for 'include' expressions b/c of amgtar's
 manual globbing.


This is working for me, i.e. globbing of character ranges within DLE's
follows the older ASCII-centric rules after setting LC_COLLATE=C.
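
The difference is easy to demonstrate in a shell (illustrative; the output
depends on your files and locale):

  echo [a-c]*                          # under en_US.UTF-8 this may match A*, B* too
  LC_COLLATE=C bash -c 'echo [a-c]*'   # strict ASCII range: a*, b*, c* only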

-M


Re: amadmin estimate?

2014-03-18 Thread Michael Stauffer
Thanks Nathan - your replies are very helpful. Yeah that issue with a
trailing slash on a dir path bit me hard! :)

I've got my DLE's and globbing working well now, so for the time being will
not worry further about getting accurate estimates before a full dump.

-M


On Thu, Mar 6, 2014 at 6:19 PM, Nathan Stratton Treadway natha...@ontko.com
 wrote:

 On Wed, Mar 05, 2014 at 21:31:05 -0500, Michael Stauffer wrote:
  I want one DLE with all dirs starting with a, except for ./aguirre, then
  another with just ./aguirre
 
  cfile.uphs.upenn.edu jet-a /mnt/jet716s_1/jet-export/ {
 gui-base
 include ./[a]*
 exclude ./aguirre/
 }
  cfile.uphs.upenn.edu jet-aguirre /mnt/jet716s_1/jet-export/ {
 gui-base
 include ./aguirre/
 }
 
 [...]
  But then
 
   [amandabackup@cback ~]$ amadmin jet1 estimate cfile jet-a
   cfile.uphs.upenn.edu jet-a 0 2258244750
 
  shows the size of all ./[a]* dirs, including ./aguirre

 If you haven't already found it, you may find the Wiki pages on this
 topic to be useful:
   http://wiki.zmanda.com/index.php/Exclude_and_include_lists
   http://wiki.zmanda.com/index.php/How_To:Split_DLEs_With_Exclude_Lists

 In particular, take a close look at the last paragraph in the "Utilize
 an Exclude List" section

 http://wiki.zmanda.com/index.php/Exclude_and_include_lists#Utilize_an_Exclude_List
 (just before the "Do not include the data in the disklist" section
 heading), which explains that you don't want to use the trailing slash
 in the excluded directory name.  So that probably explains why ./aguirre
 is not successfully getting excluded from the jet-a dump.


  and then
 
   [amandabackup@cback ~]$ amadmin jet1 estimate cfile jet-aguirre
 
  doesn't output anything.

 (I'm not sure about this; does amadmin jet1 info cfile jet-aguirre
 return anything?)

 
  Also, this command returns instantly - I'm not sure how it could know the
  size of the DLE instantly after I make changes to it. In fact, if I
 change
  the jet-a DLE to include ./[a-b]*, the estimate command instantly returns
  the same value. Do I need to tell amanda to reprocess the disklist file?

 It seems like the estimate command might do some actual estimation for
 the highest-numbered dump level (or something), but overall the
 "estimate" and "info" subcommands just show info accumulated from
 earlier amdump runs and saved in the infofile directory.  (The
 files are just text files so you can take a look at them if you are
 curious.)  So the data they show won't reflect any changes you've made
 to the disklist file until after the next amdump run (and even then the
 level 0 info shown won't get updated until a level 0 dump actually
 happens, etc).

 (On Debian/Ubuntu this directory is usually found under
 /var/lib/amanda/[CONFIG]/curinfo; if you aren't sure where it is on your
 system, "amgetconf jet1 infofile" will tell you.)


 Nathan



 
 Nathan Stratton Treadway  -  natha...@ontko.com  -  Mid-Atlantic region
 Ray Ontko & Co.  -  Software consulting services  -
 http://www.ontko.com/
  GPG Key: http://www.ontko.com/~nathanst/gpg_key.txt   ID: 1023D/ECFB6239
  Key fingerprint = 6AD8 485E 20B9 5C71 231C  0C32 15F3 ADCD ECFB 6239



Re: can amanda auto-size DLE's?

2014-03-12 Thread Michael Stauffer
Thanks Stefan! I'll take a look.

How did this work for you in terms of daily, or almost daily, creating new
DLE's? I imagine it made for near-constant level 0 dumps? Maybe that was
what you needed anyway with lots of new data?

-M


On Wed, Mar 12, 2014 at 5:48 AM, Stefan G. Weichinger s...@amanda.org wrote:

 Am 05.03.2014 14:10, schrieb Stefan G. Weichinger:

  Aside from this I back then had some other scripts that generated
  include-lists resulting in chunks of <= X GB (smaller than one tape) ...
  I wanted to dump the videos in my mythtv-config and had the problem of
  very dynamic data in there ;-)
 
  So the goal was to re-create dynamic include-lists for DLEs every day (or
  even at the actual time of amdump). It worked mostly. I would have to
  dig that up again.

 Digged that up and put it on github:

 https://github.com/stefangweichinger/am_dyn_dles

 feel free to use or improve.

 Stefan


disklist and glob patterns

2014-03-11 Thread Michael Stauffer
Amanda 3.3.4

Hi,

I'm confused about using glob patterns in disklist. Am I right that if I
use dumptype include and exclude directives, I can use shell globbing?

This is part of my disklist:

cfile.uphs.upenn.edu jet-grosspeople-0-g /jet {
   gui-base
   #Get everything ^[0-9] and ^[a-zA-Z]
   include ./grosspeople
   exclude ./grosspeople/[h-zH-Z]*
   }
cfile.uphs.upenn.edu jet-grosspeople-h-z /jet {
   gui-base
   include ./grosspeople/[h-zH-Z]*
   exclude ./grosspeople/Volumetric
   }
cfile.uphs.upenn.edu jet-grosspeople-Volumetric /jet {
   gui-base
   include ./grosspeople/Volumetric
   }

My log shows me this warning:

STRANGE dumper cfile.uphs.upenn.edu jet-grosspeople-h-z 1 [sec 0.232 kb 10
kps 43.1 orig-kb 10]
  sendbackup: start [cfile.uphs.upenn.edu:jet-grosspeople-h-z level 1]
  sendbackup: info BACKUP=/bin/gtar
  sendbackup: info RECOVER_CMD=/bin/gtar -xpGf - ...
  sendbackup: info end
  ? /bin/gtar: ./grosspeople/[h-zH-Z]*: Warning: Cannot stat: No such file
or directory
  ? /bin/gtar: ./grosspeople/[h-zH-Z]*: Warning: Cannot stat: No such file
or directory
  | Total bytes written: 10240 (10KiB, 41MiB/s)
  sendbackup: size 10
  sendbackup: end

If I do a shell glob manually on the client, e.g. 'echo
/jet/grosspeople/[h-zH-Z]*', there's no warning and it shows the list of
files.

Interestingly, it does not complain about the same glob in the
 jet-grosspeople-0-g DLE. And, the reported size from 'amstatus jet1' of
the jet-grosspeople-0-g DLE looks to match the size I expect if the
'exclude ./grosspeople/[h-zH-Z]*' directive was properly executed.

Anyone know why it might be responding differently to the same glob?

As an aside (or possibly related?) the case-sensitivity of globbing on my
client is not behaving how I'd expect. e.g. 'echo [a-c]*' includes files
that start with capital A-B, which I don't expect. Files starting with C
are *not* listed. My shell option nocaseglob is off, and I've tried setting
and unsetting it just to test. Nothing changes. I'll post about this last
bit to another list too.

Thanks for any thoughts.

-M


Re: disklist and glob patterns

2014-03-11 Thread Michael Stauffer
Thanks Jean-Louis.

In my dumptype I have

  program GNUTAR

but I don't know if this always means 'gtar' or if it's defined somewhere.
I can't find a define for it.

However, other 'include' expressions seem to be globbing correctly, even
though the amgtar docs say gtar won't normally accept them, e.g. this
expression is working:

 include ./[f-i]*

So maybe GNUTAR is defined to use amgtar?

Assuming I'm using amgtar, then it seems since it only manually globs
expressions with a single forward slash, I should change my DLE to this,
which includes the sub-dir in the DLE diskdevice:

cfile.uphs.upenn.edu jet-grosspeople-h-z /jet/grosspeople {
   gui-base
   include ./[h-zH-Z]*
   exclude ./Volumetric
   }

Does that seem right?

Regarding the related globbing issue:

As an aside (or possibly related?) the case-sensitivity of globbing on my
 client is not behaving how I'd expect. e.g. 'echo [a-c]*' includes files
 that start with capital A-B, which I don't expect. Files starting with C
 are *not* listed. My shell option nocaseglob is off, and I've tried setting
 and unsetting it just to test. Nothing changes. I'll post about this last
 bit to another list too.


It seems that with the shift to Unicode years ago, the sorting order doesn't
follow ASCII order by default anymore.

If I add 'export LC_COLLATE=C' to my shell, then 'echo [a-c]*' behaves as
expected. Assuming that amgtar uses shell globbing to do its manual
globbing of 'include' expressions, and since gtar seems to use shell
globbing on its own for 'exclude' expressions, I figure I should add
'export LC_COLLATE=C' to my amandabackup profiles. It seems also that for
'exclude' expressions, I could use  '[[:lower:]]' to indicate lower case,
e.g. But then that wouldn't work for 'include' expressions b/c of amgtar's
manual globbing.

Thoughts?

Thanks

-M


On Tue, Mar 11, 2014 at 2:04 PM, Jean-Louis Martineau
martin...@zmanda.com wrote:

  Michael,

 Look at the amgtar man page if you are using amgtar:

    Similarly, include expressions are supplied to GNU-tar's --files-from
    option. This option ordinarily does not accept any sort of wildcards,
    but amgtar manually applies glob pattern matching to include
    expressions with only one slash. The expressions must still begin with
    ./, so this effectively only allows expressions like ./[abc]* or
    ./*.txt.

 Jean-Louis


 On 03/11/2014 01:53 PM, Michael Stauffer wrote:

 Amanda 3.3.4

  Hi,

  I'm confused about using glob patterns in disklist. Am I right that if I
 use dumptype include and exclude directives, I can use shell globbing?

  This is part of my disklist:

  cfile.uphs.upenn.edu jet-grosspeople-0-g /jet {
gui-base
#Get everything ^[0-9] and ^[a-zA-Z]
include ./grosspeople
exclude ./grosspeople/[h-zH-Z]*
}
 cfile.uphs.upenn.edu jet-grosspeople-h-z /jet {
 gui-base
include ./grosspeople/[h-zH-Z]*
exclude ./grosspeople/Volumetric
}
 cfile.uphs.upenn.edu jet-grosspeople-Volumetric /jet {
gui-base
include ./grosspeople/Volumetric
}

  My log shows me this warning:

  STRANGE dumper cfile.uphs.upenn.edu jet-grosspeople-h-z 1 [sec 0.232 kb
 10 kps 43.1 orig-kb 10]
   sendbackup: start [cfile.uphs.upenn.edu:jet-grosspeople-h-z level 1]
   sendbackup: info BACKUP=/bin/gtar
   sendbackup: info RECOVER_CMD=/bin/gtar -xpGf - ...
   sendbackup: info end
   ? /bin/gtar: ./grosspeople/[h-zH-Z]*: Warning: Cannot stat: No such file
 or directory
   ? /bin/gtar: ./grosspeople/[h-zH-Z]*: Warning: Cannot stat: No such file
 or directory
   | Total bytes written: 10240 (10KiB, 41MiB/s)
   sendbackup: size 10
   sendbackup: end

  If I do a shell glob manually on the client, e.g. 'echo
 /jet/grosspeople/[h-zH-Z]*', there's no warning and it shows the list of
 files.

  Interestingly, it does not complain about the same glob in the
  jet-grosspeople-0-g DLE. And, the reported size from 'amstatus jet1' of
 the jet-grosspeople-0-g DLE looks to match the size I expect if the
 'exclude ./grosspeople/[h-zH-Z]*' directive was properly executed.

  Anyone know why it might be responding differently to the same glob?

  As an aside (or possibly related?) the case-sensitivity of globbing on
 my client is not behaving how I'd expect. e.g. 'echo [a-c]*' includes files
 that start with capital A-B, which I don't expect. Files starting with C
 are *not* listed. My shell option nocaseglob is off, and I've tried setting
 and unsetting it just to test. Nothing changes. I'll post about this last
 bit to another list too.

  Thanks for any thoughts.

  -M





Re: amadmin estimate?

2014-03-06 Thread Michael Stauffer
OK, thanks Jean-Louis. I'll give that a try.

-M

On Thu, Mar 6, 2014 at 7:31 AM, Jean-Louis Martineau
martin...@zmanda.com wrote:


 $ man amadmin
    estimate [ hostname [ disks ]* ]*
        Print the server estimate for the DLEs; each output line has the
        following format:
          hostname diskname level size


 Server estimate can only be computed if you already backed up the DLEs a
 few times.

 If you want to run the real estimate, you can try:
/path/to/planner CONF
 and try to understand its output.

 Jean-Louis


 On 03/05/2014 09:31 PM, Michael Stauffer wrote:

 Amanda 3.3.4

 Hi,

 I'm trying to use amadmin's estimate command to get an idea if my DLE
 entries are correct.
 For example:

 I want one DLE with all dirs starting with a, except for ./aguirre, then
 another with just ./aguirre

 cfile.uphs.upenn.edu jet-a /mnt/jet716s_1/jet-export/ {

gui-base
include ./[a]*
exclude ./aguirre/
}
 cfile.uphs.upenn.edu jet-aguirre /mnt/jet716s_1/jet-export/ {

gui-base
include ./aguirre/
}

 This looks fine:

  [amandabackup@cback ~]$ amadmin jet1 disklist cfile jet-a | less

  line 8 (/etc/amanda/jet1/disklist):
 host cfile.uphs.upenn.edu:

 interface default
 disk jet-a:
 snip
 EXCLUDE LIST
 EXCLUDE FILE ./aguirre/
 INCLUDE LIST
 INCLUDE FILE ./[a]*
 snip

 But then

  [amandabackup@cback ~]$ amadmin jet1 estimate cfile jet-a
 cfile.uphs.upenn.edu jet-a 0 2258244750


 shows the size of all ./[a]* dirs, including ./aguirre

 and then

  [amandabackup@cback ~]$ amadmin jet1 estimate cfile jet-aguirre

 doesn't output anything.

 Also, this command returns instantly - I'm not sure how it could know the
 size of the DLE instantly after I make changes to it. In fact, if I change
 the jet-a DLE to include ./[a-b]*, the estimate command instantly returns
 the same value. Do I need to tell amanda to reprocess the disklist file?

 Thanks

 -M


Re: ye old dumpcycle/runspercycle/tapecycle/runtapes question

2014-03-05 Thread Michael Stauffer
OK, thanks again Jon for the detailed answers. I feel I'm ready to go. I'll
use a holding disk and run archive dumps for offsite storage.

-M


On Wed, Mar 5, 2014 at 12:49 AM, Jon LaBadie j...@jgcomp.com wrote:

 On Tue, Mar 04, 2014 at 05:16:05PM -0500, Michael Stauffer wrote:
  On Fri, Feb 28, 2014 at 1:07 AM, Jon LaBadie j...@jgcomp.com wrote:
 
   On Thu, Feb 27, 2014 at 07:19:24PM -0500, Michael Stauffer wrote:
Amanda 3.3.4
   
Hi,
   
Another long post from me - thanks to anyone who has time to read it.
   
I've been reading various docs and posts about dumpcycle,
 runspercycle,
tapecycle, runtapes online, but still can't figure out how I should
 set
things up for my needs.
   
I've got:
- ca. 30TB of data to backup
- I'll try to split this into 500GB or 1TB DLE's, but some will be
 much
easier to leave as 2-4TB DLE's
- I'm fine with a level 0 only every 30 days, especially considering
 a
   full
backup will take 8-10 days.
- I've got 1.5TB tapes that hold about 2TB of data with hardware
compression from testing
- my changer holds 35 tapes
  
   First question, will you be using more than 35 tapes.  I.e. will you
   periodically pull some recently used and replace with less recently
   used tapes?  If not, I think you are short on tapes.  A full dump will
   take 15 tapes.  You really want a MINIMUM of two full dumps.  I prefer
   more.  So 2x15 is two full dumps leaving only 5 tapes for incrementals.
   I don't think that will be enough.
  
 
  I was thinking of doing a periodic (at the start and then every 3 months)
  level 0 archive dump to a different set of tapes (probably four sets to
  retain 1 year's worth of dumps). Then I thought the library in the
 changer
  would be fine if it held less than two level 0 dumps at any time. I'd
  rather just go switch tapes once every 3 months than more often, and have
  offsite archive too. Does that seem reasonable?
 
 I wouldn't be concerned about how many currently in the library.
 I'd consider how many total I have.  Anyone who has administered
 tape backups for any significant time has a war story.  Tapes that
 seemed to write correctly but a month later were not readable.
 Damage, physical and environmental.  My own involves a large
 magnet that I did not realize was right next to my tape storage.

 If you only have one level 0 and it goes bad, you have no backup!

  
I'd like to have the changer always hold a level 0 dump and then the
 set
   of
subseuqent incrementals. So it should take about 15-20 tapes for a
 level
   0
of all DLE's, and then the incrementals over a month should easily
 fit
within 2-4 tapes, judging from experience here. When the next level 0
   dump
starts, I'd like amanda to use as many of the remaining 10 or so
 tapes
before overwriting tapes from the old level 0 dump (overwriting only
   tapes
of DLE's that have just had a new level 0, of course). (I will
   periodically
 do a level 0 dump to a different tape set for offsite archiving)
   
   Do you plan to let amanda do the scheduling?  Or are you going to force
   her kicking and screaming into the traditional schedule of one monster
   full dump followed by all incrementals.  Then another monster.  Blech.
 
 
  I'm fine with Amanda's scheduling. When I do my first round of amdump's
  though, it will be effectively a monster dump until all DLE's have a
 level
  0. Then I presume amanda can even things out using her own scheduling?
 


 You can ease amanda into your dumpcycle.  Add a couple of DLEs with each
 dump run until they are all added.  Avoids the initial monster dump.


How do I setup dumpcycle, runspercycle, tapecycle, runtapes to
 achieve
   this?
   
From the docs, it seems I'd want:
dumpcycle   30 days
runspercycle 15  #15, for running amdump every other day
runtapes   2   # to allow for DLE's that can get up to 4TB
tapecycle 34  # at least (runspercycle + 1) * runtapes - per docs
suggestion
# and leave one extra as a spare
   
Is this right?
   
Does this mean that when I run amdump, it will at most write two
tapes-worth of DLE's, and then stop? Then the next run will pick up
 from
there? I think so, but would like to make sure. I'm used to the
 manual
paradigm of run a full backup and then do incrementals. But this
 seems
that it will level out to be the same in the end as that?
   
HOWEVER, I'd rather have runtapes at 3 or 4 to minimize tape waste
 and
   make
it less critical to have evenly-sized DLE's, which will be difficult
 to
maintain. But if runtapes is 3, the recommended value of tapecycle
 would
   be
>= 48, more than my # of tapes. But in practice, 35 should still be
 plenty
   of
tapes to do what I want without overwriting level 0's prematurely. It
   seems
like tapecycle minimum should be more like '# of tapes per full
 backup +
   #
of tapes

Re: can amanda auto-size DLE's?

2014-03-05 Thread Michael Stauffer
Thanks Debra, this is very helpful.


On Mon, Mar 3, 2014 at 3:50 PM, Debra S Baddorf badd...@fnal.gov wrote:

 Comments on questions that are at the very bottom.

 On Mar 3, 2014, at 1:47 PM, Michael Stauffer mgsta...@gmail.com
  wrote:

3) I had figured that when restoring, amrestore has to read in a
 complete
dump/tar file before it can extract even a single file. So if I have
 a
single DLE that's ~2TB that fits (with multiple parts) on a single
 tape,
then to restore a single file, amrestore has to read the whole tape.
HOWEVER, I'm now testing restoring a single file from a large 2.1TB
 DLE,
and the file has been restored, but the amrecover operation is still
running, for quite some time after restoring the file. Why might
 this be
happening?
 
  Most (all?) current tape formats and drives can fast forward looking
  for end of file marks.  Amanda knows the position of the file on the
  tape and will have to drive go at high speed to that tape file.
 
  For formats like LTO, which have many tracks on the tape, I think it
  is even faster.  I think a TOC records where (i.e. which track) each
  file starts.  So it doesn't have to fast forward and back 50 times to
  get to the tenth file which is on the 51st track.
 
  Jon, Olivier and Debra - thanks for reading my long post and replying.
 
  OK this makes sense about searching for eof marks from what I've read.
 Seems like it's a good reason to use smaller DLE's.
 
3a) Where is the recovered dump file written to by amrecover? I
 can't see
space being used for it on either server or client. Is it streaming
 and
untar'ing in memory, only writing the desired files to disk?
  
  The tar file is not written to disk by amrecover.  The desired files are
  extracted as the tar archive streams.
 
  Thanks, that makes sense too from what I've seen (or not seen, actually
 - i.e. large temporary files).
 
So assuming all the above is true, it'd be great if amdump could
automatically break large DLE's into small DLE's to end up with
 smaller
dump files and faster restore of individual files. Maybe it would
 happen
only for level 0 dumps, so that incremental dumps would still use
 the same
sub-DLE's used by the most recent level 0 dump.
 
  Sure, great idea.  Then all you would need to configure is one DLE
  starting at /.  Amanda would break things up into sub-DLEs.
 
  Nope, sorry amanda asks the backup-admin to do that part of the
  config.  That's why you get the big bucks ;)
 
  Good point! A bit of job security there. ;)
 
Any thoughts on how I can approach this? If amanda can't do it, I
 thought I
might try a script to create DLE's of a desired size based on
 disk-usage,
then run the script everytime I wanted to do a new level 0 dump.
 That of
course would mean telling amanda when I wanted to do level 0's,
 rather than
amanda controlling it.
 
  Using a scheme like that, when it comes to recovering data, which DLE
  was the object in, last summer?  Remember that when you are asked to
  recover some data, you will probably be under time pressure with clients
  and bosses looking over your shoulder.  That's not the time you want
  to fumble around trying to determine which DLE the data is in.
 
  Yes, I can see the complications. That makes me think of some things:
 
  1) what do people do when they need to split a DLE? Just rely on
 notes/memory of DLE for restoring from older dumps if needed? Or just
 search using something like in question 3) below?

 I leave the old DLE  in my disk list, commented out.  Possibly with the
 date when it was removed.  This helps me to remember that
 I need to  UNcomment it before trying to restore using it.  I.E.  The DLE
 needs to be recreated  (needs to be in your disklist file)  when
 you run amrecover, in order for it to be a valid choice.  So if you are
 looking at an older tape,  you need to have those older DLEs  still in
 place.

 As I understand it, anyway!


 
  2) What happens if you split or otherwise modify a DLE during a cycle
 when normally the DLE would be getting an incremental dump? Will amanda do
 a new level 0 dump for it?

 Yes.  It's now a totally new DLE as far as amanda knows, so it gets a
 level 0 dump on the first backup.

 I've found "amdump myconfig --no-taper node-name [DLE-name]" useful
 sometimes.  It will do a backup of just the requested node and DLE
 but won't waste a tape on this small bit of data.  The data stays on my
 holding disk.  The next amdump will autoflush it to tape with everything
 else (assuming autoflush is set to AUTO or YES -- see your amanda.conf
 file).

 I use the  --no-taper   when I need to test a new DLE to make sure it
 works,  before the regular backup is due.Or perhaps,  to get that new
 level-0
 out of the way now,  so it doesn't extend the runtime of the regular
 amdump job.

 
  3) Is there a tool for searching for a path or filename across all dump
 indices? Or do I

Re: can amanda auto-size DLE's?

2014-03-05 Thread Michael Stauffer
Thanks again Jon - very helpful as usual.

-M

On Mon, Mar 3, 2014 at 7:01 PM, Jon LaBadie j...@jgcomp.com wrote:

 On Mon, Mar 03, 2014 at 02:47:53PM -0500, Michael Stauffer wrote:
  
 ...
 Any thoughts on how I can approach this? If amanda can't do it, I
   thought I
 might try a script to create DLE's of a desired size based on
   disk-usage,
 then run the script everytime I wanted to do a new level 0 dump.
 That
   of
 course would mean telling amanda when I wanted to do level 0's,
 rather
   than
 amanda controlling it.
  
   Using a scheme like that, when it comes to recovering data, which DLE
   was the object in, last summer?  Remember that when you are asked to
   recover some data, you will probably be under time pressure with
 clients
   and bosses looking over your shoulder.  That's not the time you want
   to fumble around trying to determine which DLE the data is in.
 
 
  Yes, I can see the complications. That makes me think of some things:
 
  1) what do people do when they need to split a DLE? Just rely on
  notes/memory of DLE for restoring from older dumps if needed? Or just
  search using something like in question 3) below?

 In addition to the report, amanda can also print a TOC for the tapes.
 This is a list of what DLE's and levels are on the tape.  It's a joke
 today, but the original reason was to put the TOC in the plastic box
 with the tape.  I print them out in 3-hole format (8.5x11) and file
 them.  I also add handwritten notes for things like a DLE split.

 When you are splitting a DLE, in the short run you probably remember
 the differences when you need to recover.  For archive recovery the
 written notes are helpful.

 
  2) What happens if you split or otherwise modify a DLE during a cycle
 when
  normally the DLE would be getting an incremental dump? Will amanda do a
 new
  level 0 dump for it?
 
 Splitting a DLE means there is 'at least' one new DLE.  All new DLE
 must get a level 0.  If the original DLE is still active, possibly
 excluding some things that go into the new DLE, it will continue on
 its current dumpcycle.  I would probably use amadmin to force it to
 do a level 0 though.

  3) Is there a tool for searching for a path or filename across all dump
  indices? Or do I just grep through all the index files
  in /etc/amanda/config-name/index/ ?

 No am-tool that I know of.  Just zgrep (the indexes are compressed).
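
 Concretely, something like this (assuming the usual index layout of
 host/disk/date_level.gz under the config's index directory):

   zgrep -l 'some/path/or/filename' /etc/amanda/jet1/index/*/*/*.gz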

 --
 Jon H. LaBadie j...@jgcomp.com
  11226 South Shore Rd.  (703) 787-0688 (H)
  Reston, VA  20190  (609) 477-8330 (C)



include, exclude and append

2014-03-05 Thread Michael Stauffer
Amanda 3.3.4

Hi again,

I'm setting up my DLE's now. Regarding include, exclude and append.

I figure 'include' and 'exclude' items are kept in separate lists? As such,
is this then the proper way to use these:

cfile.uphs.upenn.edu jet-k-l /jet {
   gui-base
   include ./[k-l]*
   exclude ./kable
   exclude append ./kimj
   exclude append ./lmo
   }

That is, the first 'exclude' does not have 'append' since it's a new
exclude list, separate from the include list. Is that right? If I just
always use 'exclude append', is that safe too? I figure it depends on
whether I define any exclude or include items in 'gui-base'. If I don't,
then I assume a new list is created with the first 'exclude' or 'exclude
append'?

Thanks

-M


amdump dry run?

2014-03-05 Thread Michael Stauffer
Amanda 3.3.4

Hi,

Is there a way to get amdump to do a dry run? The idea is to see everything
that will be dumped, to check dle settings. For completeness' sake I'd like
a list of all files that will be backed up.

I can get some idea of amadmin's disklist and estimate commands, but would
like more if possible.

-M


Re: amdump dry run?

2014-03-05 Thread Michael Stauffer
Yes, thanks I was trying to avoid that since it will take a long time.
Although...if I start the dump with --no-taper, is there a file that's
written with at the begin of the process with all the files to be dumped? I
assume the index itself is written at the end?

-M


On Wed, Mar 5, 2014 at 5:55 PM, Debra S Baddorf badd...@fnal.gov wrote:


 On Mar 5, 2014, at 4:26 PM, Michael Stauffer mgsta...@gmail.com
  wrote:

  Amanda 3.3.4
 
  Hi,
 
  Is there a way to get amdump to do a dry run? The idea is to see
 everything that will be dumped, to check dle settings. For completeness'
 sake I'd like a list of all files that will be backed up.
 
  I can get some idea of amadmin's disklist and estimate commands, but
 would like more if possible.
 
  -M


 Well ...
 you could do   amdump  config  --no-taper  nodename  DLE DLE2  nodename
  DLE3
 This would give you the index files and would leave the dumps on your
 holding disk without using up a tape.
 Since you wouldn't be going to tape,  it might over-fill your holding
 disk,  which is why I suggested doing
 a few DLEs  at a time.

 Deb


amadmin estimate?

2014-03-05 Thread Michael Stauffer
Amanda 3.3.4

Hi,

I'm trying to use amadmin's estimate command to get an idea if my DLE
entries are correct.
For example:

I want one DLE with all dirs starting with a, except for ./aguirre, then
another with just ./aguirre

cfile.uphs.upenn.edu jet-a /mnt/jet716s_1/jet-export/ {
   gui-base
   include ./[a]*
   exclude ./aguirre/
   }
cfile.uphs.upenn.edu jet-aguirre /mnt/jet716s_1/jet-export/ {
   gui-base
   include ./aguirre/
   }

This looks fine:

 [amandabackup@cback ~]$ amadmin jet1 disklist cfile jet-a | less

 line 8 (/etc/amanda/jet1/disklist):
host cfile.uphs.upenn.edu:
interface default
disk jet-a:
snip
EXCLUDE LIST
EXCLUDE FILE ./aguirre/
INCLUDE LIST
INCLUDE FILE ./[a]*
snip

But then

 [amandabackup@cback ~]$ amadmin jet1 estimate cfile jet-a
 cfile.uphs.upenn.edu jet-a 0 2258244750

shows the size of all ./[a]* dirs, including ./aguirre

and then

 [amandabackup@cback ~]$ amadmin jet1 estimate cfile jet-aguirre

doesn't output anything.

Also, this command returns instantly - I'm not sure how it could know the
size of the DLE instantly after I make changes to it. In fact, if I change
the jet-a DLE to include ./[a-b]*, the estimate command instantly returns
the same value. Do I need to tell amanda to reprocess the disklist file?

Thanks

-M


Re: ye old dumpcycle/runspercycle/tapecycle/runtapes question

2014-03-04 Thread Michael Stauffer
On Fri, Feb 28, 2014 at 1:07 AM, Jon LaBadie j...@jgcomp.com wrote:

 On Thu, Feb 27, 2014 at 07:19:24PM -0500, Michael Stauffer wrote:
  Amanda 3.3.4
 
  Hi,
 
  Another long post from me - thanks to anyone who has time to read it.
 
  I've been reading various docs and posts about dumpcycle, runspercycle,
  tapecycle, runtapes online, but still can't figure out how I should set
  things up for my needs.
 
  I've got:
  - ca. 30TB of data to backup
  - I'll try to split this into 500GB or 1TB DLE's, but some will be much
  easier to leave as 2-4TB DLE's
  - I'm fine with a level 0 only every 30 days, especially considering a
 full
  backup will take 8-10 days.
  - I've got 1.5TB tapes that hold about 2TB of data with hardware
  compression from testing
  - my changer holds 35 tapes

 First question, will you be using more than 35 tapes.  I.e. will you
 periodically pull some recently used and replace with less recently
 used tapes?  If not, I think you are short on tapes.  A full dump will
 take 15 tapes.  You really want a MINIMUM of two full dumps.  I prefer
 more.  So 2x15 is two full dumps leaving only 5 tapes for incrementals.
 I don't think that will be enough.


I was thinking of doing a periodic (at the start and then every 3 months)
level 0 archive dump to a different set of tapes (probably four sets to
retain 1 year's worth of dumps). Then I thought the library in the changer
would be fine if it held less than two level 0 dumps at any time. I'd
rather just go switch tapes once every 3 months than more often, and have
an offsite archive too. Does that seem reasonable?



  I'd like to have the changer always hold a level 0 dump and then the set
 of
  subsequent incrementals. So it should take about 15-20 tapes for a level
 0
  of all DLE's, and then the incrementals over a month should easily fit
  within 2-4 tapes, judging from experience here. When the next level 0
 dump
  starts, I'd like amanda to use as many of the remaining 10 or so tapes
  before overwriting tapes from the old level 0 dump (overwriting only
 tapes
  of DLE's that have just had a new level 0, of course). (I will
 periodically
  do a level 0 dump to a different tape set for offsite archiving)
 
 Do you plan to let amanda do the scheduling?  Or are you going to force
 her kicking and screaming into the traditional schedule of one monster
 full dump followed by all incrementals.  Then another monster.  Blech.


I'm fine with Amanda's scheduling. When I do my first round of amdump's
though, it will be effectively a monster dump until all DLE's have a level
0. Then I presume amanda can even things out using her own scheduling?


  How do I setup dumpcycle, runspercycle, tapecycle, runtapes to achieve
 this?
 
  From the docs, it seems I'd want:
  dumpcycle   30 days
  runspercycle 15  #15, for running amdump every other day
  runtapes   2   # to allow for DLE's that can get up to 4TB
  tapecycle 34  # at least (runspercycle + 1) * runtapes - per docs
  suggestion
  # and leave one extra as a spare
 
  Is this right?
 
  Does this mean that when I run amdump, it will at most write two
  tapes-worth of DLE's, and then stop? Then the next run will pick up from
  there? I think so, but would like to make sure. I'm used to the manual
  paradigm of run a full backup and then do incrementals. But this seems
  that it will level out to be the same in the end as that?
 
  HOWEVER, I'd rather have runtapes at 3 or 4 to minimize tape waste and
 make
  it less critical to have evenly-sized DLE's, which will be difficult to
  maintain. But if runtapes is 3, the recommended value of tapecycle would
 be
  = 48, more than my # of tapes. But in practice, 35 should still plenty
 of
  tapes to do what I want without overwriting level 0's prematurely. It
 seems
  like tapecycle minimum should be more like '# of tapes per full backup +
 #
  of tapes for incrementals over dumpcycle + 2 * runtapes', plus one or two
  as a buffer.

 The following sentence shows you are still thinking the non-amanda way.

The formula in the docs of (runspercycle + 1) * runtapes
  plays it very safe when you consider many incremental dumps will go to
  holding disk and be collected onto one tape periodically.

 There are not going to be a bunch of incrementls to collect onto one
 tape.  Each amdump run will be a mix of level 0's for some DLEs and
 incrementals for the others.

 With 15 runspercycle you will AVERAGE 2TB of level 0 plus ??GB of
 incrementals.  But remember you said some DLEs will be 4 or more TB.
 When they get level 0's you'll need more than 2 tapes.


Can I be sure that there will always be some level 0's in a run of amdump?
I guess because there's ~30TB of data and 15 runs, so that pretty much
guarantees a level 0 every run? If not, or if they're < tape-size, I'd
like them to go to holding disk so as not to greatly underutilize a tape.
I'm going to try to have my largest DLE < 2TB - there will be 3 or 4 of
those

Re: can amanda auto-size DLE's?

2014-03-03 Thread Michael Stauffer

   3) I had figured that when restoring, amrestore has to read in a
 complete
   dump/tar file before it can extract even a single file. So if I have a
   single DLE that's ~2TB that fits (with multiple parts) on a single
 tape,
   then to restore a single file, amrestore has to read the whole tape.
   HOWEVER, I'm now testing restoring a single file from a large 2.1TB
 DLE,
   and the file has been restored, but the amrecover operation is still
   running, for quite some time after restoring the file. Why might this
 be
   happening?

 Most (all?) current tape formats and drives can fast forward looking
 for end of file marks.  Amanda knows the position of the file on the
 tape and will have the drive go at high speed to that tape file.

 For formats like LTO, which have many tracks on the tape, I think it
 is even faster.  I think a TOC records where (i.e. which track) each
 file starts.  So it doesn't have to fast forward and back 50 times to
 get to the tenth file which is on the 51st track.


Jon, Olivier and Debra - thanks for reading my long post and replying.

OK this makes sense about searching for eof marks from what I've read.
Seems like it's a good reason to use smaller DLE's.


   3a) Where is the recovered dump file written to by amrecover? I can't
 see
   space being used for it on either server or client. Is it streaming and
   untar'ing in memory, only writing the desired files to disk?
 
 The tar file is not written to disk by amrecover.  The desired files are
 extracted as the tarchive streams.


Thanks, that makes sense too from what I've seen (or not seen, actually -
i.e. large temporary files).


   So assuming all the above is true, it'd be great if amdump could
   automatically break large DLE's into small DLE's to end up with smaller
   dump files and faster restore of individual files. Maybe it would
 happen
   only for level 0 dumps, so that incremental dumps would still use the
 same
   sub-DLE's used by the most recent level 0 dump.

 Sure, great idea.  Then all you would need to configure is one DLE
 starting at /.  Amanda would break things up into sub-DLEs.

 Nope, sorry amanda asks the backup-admin to do that part of the
 config.  That's why you get the big bucks ;)


Good point! A bit of job security there. ;)


   Any thoughts on how I can approach this? If amanda can't do it, I
 thought I
   might try a script to create DLE's of a desired size based on
 disk-usage,
   then run the script every time I wanted to do a new level 0 dump. That
 of
   course would mean telling amanda when I wanted to do level 0's, rather
 than
   amanda controlling it.

 Using a scheme like that, when it comes to recovering data, which DLE
 was the object in last summer?  Remember that when you are asked to
 recover some data, you will probably be under time pressure with clients
 and bosses looking over your shoulder.  That's not the time you want
 to fumble around trying to determine which DLE the data is in.


Yes, I can see the complications. That makes me think of some things:

1) what do people do when they need to split a DLE? Just rely on
notes/memory of DLE for restoring from older dumps if needed? Or just
search using something like in question 3) below?

2) What happens if you split or otherwise modify a DLE during a cycle when
normally the DLE would be getting an incremental dump? Will amanda do a new
level 0 dump for it?

3) Is there a tool for searching for a path or filename across all dump
indices? Or do I just grep through all the index files
in /etc/amanda/config-name/index/ ?

Thanks

-M


Re: can amanda auto-size DLE's?

2014-03-03 Thread Michael Stauffer
Yes thanks, this is what I do. I've had some complication running the
restore from the backup server rather than the client, but I'll worry about
that later.


On Fri, Feb 28, 2014 at 1:47 PM, Debra S Baddorf badd...@fnal.gov wrote:

 one small comment inserted below

 On Feb 27, 2014, at 11:33 PM, Jon LaBadie j...@jgcomp.com
  wrote:

  Oliver already provided good answers, I'll just add a bit.
 
  On Fri, Feb 28, 2014 at 10:35:08AM +0700, Olivier Nicole wrote:
  Michael,
 
  ...
 
  3) I had figured that when restoring, amrestore has to read in a
 complete
  dump/tar file before it can extract even a single file. So if I have a
  single DLE that's ~2TB that fits (with multiple parts) on a single
 tape,
  then to restore a single file, amrestore has to read the whole tape.
  HOWEVER, I'm now testing restoring a single file from a large 2.1TB
 DLE,
  and the file has been restored, but the amrecover operation is still
  running, for quite some time after restoring the file. Why might this
 be
  happening?
 
  You're touching the essence of tapes here: they are sequential access.
 
  So in order to access one specific DLE on the tape, the tape has to
  position at the very beginning of the tape and read everything until it
  reaches that dle (the nth file on the tape).
 
 
  Most (all?) current tape formats and drives can fast forward looking
  for end of file marks.  Amanda knows the position of the file on the
  tape and will have the drive go at high speed to that tape file.
 
  For formats like LTO, which have many tracks on the tape, I think it
  is even faster.  I think a TOC records where (i.e. which track) each
  file starts.  So it doesn't have to fast forward and back 50 times to
  get to the tenth file which is on the 51st track.
 
  Then it has to read sequentially all that file containing the backup of
  a dle to find the file(s) you want to restore. I am not sure about dump,
  but I am pretty sure that if your tar backup was a file on a disk
  instead of a file on a tape, it would read sequentially from the
  beginning of the tar file, in a similar way.
 
  Then it has to read until the end of the tar (not sure about dump) to
  make sure that there is no other file(s) satisfying your extraction
  criteria.
 
  So yes, if the file you want to extract is at the beginning of your tar,
  it will continue reading for a certain amount of time after the file has
  been extracted.
 
  Another reason this happens is the append feature of tar.  It is
  possible that a second, later version of the same file is in the tar
  file.  Amanda does not use this feature but tar does not know this.
  If you see the file you want has been recovered, you can interrupt
  amrecover.
 
  The recover log shows this on the client doing the recovery:
 
  [root@cfile amRecoverTest_Feb_27]# tail -f
  /var/log/amanda/client/jet1/amrecover.20140227135820.debug
  Thu Feb 27 17:23:12 2014: thd-0x25f1590: amrecover:
 stream_read_callback:
  data is still flowing
 
  3a) Where is the recovered dump file written to by amrecover? I can't
 see
  space being used for it on either server or client. Is it streaming and
  untar'ing in memory, only writing the desired files to disk?
 
  The tar file is not written to disk by amrecover.  The desired files are
  extracted as the tarchive streams.
 
  In the directory from where you started the amrecover command. With tar,
  it will create the same exact hierarchy, reflecting the original DLE.
 
  try:
 
  find . -name myfilename -print
 
  I strongly suggest you NOT use amrecover to extract directly to the
  filesystem.  Extract them in a temporary directory and once you are
  sure they are what you want, copy/move them to their correct location.

 To make this completely clear  (i.e. restoring guide for idiots)
 -  cd  /tmp/something
 -  amrecover  <config>

 The files will be restored into the /tmp/something  which is your current
 directory
 when you typed the amrecover command.


 
  ...
  So assuming all the above is true, it'd be great if amdump could
  automatically break large DLE's into small DLE's to end up with smaller
  dump files and faster restore of individual files. Maybe it would
 happen
  only for level 0 dumps, so that incremental dumps would still use the
 same
  sub-DLE's used by the most recent level 0 dump.
 
  Sure, great idea.  Then all you would need to configure is one DLE
  starting at /.  Amanda would break things up into sub-DLEs.
 
  Nope, sorry amanda asks the backup-admin to do that part of the
  config.  That's why you get the big bucks ;)
 
 
  The issue I have is that with 30TB of data, there'd be lots of manual
  fragmenting of data directories to get more easily-restorable DLE's
 sizes
  of say, 500GB each. Some top-level dirs in my main data drive have
 3-6TB
  each, while many others have only 100GB or so. Manually breaking these
 into
  smaller DLE's once is fine, but since data gets regularly moved, added
 and
  deleted, things would quickly change and upset my smaller DLE's.

not recognizing holdingdisk define?

2014-02-27 Thread Michael Stauffer
Amanda 3.3.4

Hi,

Seems like I'm having trouble getting amanda to use my holding disk.

Here's my setup in amanda.conf:

define holdingdisk holdingdisk1 {
  directory /mnt/amanda-holdingdisk1/
  use 4500Gb
  chunksize 100Gb
}

define dumptype gui-base {
   global
   program GNUTAR
   comment gui base dumptype dumped with tar
   compress none
   index yes
   maxdumps 2
   max-warnings 100
   allow-split true #Stauffer. Default is true.
   holdingdisk yes  #Stauffer. Default is auto.
}


When I run amcheck, it's not giving me any msg regarding holding disk,
positive or negative. A few things I've seen online have shown amcheck
reporting on holding disk status.

My tapetype has this:

#settings for splitting/spanning
part_size 190G # about 1/10 of tape size - should be used when using
holding disk
# these should be used when no holding disk is used - but cache size
determines
#   part size (AFAIK), and lots of small parts in a dump is said to be
inefficient
part_cache_type memory
part_cache_max_size 12G


I ran a level 0 dump and saw this:

  USAGE BY TAPE:
  Label          Time   Size      %  DLEs  Parts
  000406-jet1   21:21  2152G  152.1     2    181
  000407-jet1    7:17   660G   46.6     1     56

which looks to me like parts of 12G, close to my 14G cache size.

Do I need to do something else to tell amanda to use my holding disk?
Thanks.

-M


Re: not recognizing holdingdisk define?

2014-02-27 Thread Michael Stauffer
Great, thanks - that's the trick. I figured it was something like this but
FWIW, this isn't in the amanda.conf docs.

-M


On Thu, Feb 27, 2014 at 3:21 PM, Jean-Louis Martineau
martin...@zmanda.com wrote:

 Michael,

 You define the holdingdisk but you don't tell amanda to use it.

 Add:
   holdingdisk holdingdisk1
 after the define

 Jean-Louis


 On 02/27/2014 03:09 PM, Michael Stauffer wrote:

 Amanda 3.3.4

 Hi,

 Seems like I'm having trouble getting amanda to use my holding disk.

 Here's my setup in amanda.conf:

 define holdingdisk holdingdisk1 {
   directory /mnt/amanda-holdingdisk1/
   use 4500Gb
   chunksize 100Gb
 }

 define dumptype gui-base {
global
program GNUTAR
comment gui base dumptype dumped with tar
compress none
index yes
maxdumps 2
max-warnings 100
allow-split true #Stauffer. Default is true.
holdingdisk yes  #Stauffer. Default is auto.
 }


 When I run amcheck, it's not giving me any msg regarding holding disk,
 positive or negative. A few things I've seen online have shown amcheck
 reporting on holding disk status.

 My tapetype has this:

 #settings for splitting/spanning
 part_size 190G # about 1/10 of tape size - should be used when using
 holding disk
 # these should be used when no holding disk is used - but cache size
 determines
 #   part size (AFAIK), and lots of small parts in a dump is said to
 be inefficient
 part_cache_type memory
 part_cache_max_size 12G


 I ran a level 0 dump and saw this:

 USAGE BY TAPE:
 Label          Time   Size      %  DLEs  Parts
 000406-jet1   21:21  2152G  152.1     2    181
 000407-jet1    7:17   660G   46.6     1     56


 which looks to me like parts of 12G, close to my 14G cache size.

 Do I need to do something else to tell amanda to use my holding disk?
 Thanks.

 -M





Re: not recognizing holdingdisk define?

2014-02-27 Thread Michael Stauffer
Yes, it's 4.5TB.

It's not clear to me from the docs whether a level 0 dump gets written fully
to holding disk before it gets streamed to tape, or if streaming starts
once one or more chunks have been written to the holding disk - anyone
know? I'd prefer the latter for performance reasons. If the former, then I
figure I'd need two tapes-worth of holding disk space since I have two tape
drives and have set taper-parallel-write to 2.

-M


On Thu, Feb 27, 2014 at 3:43 PM, Jon LaBadie j...@jgcomp.com wrote:

 On Thu, Feb 27, 2014 at 03:09:20PM -0500, Michael Stauffer wrote:
  Amanda 3.3.4
 
  Hi,
 
  Seems like I'm having trouble getting amanda to use my holding disk.
 
  Here's my setup in amanda.conf:
 
  define holdingdisk holdingdisk1 {
directory /mnt/amanda-holdingdisk1/
use 4500Gb
chunksize 100Gb
  }

 Others pointed out the error, but is the size really 4.5 TeraBytes?

 --
 Jon H. LaBadie j...@jgcomp.com
  11226 South Shore Rd.  (703) 787-0688 (H)
  Reston, VA  20190  (609) 477-8330 (C)



can amanda auto-size DLE's?

2014-02-27 Thread Michael Stauffer
Amanda 3.3.4

Hi,

I'm guessing the answer is no since I haven't read about this, but maybe...

I'm hoping amanda might be able to auto-size DLE's into sub-DLE's of an
approximate size, say 500GB.

My understanding is this:

1) if I have multiple DLE's in my disklist, then tell amdump to perform a
level 0 dump of the complete config, each DLE gets written to tape as a
separate dump/tar file (possibly in parts if the tar is > part-size). Is
that right?

2) If multiple DLE's are processed in a single level 0 amdump run, with
each DLE < tape-size, then as many as can fit will be written to a single
tape, or possibly spanning tapes. But in any case it won't be a single DLE
per tape. Is that right? That looks like what I've observed so far.

3) I had figured that when restoring, amrestore has to read in a complete
dump/tar file before it can extract even a single file. So if I have a
single DLE that's ~2TB that fits (with multiple parts) on a single tape,
then to restore a single file, amrestore has to read the whole tape.
HOWEVER, I'm now testing restoring a single file from a large 2.1TB DLE,
and the file has been restored, but the amrecover operation is still
running, for quite some time after restoring the file. Why might this be
happening?

The recover log shows this on the client doing the recovery:

[root@cfile amRecoverTest_Feb_27]# tail -f
/var/log/amanda/client/jet1/amrecover.20140227135820.debug
Thu Feb 27 17:23:12 2014: thd-0x25f1590: amrecover: stream_read_callback:
data is still flowing

3a) Where is the recovered dump file written to by amrecover? I can't see
space being used for it on either server or client. Is it streaming and
untar'ing in memory, only writing the desired files to disk?

4) To restore from a single DLE's dump/tar file that's smaller than tape
size, and exists on a tape with multiple other smaller DLE dump/tar files,
amrestore can seek to the particular DLE's dump/tar file and only has to
read that one file. Is that right?

So assuming all the above is true, it'd be great if amdump could
automatically break large DLE's into small DLE's to end up with smaller
dump files and faster restore of individual files. Maybe it would happen
only for level 0 dumps, so that incremental dumps would still use the same
sub-DLE's used by the most recent level 0 dump.

The issue I have is that with 30TB of data, there'd be lots of manual
fragmenting of data directories to get more easily-restorable DLE's sizes
of say, 500GB each. Some top-level dirs in my main data drive have 3-6TB
each, while many others have only 100GB or so. Manually breaking these into
smaller DLE's once is fine, but since data gets regularly moved, added and
deleted, things would quickly change and upset my smaller DLE's.

Any thoughts on how I can approach this? If amanda can't do it, I thought I
might try a script to create DLE's of a desired size based on disk-usage,
then run the script every time I wanted to do a new level 0 dump. That of
course would mean telling amanda when I wanted to do level 0's, rather than
amanda controlling it.

Thanks for reading this long post!

-M


Re: not recognizing holdingdisk define?

2014-02-27 Thread Michael Stauffer
I see, I think then that's why there's a separate disk cache option
(tapetype:part-cache-type)?
So if have a very large DLE and I use disk for cache, it will spool chunks
to disk and start streaming to tape before the whole DLE is written to
disk? I want to avoid memory cache since I'd end up with lots of smaller
parts (~14GB on my system).

But how do I specify to use disk-caching for level 0, but holding disk
for >= level 1 dumps?

-M


On Thu, Feb 27, 2014 at 5:20 PM, Debra S Baddorf badd...@fnal.gov wrote:

 I believe the whole dump has to be done before it starts to write to tape.
   This prevents incomplete dumps from wasting space on the tape.

 I try to have numerous smaller DLEs, so that it takes several DLEs to fill
 a tape.  Thus, when any one of them is finished, it can start going
 to tape.   If you have only a single DLE which occupies the whole tape,
  then it does seem slower.  In that case, perhaps you don't even
 bother with a holding disk?

 Deb Baddorf


 On Feb 27, 2014, at 3:44 PM, Michael Stauffer mgsta...@gmail.com
  wrote:

  Yes, it's 4.5TB.
 
  It's not clear to me from the docs whether a level 0 dump gets written
 fully to holding disk before it gets streamed to tape, or if streaming
 starts once one or more chunks have been written to the holding disk -
 anyone know? I'd prefer the latter for performance reasons. If the former,
 then I figure I'd need two tapes-worth of holding disk space since I have
 two tape drives and have set taper-parallel-write to 2.
 
  -M
 
 
  On Thu, Feb 27, 2014 at 3:43 PM, Jon LaBadie j...@jgcomp.com wrote:
  On Thu, Feb 27, 2014 at 03:09:20PM -0500, Michael Stauffer wrote:
   Amanda 3.3.4
  
   Hi,
  
   Seems like I'm having trouble getting amanda to use my holding disk.
  
   Here's my setup in amanda.conf:
  
   define holdingdisk holdingdisk1 {
 directory /mnt/amanda-holdingdisk1/
 use 4500Gb
 chunksize 100Gb
   }
 
  Others pointed out the error, but is the size really 4.5 TeraBytes?
 
  --
  Jon H. LaBadie j...@jgcomp.com
   11226 South Shore Rd.  (703) 787-0688 (H)
   Reston, VA  20190  (609) 477-8330 (C)
 




ye old dumpcycle/runspercycle/tapecycle/runtapes question

2014-02-27 Thread Michael Stauffer
Amanda 3.3.4

Hi,

Another long post from me - thanks to anyone who has time to read it.

I've been reading various docs and posts about dumpcycle, runspercycle,
tapecycle, runtapes online, but still can't figure out how I should set
things up for my needs.

I've got:
- ca. 30TB of data to backup
- I'll try to split this into 500GB or 1TB DLE's, but some will be much
easier to leave as 2-4TB DLE's
- I'm fine with a level 0 only every 30 days, especially considering a full
backup will take 8-10 days.
- I've got 1.5TB tapes that hold about 2TB of data with hardware
compression from testing
- my changer holds 35 tapes

I'd like to have the changer always hold a level 0 dump and then the set of
subsequent incrementals. So it should take about 15-20 tapes for a level 0
of all DLE's, and then the incrementals over a month should easily fit
within 2-4 tapes, judging from experience here. When the next level 0 dump
starts, I'd like amanda to use as many of the remaining 10 or so tapes
before overwriting tapes from the old level 0 dump (overwriting only tapes
of DLE's that have just had a new level 0, of course). (I will periodically
do a level 0 dump to a different tape set for offsite archiving)

How do I setup dumpcycle, runspercycle, tapecycle, runtapes to achieve this?

From the docs, it seems I'd want:
dumpcycle   30 days
runspercycle 15  #15, for running amdump every other day
runtapes   2   # to allow for DLE's that can get up to 4TB
tapecycle 34  # at least (runspercycle + 1) * runtapes - per docs suggestion
# and leave one extra as a spare

Is this right?

Does this mean that when I run amdump, it will at most write two
tapes-worth of DLE's, and then stop? Then the next run will pick up from
there? I think so, but would like to make sure. I'm used to the manual
paradigm of run a full backup and then do incrementals. But this seems
that it will level out to be the same in the end as that?

HOWEVER, I'd rather have runtapes at 3 or 4 to minimize tape waste and make
it less critical to have evenly-sized DLE's, which will be difficult to
maintain. But if runtapes is 3, the recommended value of tapecycle would be
>= 48, more than my # of tapes. But in practice, 35 should still be plenty of
tapes to do what I want without overwriting level 0's prematurely. It seems
like tapecycle minimum should be more like '# of tapes per full backup + #
of tapes for incrementals over dumpcycle + 2 * runtapes', plus one or two
as a buffer. The formula in the docs of (runspercycle + 1) * runtapes
plays it very safe when you consider many incremental dumps will go to
holding disk and be collected onto one tape periodically.

I was hung up on thinking that runtapes had to be enough to fill an entire
level 0 dump, but it seems that's the non-amanda way of thinking?

Thanks

-M


proper amrecover end?

2014-02-27 Thread Michael Stauffer
Amanda 3.3.4

Hi,

I just finished an amrecover test. The run's debug file shows this at the
end:

Thu Feb 27 20:47:14 2014: thd-0x25f1590: amrecover: stream_read_callback:
data is still flowing
Thu Feb 27 20:47:25 2014: thd-0x25f1590: amrecover: stream_read_callback:
data is still flowing
Thu Feb 27 20:47:31 2014: thd-0x25f1590: amrecover:
security_stream_seterr(0x26535c0, EOF)
Thu Feb 27 20:47:31 2014: thd-0x25f1590: amrecover:
security_stream_close(0x26535c0)
Thu Feb 27 20:47:52 2014: thd-0x25f1590: amrecover:
security_stream_seterr(0x264b560, EOF)
Thu Feb 27 20:47:52 2014: thd-0x25f1590: amrecover: bytes read:
2312442624000
Thu Feb 27 20:47:52 2014: thd-0x25f1590: amrecover:
security_stream_close(0x264b560)

About 20 minutes before amrecover ended, the tape library issued a warning
that the tape drive needs cleaning asap. In the past that has
interfered with operations.

So is the above a normal ending for amrecover?

Is there a command to get a report on an amrecover run, or view its
progress? Or do I just look at this debug file?

Thanks

-M


Re: is part_cache_max_size shared?

2014-02-26 Thread Michael Stauffer
Thanks! That's very good to know.

As far as aborting the current amdump, do I just SIGINT it and then run
amcleanup?

-M


On Wed, Feb 26, 2014 at 7:22 AM, Jean-Louis Martineau
martin...@zmanda.com wrote:

 On 02/25/2014 04:24 PM, Michael Stauffer wrote:

 Amanda 3.3.4

 Hi,

 If amanda is using memory cache for splits, is the cache shared between
 simultaneous amdump runs, or does each try to grab that much memory?

 I'm setup like this:

 part_cache_type memory
  part_cache_max_size 20G

 and with

   taper-parallel-write 2

 and

   inparallel 10

 Thanks

 -M


 Each taper-parallel-write allocates part_cache_max_size of memory.

 Jean-Louis



Re: is part_cache_max_size shared?

2014-02-26 Thread Michael Stauffer
Thanks, 5 processes failed to terminate the first time so I ran it again.
Seems all good now.

-M


On Wed, Feb 26, 2014 at 11:19 AM, Jean-Louis Martineau martin...@zmanda.com
 wrote:

 On 02/26/2014 11:17 AM, Michael Stauffer wrote:

 Thanks! That's very good to know.

 As far as aborting the current amdump, do I just SIGINT it and then run
 amcleanup?


 use 'amcleanup -k CONF', it should kill all processes on the amanda
 server. Some process on the amanda client might not be killed.

 Jean-Louis


 -M



 On Wed, Feb 26, 2014 at 7:22 AM, Jean-Louis Martineau 
 martin...@zmanda.com wrote:

 On 02/25/2014 04:24 PM, Michael Stauffer wrote:

 Amanda 3.3.4

 Hi,

 If amanda is using memory cache for splits, is the cache
 shared between simultaneous amdump runs, or does each try to
 grab that much memory?

 I'm setup like this:

 part_cache_type memory
  part_cache_max_size 20G

 and with

   taper-parallel-write 2

 and

   inparallel 10

 Thanks

 -M


 Each taper-parallel-write allocates part_cache_max_size of memory.

 Jean-Louis






Best way to abort amdump?

2014-02-25 Thread Michael Stauffer
Amanda 3.3.4

Hi,

I've got a very slow amdump run going, looks like because I set the
part_cache_max_size too high and the server's memory is filled up.

What's the best way to abort this? I'm not worried about preserving what's
been written to tape so far. Do I just run 'amcleanup -k'? Or manually
abort the amdump process first and then run 'amcleanup -k'? Something else?
Thanks.

After that, do I amrmtape on the tapes that have been used, and then
amlabel them again?

The backup is currently running straight to tape for a level 0 backup.
Haven't set up a holding disk yet.

-M


is part_cache_max_size shared?

2014-02-25 Thread Michael Stauffer
Amanda 3.3.4

Hi,

If amanda is using memory cache for splits, is the cache shared between
simultaneous amdump runs, or does each try to grab that much memory?

I'm setup like this:

   part_cache_type memory
   part_cache_max_size 20G

and with

  taper-parallel-write 2

and

  inparallel 10

Thanks

-M


Re: autolabel and automatic labeling

2014-02-04 Thread Michael Stauffer
Thanks Jon. What I'm unsure about is this part from the amanda.conf page,
about autolabel param:

autolabel string [any] [other-config] [non-amanda] [volume-error] [empty]

Default: not set. When set, this directive will cause Amanda to
automatically write an Amanda tape label to most volumes she encounters.
This option is DANGEROUS because when set, Amanda may erase near-failing
tapes or tapes accidentally loaded in the wrong slot.

What does "encounters" mean here? If amanda comes across a tape during an
operation other than 'amlabel', will a label be written if it matches any
of the cases defined by this param (presumably by amanda calling amlabel)?
Or does this only apply to when the operator manually runs 'amlabel'?

If I only define autolabel like so:

autolabel %b-%c

is it safe to assume Amanda will never label anything on her own?

-M


On Mon, Feb 3, 2014 at 7:18 PM, Jon LaBadie j...@jgcomp.com wrote:

 On Mon, Feb 03, 2014 at 06:47:32PM -0500, Michael Stauffer wrote:
  Hi,
 
  I'm hoping to use autolabel like this:
 
autolabel $b-$c-config-name
 
  so that I can do tape labeling like this to easily label new tapes:
 
amlabel config-name slot N
 
  With my definition of autolabel above, will amanda *never* automatically
  label a tape that it encounters and is unsure of? This is way I infer
 from
  the autolabel discussion in amanda.conf man page, but it's not stated
  explicitly. I'd like to play it safe (at least for now) and always
 manually
  label new tapes.

 It is explicit on the amlabel manpage:

   As a precaution, amlabel will not write a label if the volume
already contains an active label or if the label specified is on an
active tape.  The [-f] (force) flag bypasses these verifications.

 jl
 --
 Jon H. LaBadie j...@jgcomp.com
  11226 South Shore Rd.  (703) 787-0688 (H)
  Reston, VA  20190  (609) 477-8330 (C)



Re: autolabel and automatic labeling

2014-02-04 Thread Michael Stauffer
OK, thanks Jon.

-M


On Tue, Feb 4, 2014 at 2:57 PM, Jon LaBadie j...@jgcomp.com wrote:

 On Tue, Feb 04, 2014 at 11:18:24AM -0500, Michael Stauffer wrote:
  Thanks Jon. What I'm unsure about is this part from the amanda.conf page,
  about autolabel param:
 
  autolabel string [any] [other-config] [non-amanda] [volume-error] [empty]
 
  Default: not set. When set, this directive will cause Amanda to
  automatically write an Amanda tape label to most volumes she encounters.
  This option is DANGEROUS because when set, Amanda may erase near-failing
  tapes or tapes accidentally loaded in the wrong slot.
 
  What does "encounters" mean here? If amanda comes across a tape during an
  operation other than 'amlabel', will a label be written if it matches
 any
  of the cases defined by this param (presumably by amanda calling
 amlabel)?
  Or does this only apply to when the operator manually runs 'amlabel'?
 
  If I only define autolabel like so:
 
  autolabel %b-%c
 
  is it safe to assume Amanda will never label anything on her own?
 

 This is based on my reading as I've always manually labelled my tapes.

 autolabel seems quite protective and thus has options to override
 the protections.  It will not autolabel empty tapes, amanda tapes
 from other configs, tapes with non-amanda data on them, and
 unreadable tapes.

 I think that eliminates all tapes except those already labelled
 for the current config.  There I can think of 4 conditions:

   In Tapelist?   Marked     Active   Expected Autolabel Action
   yes            no-reuse   NA       skip
   yes            reuse      yes      skip  (too recent)
   yes            reuse      no       skip  (use for the dump)
   no             NA         NA       ?

 The only one I can see being autolabelled is the last.

 It seems to me that you have to tell autolabel under what conditions
  it is allowed to autolabel.  Then the responsibility is yours to ensure
 no valuable tape meeting those conditions is ever encountered by amdump.

 HTH
 Jon

  On Mon, Feb 3, 2014 at 7:18 PM, Jon LaBadie j...@jgcomp.com wrote:
 
   On Mon, Feb 03, 2014 at 06:47:32PM -0500, Michael Stauffer wrote:
Hi,
   
I'm hoping to use autolabel like this:
   
  autolabel $b-$c-config-name
   
so that I can do tape labeling like this to easily label new tapes:
   
  amlabel config-name slot N
   
With my definition of autolabel above, will amanda *never*
 automatically
label a tape that it encounters and is unsure of? This is what I infer
   from
the autolabel discussion in the amanda.conf man page, but it's not stated
explicitly. I'd like to play it safe (at least for now) and always
   manually
label new tapes.
  
   It is explicit on the amlabel manpage:
  
 As a precaution, amlabel will not write a label if the volume
  already contains an active label or if the label specified is on an
  active tape.  The [-f] (force) flag bypasses these verifications.
  
   jl
   --
   Jon H. LaBadie j...@jgcomp.com
11226 South Shore Rd.  (703) 787-0688 (H)
Reston, VA  20190  (609) 477-8330 (C)
  
  End of included message 

 --
 Jon H. LaBadie j...@jgcomp.com
  11226 South Shore Rd.  (703) 787-0688 (H)
  Reston, VA  20190  (609) 477-8330 (C)



Re: 'labelstr' param in amanda.conf

2014-02-03 Thread Michael Stauffer
Deb, thanks very much for this detailed answer.

So if I understand correctly, if I follow your advice and have a number of
entries in my config's disklist, and call 'amdump myconfig', each DLE entry is
handled as a separate dump as amdump runs? That is, my main concern is to
do dumps in chunks so that with large full dumps, if there's an error,
there's no single massive dump that gets compromised and must be repeated.
Also, worst-case scenario restoration from smaller dumps will be easier.
Does that sound right?

When doing incremental dumps with this setup, does amdump combine the
changed files from various disklist entries until it reaches the minimum
dumpsize I set to fill a tape? Or is each DLE still handled separately even
if none of them is large enough to fill a tape (or match whatever dump size
limit is set).

Also, I like the advice on doing archive dumps, seems straightforward. I'll
be needing to do those too.

-M


On Thu, Jan 16, 2014 at 11:41 PM, Debra S Baddorf badd...@fnal.gov wrote:

  Since others have diverted into providing answers for your tape type,
 I'll resurrect this question and give some sort of an answer.

  Yes, you probably need just one configuration and one group of tapes
 labelled per 'labelstr'.  You create a list of DLEs to break things into
 smaller-than-tape-sized chunks.
node  /diskname  dumpname  [spindle  Local ] (optional)
node   /disk2/aDir  tar-type-dumpname  [...]
node   /disk2/another   tar-type-dumpname  [...]
 node2  /diskname  dumpname  [spindle  Local ] (optional)
node2   /disk2/aDir  tar-type-dumpname  [...]
node2   /disk2/another   tar-type-dumpname  [...]
  Etc

  Then, optimally, you create a cron job for each night (or maybe only
 week nights), something like this:
   21 00  *  *  *  amdump  myconfig
 And let Amanda shuffle your DLEs into a similarly sized amount each night.

  Stop reading here, for the simplest setup.

  ---

  If perchance the manager of nodeN insists on checking his dumpdates log
 on the same day each week, you CAN force nodeN with another cron entry &
 script:

  If Wednesday then   amadmin  myconfig  force nodeN  *  
 Or perhaps only amadmin  myconfig  force  nodeN  /usr

  Things like that.  It forces Amanda to do a level 0 THAT day, on
 whatever DLEs you do it for.  It messes up Amanda's balancing, but if you
 have a lot of DLEs it'll even out somewhat.

  You also CAN do a force on all DLEs ( * * )  on the day that you want a
 whole set of level 0's.  But that really messes up the balancing  isn't
 the best idea.

  -

  I personally DO run a second configuration  (archive) and a second set
 of tapes (same tape drive) so that I can get an archive tape.  I set it no
 record so it doesn't change the dumpdates, and so it doesn't stop other
 level 0's from being done in the normal configuration .   I do a force on
 all my DLEs and run it once a month.  These tapes go offsite, so I want my
 daily configuration to have its own set of level 0's.  Level 0's at least
 once a week (my dumpcycle) but then I keep 5-6 weeks of tapes before I
 recycle and start overwriting those level 0's.



 Deb

 On Jan 16, 2014, at 5:01 PM, Michael Stauffer mgsta...@gmail.com
 wrote:

   Thanks for the reply. I don't have multiple configurations, I'm just
 trying to figure out how to set things up.
 So it sounds like I can have a single configuration that uses the same
 group of tapes as defined by 'labelstr', and I use the DLE lists to break
 things up into manageable chunks. Then with amdump, I can specify the
 individual DLE's to be dumped?

  -M


 On Thu, Jan 9, 2014 at 1:58 PM, Jean-Louis Martineau martin...@zmanda.com
  wrote:

  On 01/09/2014 01:47 PM, Michael Stauffer wrote:


 Hi,

 I'm setting up amanda 3.3.4.

 Regarding 'labelstr' in amanda.conf:

 The documentation says: If multiple configurations are run from the
 same tape server host, it is helpful to set their labels to different
 strings (for example, DAILY[0-9][0-9]* vs. ARCHIVE[0-9][0-9]*) to avoid
 overwriting each other's tapes.

 Does this mean that if I have multiple configurations in order to break
 up a large 30TB data set into managable chunks, each configuration will
 have to have a particular set of tapes assigned to it? That seems very
 awkward if so.


  yes

 Why do you have multiple configurations?
 You should do it with one configuration and multiple dle.

 Jean-Louis





autolabel and automatic labeling

2014-02-03 Thread Michael Stauffer
Hi,

I'm hoping to use autolabel like this:

  autolabel $b-$c-config-name

so that I can do tape labeling like this to easily label new tapes:

  amlabel config-name slot N

With my definition of autolabel above, will amanda *never* automatically
label a tape that it encounters and is unsure of? This is what I infer from
the autolabel discussion in the amanda.conf man page, but it's not stated
explicitly. I'd like to play it safe (at least for now) and always manually
label new tapes.

Thanks.

-M


Re: tapetype for IBM ULTRIUM-TD5?

2014-01-30 Thread Michael Stauffer
Thanks Jean, that was it. The tape drives themselves are /dev/nst0 and
/dev/nst1. I've got amtapetype working now, excellent. I'm trying to figure
out how to get the specifics on the different device modes (i.e.
/dev/nst0a, etc), but if I need help I'll make a separate post.

-M


On Tue, Jan 28, 2014 at 6:23 PM, Jean-Louis Martineau
martin...@zmanda.com wrote:

 /dev/sg2 is the changer device not the tape device, what is the tape
 device? It is a /dev/nst? device



 On 01/28/2014 05:54 PM, Michael Stauffer wrote:

 Tom and Jon, thanks for the great replies. I'm just getting back to this
 project again.

 I've tried this

amtapetype -f -b 524288 -t IBM-ULTRIUM-TD5 /dev/sg2 2>&1 | tee
 tapetype-ultrium-512k-block

 and get this

 amtapetype: Error writing label 'amtapetype-1614155207': File
 /dev/sg2 is not a tape device at /usr/sbin/amtapetype line 93.

 I figure I need to define a tape device that includes /dev/sg2? I've
 added the changer definition to amanda.conf:

 define changer scalar_i500 {
 tpchanger chg-robot:/dev/sg2
 property tape-device 0=tape:/dev/sg2
 }
 tpchanger scalar_i500

 But am getting the same error. I've also tried

 amtapetype -f -b 524288 -t IBM-ULTRIUM-TD5 scalar_i500

 and get

 amtapetype: Error writing label 'amtapetype-1739790301': Can't
 open tape device scalar_i500: No such file or directory at
 /usr/sbin/amtapetype line 93.

 mtx works fine on /dev/sg2. Any suggestions? Thanks.

 -M


 On Thu, Jan 16, 2014 at 7:31 PM, Tom Robinson 
  tom.robin...@motec.com.au wrote:

 On 17/01/14 10:55, Jon LaBadie wrote:
  On Thu, Jan 16, 2014 at 06:04:14PM -0500, Michael Stauffer wrote:
  Hi,
 
  I'm setting up amanda 3.3.4. I can't find a definition for
 changer's drives
  (IBM ULTRIUM-TD5 LTO-5) in /etc/amanda/template.d/tapetypes
 
  Can someone point me to a source for this, or to where I can
 learn how to
  determine the params I need for the drives? Thanks
 
  The command amtapetype should be in your amanda server installation.
  It can be used to determine the values for your site.
 
  The only tapetype parameter amanda actually uses is capacity.
  So you could hand create a definition and be close enough
  for government work.
 
  Jon

 Created on OmniOS v11 r151006:

 $ amtapetype -f -b 524288 -t ULT3580-TD5 /dev/rmt/0b 2>&1 | tee
 /etc/opt/csw/amanda/weekly/tapetype-512k-block
 Checking for FSF_AFTER_FILEMARK requirement
 Applying heuristic check for compression.
 Wrote random (uncompressible) data at 85721088 bytes/sec
 Wrote fixed (compressible) data at 295261525.33 bytes/sec
 Compression: enabled
 Writing one file to fill the volume.
 Wrote 1519480995840 bytes at 85837 kb/sec
 Got LEOM indication, so drive and kernel together support LEOM
 Writing smaller files (15194390528 bytes) to determine filemark.
 device-property FSF_AFTER_FILEMARK false
 define tapetype ULT3580-TD5 {
 comment Created by amtapetype; compression enabled
 length 1483868160 kbytes
 filemark 868 kbytes
 speed 85837 kps
 blocksize 512 kbytes
 }
 # for this drive and kernel, LEOM is supported; add
 #   device-property LEOM TRUE
 # for this device.

 I ran amtapetype for a number of different block sizes (32k, 256k,
 512k and 2048k) but I found that
 block sizes over 512 kbytes gave me driver issues and no LEOM
 capability. My amdump reports show
 fairly reasonable tape streaming speeds:

 Avg Tp Write Rate (k/s)   143476 143842 5723.0

 If you have a different system just run amtapetype and be patient.
 It will be worth the wait.

 Regards,
 Tom

 Tom Robinson
 IT Manager/System Administrator

 MoTeC Pty Ltd

 121 Merrindale Drive
 Croydon South
 3136 Victoria
 Australia

 T: +61 3 9761 5050
 F: +61 3 9761 5051
 E: tom.robin...@motec.com.au








Re: tapetype for IBM ULTRIUM-TD5?

2014-01-30 Thread Michael Stauffer
Thanks Jon, the tape drives themselves are /dev/nst0 and /dev/nst1.

I must've been confused partly by the amtapetype man page which has
[config] as a parameter, but looking more closely, it's optional.

-M


On Tue, Jan 28, 2014 at 11:02 PM, Jon LaBadie j...@jgcomp.com wrote:

 On Tue, Jan 28, 2014 at 05:54:21PM -0500, Michael Stauffer wrote:
  Tom and Jon, thanks for the great replies. I'm just getting back to this
  project again.
 
  I've tried this
 
 amtapetype -f -b 524288 -t IBM-ULTRIUM-TD5 /dev/sg2 2>&1 | tee
  tapetype-ultrium-512k-block
 
  and get this
 
  amtapetype: Error writing label 'amtapetype-1614155207': File /dev/sg2 is
  not a tape device at /usr/sbin/amtapetype line 93.
 
  I figure I need to define a tape device that includes /dev/sg2? I've
 added
  the changer definition to amanda.conf:

 No, amtapetype needs no configuration in amanda files.
 Note the command line does not include a Config Name
 like Daily.  That is because it is not related to
 any particular config.

 ...
 Tape libraries usually use a separate device for the
 changer mechanism and for the tape drive(s).  You are
 using the changer device while trying to operate the
 drive.

  mtx works fine on /dev/sg2. Any suggestions? Thanks.
 

 mtx works with the changer, so that is not surprising.

 mt works with the drives.  As do things like dd, tar,
 cat, etc.  Make sure you can write to, and read from,
 the drive device before using the amanda commands.

 Jon
 --
 Jon H. LaBadie j...@jgcomp.com
  11226 South Shore Rd.  (703) 787-0688 (H)
  Reston, VA  20190  (609) 477-8330 (C)



Re: tapetype for IBM ULTRIUM-TD5?

2014-01-28 Thread Michael Stauffer
Tom and Jon, thanks for the great replies. I'm just getting back to this
project again.

I've tried this

   amtapetype -f -b 524288 -t IBM-ULTRIUM-TD5 /dev/sg2 2>&1 | tee
tapetype-ultrium-512k-block

and get this

amtapetype: Error writing label 'amtapetype-1614155207': File /dev/sg2 is
not a tape device at /usr/sbin/amtapetype line 93.

I figure I need to define a tape device that includes /dev/sg2? I've added
the changer definition to amanda.conf:

define changer scalar_i500 {
tpchanger chg-robot:/dev/sg2
property tape-device 0=tape:/dev/sg2
}
tpchanger scalar_i500

But am getting the same error. I've also tried

amtapetype -f -b 524288 -t IBM-ULTRIUM-TD5 scalar_i500

and get

amtapetype: Error writing label 'amtapetype-1739790301': Can't open tape
device scalar_i500: No such file or directory at /usr/sbin/amtapetype line
93.

mtx works fine on /dev/sg2. Any suggestions? Thanks.

-M


On Thu, Jan 16, 2014 at 7:31 PM, Tom Robinson tom.robin...@motec.com.au wrote:

 On 17/01/14 10:55, Jon LaBadie wrote:
  On Thu, Jan 16, 2014 at 06:04:14PM -0500, Michael Stauffer wrote:
  Hi,
 
  I'm setting up amanda 3.3.4. I can't find a definition for changer's
 drives
  (IBM ULTRIUM-TD5 LTO-5) in /etc/amanda/template.d/tapetypes
 
  Can someone point me to a source for this, or to where I can learn how
 to
  determine the params I need for the drives? Thanks
 
  The command amtapetype should be in your amanda server installation.
  It can be used to determine the values for your site.
 
  The only tapetype parameter amanda actually uses is capacity.
  So you could hand create a definition and be close enough
  for government work.
 
  Jon

 Created on OmniOS v11 r151006:

 $ amtapetype -f -b 524288 -t ULT3580-TD5 /dev/rmt/0b 2>&1 | tee
 /etc/opt/csw/amanda/weekly/tapetype-512k-block
 Checking for FSF_AFTER_FILEMARK requirement
 Applying heuristic check for compression.
 Wrote random (uncompressible) data at 85721088 bytes/sec
 Wrote fixed (compressible) data at 295261525.33 bytes/sec
 Compression: enabled
 Writing one file to fill the volume.
 Wrote 1519480995840 bytes at 85837 kb/sec
 Got LEOM indication, so drive and kernel together support LEOM
 Writing smaller files (15194390528 bytes) to determine filemark.
 device-property FSF_AFTER_FILEMARK false
 define tapetype ULT3580-TD5 {
 comment Created by amtapetype; compression enabled
 length 1483868160 kbytes
 filemark 868 kbytes
 speed 85837 kps
 blocksize 512 kbytes
 }
 # for this drive and kernel, LEOM is supported; add
 #   device-property LEOM TRUE
 # for this device.

 I ran amtapetype for a number of different block sizes (32k, 256k, 512k
 and 2048k) but I found that
 block sizes over 512 kbytes gave me driver issues and no LEOM capability.
 My amdump reports show
 fairly reasonable tape streaming speeds:

 Avg Tp Write Rate (k/s)   143476 143842 5723.0

 If you have a different system just run amtapetype and be patient. It will
 be worth the wait.

 Regards,
 Tom

 Tom Robinson
 IT Manager/System Administrator

 MoTeC Pty Ltd

 121 Merrindale Drive
 Croydon South
 3136 Victoria
 Australia

 T: +61 3 9761 5050
 F: +61 3 9761 5051
 E: tom.robin...@motec.com.au






Re: 'labelstr' param in amanda.conf

2014-01-16 Thread Michael Stauffer
Thanks for the reply. I don't have multiple configurations, I'm just trying
to figure out how to set things up.
So it sounds like I can have a single configuration that uses the same
group of tapes as defined by 'labelstr', and I use the DLE lists to break
things up into manageable chunks. Then with amdump, I can specify the
individual DLE's to be dumped?

-M


On Thu, Jan 9, 2014 at 1:58 PM, Jean-Louis Martineau
martin...@zmanda.com wrote:

 On 01/09/2014 01:47 PM, Michael Stauffer wrote:


 Hi,

 I'm setting up amanda 3.3.4.

 Regarding 'labelstr' in amanda.conf:

 The documentation says: If multiple configurations are run from the same
 tape server host, it is helpful to set their labels to different strings
 (for example, DAILY[0-9][0-9]* vs. ARCHIVE[0-9][0-9]*) to avoid
 overwriting each other's tapes.

 Does this mean that if I have multiple configurations in order to break
 up a large 30TB data set into managable chunks, each configuration will
 have to have a particular set of tapes assigned to it? That seems very
 awkward if so.


 yes

 Why do you have multiple configurations?
 You should do it with one configuration and multiple dle.

 Jean-Louis



tapetype for IBM ULTRIUM-TD5?

2014-01-16 Thread Michael Stauffer
Hi,

I'm setting up amanda 3.3.4. I can't find a definition for changer's drives
(IBM ULTRIUM-TD5 LTO-5) in /etc/amanda/template.d/tapetypes

Can someone point me to a source for this, or to where I can learn how to
determine the params I need for the drives? Thanks

-M


amidxtaped running amok

2014-01-16 Thread Michael Stauffer
Hi,

I'm setting up amanda 3.3.4. My server is bogged down with 5 amidxtaped
processes taking up a big chunk of my cpu's, and just about all of my 32
gigs of RAM. I haven't been running anything for a while, as I'm slowly
working my way through the setup process.

Here's all the amandabackup proc's. Any ideas what may have happened or how
to investigate further? I haven't killed the procs yet since I don't need
the machine to be running properly right now.

[amandabackup@cback root]$ ps -u amandabackup
  PID TTY          TIME CMD
14215 ?        00:00:00 screen
14216 pts/2    00:00:00 bash
17927 ?        00:00:00 amandad
17929 ?        00:00:00 amandad <defunct>
17943 ?      37-02:16:31 amidxtaped
17944 ?        00:00:00 amandad <defunct>
18243 ?        00:00:00 amandad
18245 ?        00:00:00 amandad <defunct>
18256 ?      44-14:54:42 amidxtaped
18257 ?        00:00:00 amandad <defunct>
24530 pts/0    00:00:00 bash
24598 pts/0    00:00:00 ps
26646 ?        00:00:00 amandad
26648 ?        00:00:00 amandad <defunct>
26665 ?      32-16:04:27 amidxtaped
    2 ?        00:00:00 amandad <defunct>
27180 ?        00:00:00 amandad
27182 ?        00:00:00 amandad <defunct>
27189 ?      32-11:27:04 amidxtaped
27190 ?        00:00:00 amandad <defunct>
27279 ?        00:00:00 amandad
27281 ?        00:00:00 amandad <defunct>
27285 ?      32-11:25:30 amidxtaped
27286 ?        00:00:00 amandad <defunct>

Thanks!

-M


querying device-side compression setting

2014-01-09 Thread Michael Stauffer
Hi,


I'm setting up amanda 3.3.4 with a Quantum Scalar i500 tape library.


Regarding compression options, I'll go with the recommendation in the
documentation to let Amanda do the compression client-side.

How can I determine if hardware compression is enabled on my tape library?
I can't find anything about it in the library. Is this option controlled by
amanda or other mtx settings? Thanks.


-M


'labelstr' param in amanda.conf

2014-01-09 Thread Michael Stauffer
Hi,

I'm setting up amanda 3.3.4.

Regarding 'labelstr' in amanda.conf:

The documentation says: If multiple configurations are run from the same
tape server host, it is helpful to set their labels to different strings
(for example, DAILY[0-9][0-9]* vs. ARCHIVE[0-9][0-9]*) to avoid
overwriting each other's tapes.

Does this mean that if I have multiple configurations in order to break up
a large 30TB data set into managable chunks, each configuration will have
to have a particular set of tapes assigned to it? That seems very awkward
if so.

-M
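For completeness, if separate configurations were used, each amanda.conf
would carry its own pattern, e.g. (names hypothetical):

    labelstr "^DAILY-[0-9][0-9]*$"      # in the daily config
    labelstr "^ARCHIVE-[0-9][0-9]*$"    # in the archive config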


'changerfile' param for tape robots

2014-01-09 Thread Michael Stauffer
Hi,

I'm working on setting up a robot tape changer.
The docs for chg-robot:DEVICE say this: "the changerfile parameter can be
used to specify a filename at which it should store its state. Ordinarily,
this state is stored in a file named after the changer device under
$localstatedir/amanda, e.g., /var/amanda/chg-robot-dev-sg0. There should be
a single such statefile for each distinct tape library attached to the
Amanda server, even if multiple Amanda configurations reference that
library."

Is this saying I have to define 'changerfile', or that it will default to
placing it in $localstatedir/amanda?

Also, some other refs to this param on the web say that it's for a
configuration file, not a state file. Can I ignore those?

Thanks.

-M
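For reference, an explicit statefile can also be set inside the changer
definition; a sketch with assumed device paths:

    define changer robot {
        tpchanger "chg-robot:/dev/sg3"
        property "tape-device" "0=tape:/dev/nst0" "1=tape:/dev/nst1"
        changerfile "/etc/amanda/chg-robot-dev-sg3"
    }
    tpchanger "robot"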


Re: trouble with test restore

2013-12-16 Thread Michael Stauffer
Hi Everyone,

I've gotten my focus back to this and sorted out my problems.

1) 'amrecover' on clients was unable to connect to server.
Turns out the server's firewall was blocking amanda's ports. Easy fix.
Should have thought of this when the tutorial said to check the firewall on
the clients, which I did, but I forgot about the server (see the sketch at
the end of this message).

2) 'amrecover' hangs as it tries to load a tape after 'extract' command:
I looked at the amidxtaped log on the server, and saw that the tape device
name/file was not recognized/found. Turns out I had a different name
defined for 'tpchanger' on the server set configuration, than for the
clients' configuration. Changing the clients' configuration fixed this. I'm
curious about the parameter naming - in the
server's /etc/amanda/DailySet1/amanda.conf  file, I define

...
define changer my_vtapes {
tpchanger chg-disk:/amandatest/vtape/DailySet1
property num-slot 10
property auto-create-slot yes
}
tpchanger my_vtapes
...


And in the clients'  /etc/amanda/amanda-client.conf, it's

tapedev  chg-disk://amandatest/vtape/DailySet1  # your tape device


Just curious where 'tapedev' gets mapped to 'tpchanger'.

-M
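A minimal sketch of that server-side firewall fix, assuming bsdtcp auth on
the default amanda port (CentOS 6 / iptables; adjust for your tooling):

    # allow inbound connections to amandad on the backup server
    iptables -I INPUT -p tcp --dport 10080 -j ACCEPT
    service iptables save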


On Tue, Nov 26, 2013 at 9:04 AM, Debra S Baddorf badd...@fnal.gov wrote:

  Recovery from the server would work if it's the same type of node as the
 client (it has to be able to run the same version of system dump and be able
 to read the saved file). It would place the recovered files in your current
 directory on the server, not on the client.

  As for your client's new problem, I'm adding the Amanda-users back into
 this reply cuz I can't diagnose that from home. Hope someone else can step in
 here.

 Deb  (from phone)

 On Nov 25, 2013, at 9:08 PM, Michael Stauffer mgsta...@gmail.com
 wrote:

   Thanks Deb. I found an amrecover log but first this:

  I realized I've been trying to recover from the server (i.e. run
 amrecover on the server), and not the client as directed in the tutorial
 (but shouldn't recovery from the server work too?).

  Running amrecover on the client, I get this:

[root@cslim ~]# amrecover DailySet1
   AMRECOVER Version 3.3.4. Contacting server on cback.uphs.upenn.edu ...
   [request failed: No route to host]

  From the client (cslim) I can ping and ssh to the server (even
 passwordless ssh now). I'm using bsdtcp for auth - is there some other
 setup for that?

  Thanks.

  -M


 On Mon, Nov 25, 2013 at 6:16 PM, Debra S Baddorf badd...@fnal.gov wrote:

 
 
  Extracting files using tape drive cback://amandatest/vtape/DailySet1 on
 host localhost.
  Load tape DailySet1-3 now
  Continue [?/Y/n/s/d]? y
 
  [this is where it just hangs up]
 
  --
  Other info:
 
  /var/log/amanda/log.error/  is empty
 
   (As an aside, there is a large multitude of log files - is there an
  amanda command that parses them all, looks for errors, and reports them?)
 

  Have you tried
   $  grep  -ir  DailySet1-3  /tmp/amanda/

 (substitute if your logs aren't being sent to that directory)

 There should also be a file  *amrecover*YYYYMMDD*   though the hangup
 might be in a *taper*  file or something else like that.

 Those are some first steps, in case better info doesn't come soon!
 Deb Baddorf
 Fermilab





Fwd: trouble with test restore

2013-11-25 Thread Michael Stauffer
Hi, does anyone have thoughts on the issue below?

What's the proper way to clean my amanda setup so I can try the test
dump/restore from scratch?

Thanks

-- Forwarded message --
From: Michael Stauffer mgsta...@gmail.com
Date: Thu, Nov 14, 2013 at 11:29 AM
Subject: trouble with test restore
To: amanda-users@amanda.org


Hi,

I'm still working through the 15-minute Amanda tutorial. Maybe '15-hour'
would be a better title? ;-)

My first backup/dump failed b/c my test backup set was 6GB, larger than the
default 5GB tape/slot size for the test setup. I changed the tape/slot size
to 6.5GB and changed the config to allow 2 tapes per dump. Then I ran it
again, and the amreport was happy.

But then when I ran the recover, it hung during the 'Load tape' step (see
below, from 2nd attempt). So I tried amclean and there were no reported
issues with the log, and used amrmtape to remove the two tapes.

I reran the dump and again it reported no problems. But the recover is
hanging again. Here's the output. Does anyone have suggestions? Thanks!

[root@cback amanda]# amrecover DailySet1
AMRECOVER Version 3.3.4. Contacting server on localhost ...
220 cback AMANDA index server (3.3.4) ready.
Setting restore date to today (2013-11-13)
200 Working date set to 2013-11-13.
200 Config set to DailySet1.
501 Host cback.uphs.upenn.edu is not in your disklist.
Trying host cback.uphs.upenn.edu ...
501 Host cback.uphs.upenn.edu is not in your disklist.
Trying host cback.uphs.upenn.edu ...
501 Host cback.uphs.upenn.edu is not in your disklist.
Use the sethost command to choose a host to recover
amrecover sethost cslim.uphs.upenn.edu
200 Dump host set to cslim.uphs.upenn.edu.
amrecover listdisk
200- List of disk for host cslim.uphs.upenn.edu
201- /amandatestdata/
200 List of disk for host cslim.uphs.upenn.edu
amrecover setdisk /amandatestdata/
200 Disk set to /amandatestdata/.
amrecover ls
2013-11-13-15-15-01 otherfile
2013-11-13-15-15-01 helloworld.txt
2013-11-13-15-15-01 .
amrecover add *
Added file /otherfile
Added file /helloworld.txt
amrecover extract

Extracting files using tape drive cback://amandatest/vtape/DailySet1 on
host localhost.
The following tapes are needed: DailySet1-3

Extracting files using tape drive cback://amandatest/vtape/DailySet1 on
host localhost.
Load tape DailySet1-3 now
Continue [?/Y/n/s/d]? y

[this is where it just hangs up]

--
Other info:

/var/log/amanda/log.error/  is empty

(As an aside, there is a large multitude of log files - is there an amanda
command that parses them all, looks for errors, and reports them?)

[amandabackup@cback ~]$ ls -l /amandatest/vtape/DailySet1/slot3/
total 300
-rw---. 1 amandabackup disk  32768 Nov 13 15:15 0.DailySet1-3
-rw---. 1 amandabackup disk  33062 Nov 13 15:15
1.cslim.uphs.upenn.edu._amandatestdata_.0
-rw---. 1 amandabackup disk 234609 Nov 13 15:15
2.cfile.uphs.upenn.edu._jet_admin_temp_.0

[amandabackup@cback ~]$ amreport DailySet1
Hostname: cback.uphs.upenn.edu
Org : DailySet1
Config  : DailySet1
Date: November 13, 2013

These dumps were to tape DailySet1-3.
The next 2 tapes Amanda expects to use are: 2 new tapes.
The next 2 new tapes already labelled are: DailySet1-4, DailySet1-5


STATISTICS:
                           Total       Full      Incr.   Level:#
                         --------   --------   --------  --------
Estimate Time (hrs:min)     0:00
Run Time (hrs:min)          0:00
Dump Time (hrs:min)         0:00       0:00       0:00
Output Size (meg)            0.2        0.2        0.0
Original Size (meg)          1.3        1.3        0.0
Avg Compressed Size (%)     14.5       14.5         --
DLEs Dumped                    2          2          0
Avg Dump Rate (k/s)       1010.3     1010.3         --

Tape Time (hrs:min)         0:00       0:00       0:00
Tape Size (meg)              0.2        0.2        0.0
Tape Used (%)                0.0        0.0        0.0
DLEs Taped                     2          2          0
Parts Taped                    2          2          0
Avg Tp Write Rate (k/s)    985.0      985.0         --

USAGE BY TAPE:
  Label          Time   Size     %   DLEs  Parts
  DailySet1-3    0:00     0G   0.0      2      2

NOTES:
  planner: Adding new disk cslim.uphs.upenn.edu:/amandatestdata/.
  taper: Slot 2 is empty, autolabel not set
  taper: Slot 3 with label DailySet1-3 is usable
  taper: tape DailySet1-3 kb 197 fm 2 [OK]


DUMP SUMMARY:
                                                  DUMPER STATS              TAPER STATS
HOSTNAME             DISK             L ORIG-GB OUT-GB COMP%  MMM:SS    KB/s  MMM:SS    KB/s
-------------------- ---------------- - ------- ------ ----- ------- ------- ------- -------
cfile.uphs.upenn.edu /jet/admin/temp/ 0       0      0  14.6    0:00  1304.1    0:00  1970.0
cslim.uphs.upenn.edu /amandatestdata/ 0       0      0  10.0    0:00    22.5    0:00     0.0

(brought to you by Amanda version 3.3.4)
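On the log-parsing aside: there is no single built-in parse-everything
command; a blunt sweep over the usual locations does the job (paths vary
by build):

    grep -ril 'error\|failed' /tmp/amanda/ /var/log/amanda/ 2>/dev/null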


Trouble with passwordless ssh to client

2013-11-12 Thread Michael Stauffer
Hi,

I'm setting up Amanda 3.3.4 (CentOS 6.4), following the "Amanda in 15
Minutes" guide (btw, seems like a very optimistic title!).

I can't get passwordless ssh working between server and client with the
amandabackup user. I've followed the instructions in the doc which were to
manually copy the public key, and I've also generated new keys on the
server using ssh-keygen and copied them using ssh-copy-id onto the client.

It *does* work between these machines as user root, and between other
users, and between amandabackup on the server and another user on the
client.

I read online that someone thought the user on the login machine has to
have their home dir in /home (or /root, presumably, for root). What I see
so far suggests this might be right, as it works from the amandabackup user on
the server to another user on the client when the other user has their home
dir in /home. However, it also works if I create a user with a home dir in
/tmp.

I have the ownership and permissions set up correctly for
/var/lib/amandabackup/.ssh and its files.

Has anyone else seen this issue, or have any ideas?

Thanks

-M
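For anyone else chasing this, a few checks that usually narrow it down
(none are amanda-specific; the SELinux one in particular would explain a
home dir outside /home failing on CentOS):

    ssh -vvv amandabackup@client.example.com   # which keys are offered/refused
    ls -ld /var/lib/amandabackup /var/lib/amandabackup/.ssh
        # sshd ignores keys if home or .ssh is group/world writable
    tail -f /var/log/secure                    # on the client, during an attempt
    restorecon -R -v /var/lib/amandabackup/.ssh
        # restore SELinux contexts on a non-/home home dir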


Re: Potential user - more questions

2013-10-10 Thread Michael Stauffer
Olivier and Jon, thanks for the helpful answers.
I'm going to set up my redeployed backup system with Amanda. It seems enough
easier than Bacula to make the switch worthwhile, and I
especially like the simple format of the dump files and the simple text
indices for cataloging backups.

I'm sure you'll hear from me more while I get things going!

-M


On Thu, Oct 10, 2013 at 12:45 AM, Jon LaBadie j...@jgcomp.com wrote:

 On Wed, Oct 09, 2013 at 06:27:48PM -0400, Michael Stauffer wrote:
  Hi again,
 
  I've got another batch of questions while I consider switching to Amanda:
 
   1) catalog (indices)
   It seems the main catalog/database is stored in the index files. Is it
   straightforward to back these up?
   This doc (http://www.zmanda.com/protecting-amanda-server.html) suggests
  backing up these dirs/files to be able to restore an amanda
  configuration (and presumably the backup catalog): /etc/amandates,
  /etc/dumpdates, /etc/amanda, /var/lib/amanda.

 There is no built-in way to do this in amanda.  The problem is that they
 are not complete, and keep changing, until the backup is done.  Several
 members of this list have described their home-grown techniques.
 
  2) Spanning and parts
  Say I split my 32TB of data into DLEs of 2-3TB.
 
  a) If I set a 'part' size of 150GB (10% of native tape capacity is
  what I saw recommended), what is the format of each part as it's
  written? Is each part its own tarfile? Seems that would make it easier
  to restore things manually.

 Traditional amanda tape files, holding the complete tar or dump archive,
 are a 32KB header followed by the archive.  Manual restoration is done
 with dd to skip the header and pipe the rest to the appropriate command
 line to restore the data.

 The header contains information identifying the contents, how they
 were created, and when.
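In shell terms, the manual restoration Jon describes comes down to
something like this for a compressed GNU tar dump (device and block size
are assumptions; drop the gzip stage for uncompressed dumps):

    dd if=/dev/nst0 bs=32k skip=1 | gzip -dc | tar -xvf -

The skip=1 discards the 32KB header described above.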

 Parts alter this scheme only slightly.  Each part still has a header.
 The header now includes info on which sequential part it is.  The part
 name also identifies it location in the sequence.  The data is simply
 a chunk of the complete archive.  Manual restoration again is strip
 the headers and pipe to the restore command.

 
  b) If a part spans two volumes, what's the format of that? Is it a
  single tarfile that's split in two?

 A part will NOT span two volumes.  If the end of the media is reached,
 the part is restarted on the next volume.

 
  c) What's the manual restore process for such a spanned part? cat the
  two parts together and pipe to tar for extraction?
 
  3) Restoring w/out Amanda
  I thought data was written to tape as tar files. But this page
  suggests a dumpfile is only readable by Amanda apps. Is a dumpfile
  something else?
  http://wiki.zmanda.com/index.php/Dumpfile

 I think the author meant there are no standard unix/linux commands
 that know the header + data layout.  The dumpfiles can be handled
 with amanda commands or as described above, the operator can use
 standard commands when armed with knowledge of the layout.

 
  4) holding disk and flushing
  I see how flushing can be forced when the holding disk has a certain %
  of tape size.
  Can a flush be forced every N days? The idea here would be to get data
  to tape at a min of every week or so, should successive incrementals
  be small.

 Dumping to holding disk without taping can be done.  Then have a
 crontable entry to flush when you want.  This can done with a
 separate amflush command, or by varying amdump options.
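A sketch of that cron approach (schedule, path and config name are
hypothetical; -b makes amflush run without prompting):

    # amandabackup's crontab: flush held dumps to tape every Friday, 22:00
    0 22 * * 5 /usr/sbin/amflush -b DailySet1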
 
  5) alerting
  Is there a provision for email and/or other alerts on job completion
  or error, etc?
 
 Most amanda admins have an amreport emailed to them at amdump or amflush
 completion.  As the cron entry can be a shell script, you could
 customize greatly.

 Jon
 --
 Jon H. LaBadie j...@jgcomp.com
  11226 South Shore Rd.  (703) 787-0688 (H)
  Reston, VA  20190  (609) 477-8330 (C)



Re: Considering Amanda - some questions

2013-10-09 Thread Michael Stauffer
Paul, thanks much for the replies, this is very helpful. I'll ask some
more questions in another thread.
BTW does this list prefer top- or bottom-posting, or no preference?

-M

On Fri, Oct 4, 2013 at 7:08 PM, Paul Yeatman pyeat...@zmanda.com wrote:
 On Fri, 2013-10-04 at 17:23 -0400, Michael Stauffer wrote:
 Hi,


 Hello!

 I'm thinking about switching to Amanda. I inherited a Bacula-based
 backup system (old version 3) and the server's drives are failing so
 I'm going to deploy a new server and am considering switching to
 Amanda at the same time.

 My setup: I need to back up 2 linux file servers, each hosting a
 RAID, sizes ~36TB and ~28TB. The RAIDs aren't full yet but let's assume
 we'll get close to full eventually. I have a 2-drive (LTO-5), 30-tape
 Quantum tape library. In addition, I'll probably back up the /etc dirs
 of a few servers.


 Your setup would be fairly straightforward in Amanda.

 From what I've read, Amanda could be a better choice for me than
 Bacula since mine is a pretty straightforward setup. Does that seem
 right? Bacula's been something of a beast for me, especially when I
 had a catalog meltdown and had to restore the catalog piecemeal from
 tape.

 I have a number of questions. I'll post some here, and then some in
 followup posts to keep it manageable. Thanks for any help!

 1) I'm curious about how often amanda is updated, approximately. I see
 the current version was released June 2013. How about the previous
 release of 3.x?


 Recently, it has been about every 6 months.  3.3.2 was released July
 2012, with 3.3.3 released Jan 2013, followed by 3.3.4 last June/July.

 2) Is it clear whether it's better to use the amanda.org mailing lists
 or the forums on zmanda.com? The mailing lists seem significantly
 slower than Bacula's. Is Amanda less widely used? Or maybe that's
 because there are fewer problems with Amanda? :)


 Both mailing lists and forums are regularly watched so it is your pick
 which you prefer working with.  There is a large Amanda users base but I
 cannot compare with Bacula and its mailing list.  Hopefully it is just
 due to fewer problems in Amanda :-)

 3) Are multi-drive tape changers directly supported? If I run amanda
 and multiple clients need backing up, amanda will use both tape drives
 simultaneously?


 Yes.  If there are multiple objects to back up and Amanda estimates that
 it will ultimately need to write all the data to more than 1 tape, it
 will load up to this number of tapes into drives and begin writing data
 to multiple drives simultaneously.

 4) Any way to find out if my Quantum Scalar I-500 tape library is
 supported? The link on the site regarding supported tape devices is
 dead. It seems Amanda uses low-level tape commands, so it shouldn't be
 an issue? Anything to test its compatibility?


 Amanda depends on the system to correctly recognize the robot and tape
 devices and the UNIX mt and mtx commands to work with the tape drives
 and robot respectively.  If Quantum claims to support the OS you plan to
 use as the backup server and this library can be operated with the mt
 and mtx command, Amanda should be able to work with the library
 correctly.
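A quick hands-on check along those lines (the sg and tape device names are
guesses; lsscsi -g shows yours):

    mtx -f /dev/sg3 status        # library reports its slots and drives?
    mtx -f /dev/sg3 load 1 0      # move the tape in slot 1 into drive 0
    mt -f /dev/nst0 status        # drive answers?
    mtx -f /dev/sg3 unload 1 0    # put it back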

 5) Many links on the FAQ and wiki pages are dead. Is that an ongoing
 issue or just temporary? It doesn't bode too well for the
 documentation.


 I am seeing that several of the FAQ links do not appear to be going
 where they should be.  I did a quick fix on these.  I am not immediately
 finding such links to be incorrect on the wiki in general, however.  Feel
 free to point out any that I am missing and I will look into them.

 Paul



Potential user - more questions

2013-10-09 Thread Michael Stauffer
Hi again,

I've got another batch of questions while I consider switching to Amanda:

1) catalog (indices)
It seems the main catalog/database is stored in the index files. Is it
straightforward to back these up?
This doc (http://www.zmanda.com/protecting-amanda-server.html) suggests
backing up these dirs/files to be able to restore an amanda
configuration (and presumably the backup catalog): /etc/amandates,
/etc/dumpdates, /etc/amanda, /var/lib/amanda.

2) Spanning and parts
Say I split my 32TB of data into DLEs of 2-3TB.

a) If I set a 'part' size of 150GB (10% of native tape capacity is
what I saw recommended), what is the format of each part as it's
written? Is each part its own tarfile? Seems that would make it easier
to restore things manually.

b) If a part spans two volumes, what's the format of that? Is it a
single tarfile that's split in two?

c) What's the manual restore process for such a spanned part? cat the
two parts together and pipe to tar for extraction?

3) Restoring w/out Amanda
I thought data was written to tape as tar files. But this page
suggests a dumpfile is only readable by Amanda apps. Is a dumpfile
something else?
http://wiki.zmanda.com/index.php/Dumpfile

4) holding disk and flushing
I see how flushing can be forced when the holding disk has a certain %
of tape size.
Can a flush be forced every N days? The idea here would be to get data
to tape at a min of every week or so, should successive incrementals
be small.

5) alerting
Is there a provision for email and/or other alerts on job completion
or error, etc?

Thanks!

-M


Considering Amanda - some questions

2013-10-04 Thread Michael Stauffer
Hi,

I'm thinking about switching to Amanda. I inherited a Bacula-based
backup system (old version 3) and the server's drives are failing so
I'm going to deploy a new server and am considering switching to
Amanda at the same time.

My setup: I need to back up 2 linux file servers, each hosting a
RAID, sizes ~36TB and ~28TB. The RAIDs aren't full yet but let's assume
we'll get close to full eventually. I have a 2-drive (LTO-5), 30-tape
Quantum tape library. In addition, I'll probably back up the /etc dirs
of a few servers.

From what I've read, Amanda could be a better choice for me than
Bacula since mine is a pretty straightforward setup. Does that seem
right? Bacula's been something of a beast for me, especially when I
had a catalog meltdown and had to restore the catalog piecemeal from
tape.

I have a number of questions. I'll post some here, and then some in
followup posts to keep it manageable. Thanks for any help!

1) I'm curious about how often amanda is updated, approximately. I see
the current version was released June 2013. How about the previous
release of 3.x?

2) Is it clear whether it's better to use the amanda.org mailing lists
or the forums on zmanda.com? The mailing lists seem significantly
slower than Bacula's. Is Amanda less widely used? Or maybe that's
because there are fewer problems with Amanda? :)

3) Are multi-drive tape changers directly supported? If I run amanda
and multiple clients need backing up, amanda will use both tape drives
simultaneously?

4) Any way to find out if my Quantum Scalar I-500 tape library is
supported? The link on the site regarding supported tape devices is
dead. It seems Amanda uses low-level tape commands, so it shouldn't be
an issue? Anything to test its compatibility?

5) Many links on the FAQ and wiki pages are dead. Is that an ongoing
issue or just temporary? It doesn't bode too well for the
documentation.

Thanks!

-Michael