I have a machine on which we replaced the bash shell, which is
used by some of the amanda scripts.
We can add LD_LIBRARY_PATH to the .cshrc and run interactive
commands like # amcheck -t, but the daemon is not finding the
library, so we have failures in the nightly run.
Is there a way to set
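One approach, assuming amandad is started from inetd/xinetd: point the daemon entry at a small wrapper that exports the variable before exec'ing the real binary. The paths below are assumptions, not taken from the original post.

```shell
# Hypothetical wrapper; adjust paths for your install, then point
# inetd/xinetd at the wrapper instead of amandad directly.
cat > /tmp/amandad-wrapper <<'EOF'
#!/bin/sh
# make the relocated libraries visible to the daemon
LD_LIBRARY_PATH=/usr/local/lib/amanda
export LD_LIBRARY_PATH
exec /usr/local/libexec/amanda/amandad "$@"
EOF
chmod +x /tmp/amandad-wrapper
```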
On Thu, Sep 11, 2014 at 05:29:08PM -0400, Gene Heskett wrote:
On Thursday 11 September 2014 13:19:14 Debra S Baddorf did opine
And Gene did reply:
I agree. I always build amanda myself. [ My package seems okay when
I have ROOT build it and install it.
I don't think I've changed
The more you customize the greater the chance you will have a
problem unraveling come update time.
I'd suppose, if you want an unsupported solution, that you
could replace the binary at the original path with a # ln link.
On Wed, Sep 10, 2014 at 11:29:27AM -0500, Jason L Tibbitts III wrote:
JM
On Fri, Jul 25, 2014 at 04:15:43PM -0400, Gene Heskett wrote:
On Friday 25 July 2014 15:26:00 Debra S Baddorf did opine
And Gene did reply:
I just create two DLEs
mynode / backuptype
mynode /boot backuptype
Then amdump myconfig ( or amdump myconfig
On Wed, Mar 12, 2014 at 01:11:03PM -0400, Brian Cuttler wrote:
Don't know if it's relevant, but I've got an LTO5/juke and it
dropped in both speed and capacity. I'm now trying to remember
if I had the host ESAS card replaced, or the I/O module in the
juke...
I'd found that reseating the esas
On Wed, Mar 12, 2014 at 12:28:33PM -0400, Jon LaBadie wrote:
On Wed, Mar 12, 2014 at 02:38:48PM
I believe amanda TEEs the output of the dump, spooling it to
the holding area and also reading the dump/tar file to create
an index. I'm uncertain when (as created or after the DLE is
fully spooled) the index file is placed in the server side
tree for storage.
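The tee described above can be sketched in shell. This is a toy illustration of the shape of the flow, not Amanda's actual code; all paths are made up.

```shell
# Stream a tar archive into a "holding" file while simultaneously
# producing a gzipped index of its contents.
mkdir -p /tmp/teedemo/src
echo hello > /tmp/teedemo/src/a.txt
tar -cf - -C /tmp/teedemo src \
  | tee /tmp/teedemo/holding.tar \
  | tar -tf - | gzip > /tmp/teedemo/index.gz
```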
On Wed, Mar 05, 2014 at 09:20:56PM
On Thu, Feb 27, 2014 at 10:20:29PM +, Debra S Baddorf wrote:
I believe the whole dump has to be done before it starts to write to tape.
This prevents incomplete dumps from wasting space on the tape.
I try to have numerous smaller DLEs, so that it takes several DLEs to fill a
tape.
I don't believe amanda attempts to write to the holding disk.
Who owns it?
On Thu, Feb 27, 2014 at 03:09:20PM -0500, Michael Stauffer wrote:
Amanda 3.3.4
Hi,
Seems like I'm having trouble getting amanda to use my holding disk.
Here's my setup in amanda.conf:
define holdingdisk
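For comparison, a complete block of that form (a sketch only; the name, directory, and sizes are placeholders):

```
define holdingdisk hd1 {
    directory "/dumps/amanda"
    use 200 Gb
    chunksize 1 Gb
}
```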
On Tue, Feb 04, 2014 at 07:24:38AM -0800, Hugh E Cruickshank wrote:
Hi All:
We have just setup a monthly backup set to augment our daily and weekly
sets. I would like to start a yearly permanent archive tape policy. My
initial thoughts were to take the January monthly tape out of the
Hugh,
Glad to have been of help.
Others may weigh in with alternate strategies but it sounds
as if you have a satisfactory plan for now.
Good luck,
Brian
On Tue, Feb 04, 2014 at 09:47:18AM -0800, Hugh E Cruickshank wrote:
From: Brian Cuttler Sent: February 4, 2014 07:42
Personally I
, Jan 08, 2014 at 10:06:54AM -0500, Jean-Louis Martineau wrote:
Brian,
Maybe you defined the tape larger than it is?
Are you sure the tape can hold 80M of data?
Are you using hardware compression?
Jean-Louis
On 01/08/2014 09:13 AM, Brian Cuttler wrote:
I'm not sure I understand
...
I think I need to run a cleaning tape and keep an eye on this.
thank you,
Brian
On Wed, Jan 08, 2014 at 10:57:22AM -0500, Jean-Louis Martineau wrote:
On 01/08/2014 10:47 AM, Brian Cuttler wrote:
Jean
On Wed, Jan 08, 2014 at 11:18:46AM -0500, Jean-Francois Malouin wrote:
* Brian Cuttler br...@wadsworth.org [20140108 11:09]:
Jean-Louis,
googled it... I had my numbers wrong, it can supposedly hold
800 Gig, and with HW compression up to 2x that. I should not
have to adjust the tape
I'd thought you could only have one PTR for any given IP,
and that while long ago you could have two A records, that
was no longer acceptable and it was recommended to have a
single A record and a CNAME (as you said).
This differs from having multiple IP addresses defined by
more than one A
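In zone-file terms, the two shapes being contrasted look roughly like this (a sketch; the names and addresses are placeholders, the first taken from elsewhere in the thread):

```
; recommended: a single A record plus a CNAME
curie       IN A     10.50.156.66
www         IN CNAME curie

; multi-homed host: more than one A record on the same name
multi       IN A     10.50.156.66
multi       IN A     10.50.157.66
```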
My version of amanda is a little older, but I don't believe I've
seen anything on rejecting a DLE because of size.
I have a user that just grew their directory/files and is now sitting
on close to 2 TBytes of data.
Amanda discovered this, determined it didn't have sufficient work area
and began
I'll have to look at the amanda-devices page for the exceptions.
But I can tell you that while amanda is skipping some level 1
dumps on me, because of calculated size, and I am using vtapes.
Amanda is telling me that it's filled the tape and then keeping
things in the holding area.
I suspect
Amanda users,
One of my amanda servers uses Vtapes, and has multiple zpools
(ZFS filesystem) assigned to it. Vtapes have been configured
at 1.8 Tbytes, which is a value that seems to be insufficient.
At least based on dump estimates.
I seem to be sitting at level 1 dumps for several nights, and
, remains
to be seen...
Thank you,
Brian
On Fri, Sep 13, 2013 at 12:02:13PM -0700, Chris Buxton wrote:
On Sep 11, 2013, at 8:11 AM, Brian Cuttler br...@wadsworth.org wrote:
We have remapped some of our DNS clients to point to another
DNS resolver, one that we do not control, but that has
the request.
Registering Amanda server by IP address on the Windows client will likely
help.
thanks
Paddy
On Wed, Sep 11, 2013 at 8:11 AM, Brian Cuttler br...@wadsworth.org wrote:
Cross posting to both Amanda users and bind users lists.
We have remapped some of our DNS clients to point
I do that sort of thing a lot.
finsen /export/home-AZ /export/home {
user-tar2
include ./[A-Z]*
}
trel /trelRZ /trel {
comp-server-user-tar
include ./[R-Z]*
}
you have the case correct? The dot between the first letter
and the wild card is
On Wed, Jul 31, 2013 at 01:43:50PM -0400, Mike Neimoyer wrote:
Thanks to Gerrit, Brian and Jean-Louis. I'll be responding to them in
this message
Hello Gerrit, thanks for chiming in!! include ./[a-c].*
Maybe you should first try an existing pathname (so no wildcard), just
to be sure
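A quick way to settle the case question outside amanda (hypothetical directory): let the shell expand the include pattern and see what actually matches.

```shell
# Create one upper-case and one lower-case entry, then expand the glob.
mkdir -p /tmp/globtest/Alpha /tmp/globtest/agate
( cd /tmp/globtest && ls -d ./[A-Z]* )
```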
Karsten,
Do you have indexing enabled? The tee that runs # tar -tf
also seems to pipe the output through gzip to produce a
listing file that is .gz
On Thu, Jul 11, 2013 at 01:02:42AM +0200, Karsten Fuhrmann wrote:
Hello,
in my process list on the amanda server i see one gzip process per
), and it seems that it breaks the work
flow that I am used to. Still, I will need to straighten out what it is
doing with regard to tape drive and index server.
On 6/17/13 2:06 PM, Brian Cuttler wrote:
Chris,
You are using the amanda client/server and issuing the restore
from the E250
On Mon, Jun 17, 2013 at 03:00:44PM -0400, Chris Hoogendyk wrote:
hmm, didn't specify either of those in configure when I built Amanda 3.3.3.
However, now that you mention amanda-client.conf, I went and looked for
that. Turns out they were both specified there (I originally rsynced
Installed Amanda 3.3.0 on solaris 10/x86 two days ago.
Have found that both amdumps since did not complete normally.
While completing all DLE and sending the report.
amstatus finsen
Using /usr/local/etc/amanda/finsen/DailySet1/amdump
From Thu Jun 6 18:30:00 EDT 2013
finsen:/
Jean-Louis,
added a couple of switches to # ls, got a much more informative output.
[finsen]: /proc/734/fd ls -F -C /proc/10832/fd
0= 1= 10 12| 13| 16| 17| 2= 20| 21| 3 6| 8|
On Thu, Jun 06, 2013 at 11:09:20AM -0400, Jean-Louis Martineau wrote:
On 06/05/2013 11:54 AM,
.
thank you,
Brian
On Wed, Jun 05, 2013 at 01:41:16PM -0400, Brian Cuttler wrote:
Jean-Louis,
Yes, I did find some information on a run time mechanism to
increase the 256 file limit (file limit stored in unsigned
Hello amanda users,
I just updated amanda 3.3.0 to 3.3.0 on a Solaris 10/x86 system.
The system is both the server and the client, there are no other
clients of this system.
We have ~265 DLEs on this system (large zfs arrays and all
samba shares are their own file systems and DLE, thank
On 06/05/2013 11:05 AM, Brian Cuttler wrote:
wrote:
On 06/05/2013 01:41 PM, Brian Cuttler wrote:
Jean-Louis,
Yes, I did find some information on a run time mechanism to
increase the 256 file limit (file limit stored in unsigned character).
The work-around employed requires the execution of
/usr/lib/extendedFILE.so.1
prior
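A sketch of that preload work-around (Solaris-specific; the wrapper file name is made up, the interposer path is the one from the thread):

```shell
# Wrap the 32-bit daemon so the extended-FILE interposer is preloaded,
# letting stdio use file descriptors above 255.
cat > /tmp/amandad-extfile <<'EOF'
#!/bin/sh
LD_PRELOAD_32=/usr/lib/extendedFILE.so.1
export LD_PRELOAD_32
exec /usr/local/libexec/amanda/amandad "$@"
EOF
chmod +x /tmp/amandad-extfile
```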
Robert,
On Thu, May 23, 2013 at 10:04:29AM -0600, Charles Curley wrote:
On Thu, 23 May 2013 13:59:33 +
McGraw, Robert P rmcg...@purdue.edu wrote:
Why does amanda stop at 52% when I still have 1.5TB of data in the
holding disk to write to the tape? It is hard to believe that the
Hello Amanda users,
Yesterday I upgraded a client on Solaris 10/Sparc from 2.4.2p2,
which was working great (except perhaps for not being built with GTAR
in mind, in, I think, 2001!) to amanda 3.3.3.
Found the following in the client side debug file.
Mon May 20 18:36:42 2013: thd-2a4b8: amgtar:
:39AM -0400, Jean-Louis Martineau wrote:
On 05/21/2013 09:47 AM, Brian Cuttler wrote:
Hello Amanda users,
Yesterday I upgraded a client on Solaris 10/Sparc from 2.4.2p2,
which was working great (except perhaps for not being built with GTAR
in mind, in, I think, 2001!) to amanda 3.3.3.
Found
Hello amanda users,
And the great wheel turns again... I've been here before, but
apparently fell back to 2.6.1 and upd from a failed 3.3.0 install.
I am running amanda 3.1.2 on Solaris 10/x86 as a server and
am in the process of upgrading the amanda client on solaris 10/Sparc
from 2.6.1 to
, perhaps, looking in the wrong directory, any
debug files.
Jean-Louis
On 05/16/2013 02:06 PM, Brian Cuttler wrote:
Hello amanda users,
And the great wheel turns again... I've been here before, but
apparently fell back to 2.6.1 and upd from a failed 3.3.0 install.
I am running
/tmp/amanda/amandad - which was created by the manual run
of amandad, no new files.
On Thu, May 16, 2013 at 02:50:37PM -0400, Jean-Louis Martineau wrote:
On 05/16/2013 02:46 PM, Brian Cuttler wrote:
But I'm not seeing, perhaps, looking in the wrong directory, any
debug files
On Thu, May 16, 2013 at 03:02:53PM -0400, Jean-Louis Martineau wrote:
On 05/16/2013 02:53 PM, Brian Cuttler wrote:
/tmp/amanda/amandad - which was created by the manual run
of amandad, no new files.
If telnet does not create a new debug file, it is because amandad is not
executed, which means
often hang after job completion,
causing problems dumping the 'pending' DLE the next day.
We'll see how the client behaves this evening.
On Thu, May 16, 2013 at 03:11:33PM -0400, Brian Cuttler wrote:
On Thu, May 16, 2013 at 03:02:53PM -0400, Jean-Louis Martineau wrote:
On 05/16/2013 02:53 PM
I'd thought for both ZFS and the newer (LTO) tape drives
that HW compression was determined on a block-by-block
basis (if enabled) so that expansion of data would not occur.
Granted, this does nothing to help with CPU usage, but I'd
thought it did save, rather, preserve, storage volume.
On
Alright, _now_ I'm ready to solve the real problem...
Server is Solaris 10 x86, Amanda server 3.1.2.
Client, a CentOS 5x box has had amanda 2.5 removed
in favor of 3.3.0-1.
I've finally figured out the dumptype/auth/protocol, amcheck
is running properly.
From the amanda debug file, I find
Jean-Louis,
On Thu, Apr 25, 2013 at 11:03:50AM -0400, Jean-Louis Martineau wrote:
On 04/25/2013 10:39 AM, Brian Cuttler wrote:
Alright, _now_ I'm ready to solve the real problem...
Server is Solaris 10 x86, Amanda server 3.1.2.
Client, a CentOS 5x box has had amanda 2.5 removed
Amanda users,
Server is Solaris x86, with amanda 3.1.2, locally built
Client is CentOS 5.9 with amanda 2.5.0p2, package
I know I must have restricted port ranges on the server because
I pass through a firewall and have ipf.conf settings on a client
on the far side set to
(edited for line
at 01:28:02PM -0700, Jean-Louis Martineau wrote:
On 04/12/2013 11:52 AM, Brian Cuttler wrote:
amandad: try_socksize: send buffer size is 65536
amandad: try_socksize: receive buffer size is 65536
amandad: time 3.128: bind_portrange2: trying port=831
amandad: time 3.129: stream_server: waiting
Hi Amanda users,
I'm running Amanda 3.1.2 on Solaris x86 and I'm trying to add
several linux clients. Linux version varies but the problems
are all similar, so I will select a specific instance.
amanda client
-
cat: /etc/lsb-release.d: Is a directory
CentOS release 5.9 (Final)
. Just so it's
not lost if we need to refer back to it later on.
thank you/good weekend,
Brian
On Fri, Apr 12, 2013 at 01:28:02PM -0700, Jean-Louis Martineau wrote:
On 04/12/2013 11:52 AM, Brian Cuttler
Chris,
I don't know what tif files look like internally, don't know how
they compress.
Just out of left field... does your zpool have compression
enabled? I realize zfs will compress or not on a per-block
basis, but I don't know what if any overhead is being incurred,
if the tif files are not
into multiple DLE's, but Amanda will still need estimates of all the pieces.
Or is it something entirely different? And, if so, how should I go about
looking for it?
On 4/3/13 1:14 PM, Brian Cuttler wrote:
Chris,
for larger file systems I've moved to server estimate, less
accurate
said, everything else runs without trouble, including DLE's that
are different zfs filesystems on the same zpool.
On 4/4/13 9:39 AM, Brian Cuttler wrote:
Chris,
sorry for the email trouble, this is a new phenomenon and I
don't know what is causing it, if you can identify the bad
header please let
Chris,
for larger file systems I've moved to server estimate, less
accurate but takes the entire estimate phase out of the equation.
We have had a lot of success with pigz rather than regular
gzip, as it'll take advantage of the multiple CPUs and give
parallelization during compression, which
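One hedged way to wire pigz in without touching the gzip binary, assuming the 3.x custom-compress dumptype options (the dumptype name and pigz path are placeholders):

```
define dumptype comp-pigz {
    global
    compress client custom
    client-custom-compress "/usr/local/bin/pigz"
}
```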
Erik,
Yes - if you have enough disk to configure virtual tapes you
can backup without physical tape. The docs are more than sufficient
and numerous people on the amanda list have configured their amanda
servers this way, myself included.
On Tue, Mar 26, 2013 at 09:08:23PM +0100, Erik P. Olsen
Amit,
Did I understand you to say that you are not using an amanda
work area, an area on the server for temporary files?
Brian
On Fri, Mar 15, 2013 at 08:15:38AM -0400, Jean-Louis Martineau wrote:
On 03/15/2013 12:11 AM, Amit Karpe wrote:
I was not able to observe parallel processing. I
Amit,
I don't think you told us how many client systems, compression
can be done on the client or the server. Also, besides the inparallel
and maxdump settings, are you short on work area - as Jean-Louis
said, the amplot output will help you spot those bottlenecks.
Brian
On Thu, Mar 14, 2013
amanda 3.1.2, solaris x86 server, solaris x86 client, client != server
client successfully backing up.
We moved a zpool from another machine and imported several
non-global zones onto the client.
Underlying mount points seem to be protection 700, we now see the
following error from amcheck.
Hi Amanda users,
Think I saw a reference to this at some point, but not finding
what I'm looking for on the wiki or in google (I may not be doing
the right search).
I'm being asked by the admin of a specific machine if I can dump
the DLEs on that client later in the evening, they need to do some
this in a new definition for this client's DLE and see
how it works for us.
thank you,
Brian
On Fri, Jan 25, 2013 at 11:08:41AM -0500, Jean-Louis Martineau wrote:
On 01/25/2013 10:12 AM, Brian Cuttler wrote
Olivier,
Did some parameter in amanda.conf get reset?
Where is the failure occurring? Estimate phase (etimeout)?
Is the error in a consistent place?
Was there a change to the version of gtar being used? Is there
an incompatibility with gtar and amanda version that is only
catching on large (or
Gour,
On Sat, Oct 27, 2012 at 10:24:04AM +0200, Gour wrote:
On Fri, 26 Oct 2012 11:40:20 -0400
Brian Cuttler br...@wadsworth.org wrote:
That is odd, and not reflective of our output for ZFS file systems
on a Solaris box.
Hmm..
Clearly the script will not work for you as intended
Gour,
That is odd, and not reflective of our output for ZFS file systems
on a Solaris box.
NAME             USED  AVAIL  REFER  MOUNTPOINT
Main            70.4G  63.5G  32.5K  /Main
Main/ROOT       47.0G  63.5G    20K  /Main/ROOT
Main/ROOT/Main  47.0G  63.5G  47.0G  /
Main/dump
Granted - I'm running versions of Amanda that are a little older,
vary from 2.6.x through 3.0.x, the enhancement I'm asking about
may already exist.
I'm thinking of a 'dirty bit' ie, has the tape actually been used.
We have situations where a DLE dumps to holding area, or even to
tape, where
Andreas,
Please post your solution to the list, I've now got a couple of
zpools, and it would be helpful if I was able to use a 'tape changer'
to find different vtapes on different pools.
This would be better, I think, than linking directories across, and
because of the different disk
Ok, here is a crazy question.
We were talking about backups in the office, and saving space
on our Vtapes. And with zfs snapshots taken nightly and kept for
a week, I suggested that we really only needed amanda to back up
things that were more than a week old, that if Amanda had weekly
level 0
Just to add to a discussion from about a year ago.
gtar 1.23 works fine on ZFS file systems (Solaris 10x86 box)
for user-tar backups, but fails (error 11) for snapshots.
I have no idea what the difference is, you'd think backing
up a zfs partition was the same as backing up a zfs partition,
but
Not a matter of empty; it has a default value, but I forget if
it's a reserve for degraded mode or a reserve for the nightly
run, one being the inverse of the other. The value is a percentage
of the holding area, I just forget if you want to shoot for 0% or
100%, read carefully.
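For reference, my reading of the amanda.conf manpage is that `reserve` is the percentage of holding-disk space kept back for degraded-mode (no-tape) incrementals, with a default of 100. A sketch (the number is a placeholder):

```
# amanda.conf: keep 30% of the holding disk for degraded-mode dumps
reserve 30
```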
Thanks!
On 2012-08-01 13:24, Brian Cuttler
Matt,
My guess is that with --no-taper you are writing to the reserve
area as if the tape failed and you had gone into degraded mode.
In that mode you do in fact perform incrementals because without
the tape being available you are trying to conserve your work area.
Not sure what the goal is
Amanda users,
We are running a Solaris 9/sparc cluster with amanda 2.4.4p2.
We have been backing up a DLE using the HW name, rather than the
mount name, on a specific cluster member.
disklist
bionsc1 /dev/md/bg-schost-1/rdsk/d311 comp-user
The volume no longer compresses enough to
,
Brian
On Fri, Jun 29, 2012 at 09:53:47AM -0400, Brian Cuttler wrote:
Amanda Users,
I've been successfully backing up amanda client on Suse with
amanda server on Suse, but want to migrate the client to another
amanda server that is a newer version.
I have
Dan,
This has been a problem with our Amanda 3.3.0 on Solaris 10x86.
There are currently 241 DLE and the 'problem' one, the one that
doesn't destroy the snapshot seems to move down the list as we
add new DLE's. It does not seem to be related to actual use of
the DLE not destroyed.
I will take
?
thanks,
Brian
On Fri, May 25, 2012 at 12:27:13PM -0400, Jean-Louis Martineau wrote:
On 05/25/2012 11:39 AM, Brian Cuttler wrote:
New installation
Amanda 3.3.0 installed on openSUSE 12.1 (Asparagus
On Fri, May 25, 2012 at 01:06:45PM -0400, Brian Cuttler wrote:
Charles,
Jean-Louis,
I did include the amgetconf reserved-tcp-port info in my email,
I get the same results you do.
I am logged in as root when running amcheck.
I was afraid that was an IPv6 error, was running it out
Thank you - Amcheck running now, client==server.
Will test and then add other clients.
thank you - Brian
On Fri, May 25, 2012 at 02:07:49PM -0400, Jean-Louis Martineau wrote:
On 05/25/2012 02:02 PM, Brian Cuttler wrote:
On Fri, May 25, 2012 at 01:06:45PM -0400, Brian Cuttler wrote:
Charles
On Fri, May 25, 2012 at 02:28:06PM -0400, gene heskett wrote:
On Friday, May 25, 2012 02:24:39 PM Brian Cuttler did opine:
Charles,
Jean-Louis,
I did include the amgetconf reserved-tcp-port info in my email,
I get the same results you do.
I am logged in as root when running
On Thu, Apr 19, 2012 at 05:11:02PM +0300, Toomas Aas wrote:
Hello Alan!
The disk list in question looks like this:
localhost /export_homes1 /export {
user-tar
include ./homes/[a-d]*
}
From amanda.conf manpage:
All include expressions
Matt,
You could copy/rsync/whatever your amanda config directory and
subdirectories, this will provide index and history.
But in worst case you can unpack the amanda dump files without
that, you (optionally) decompress the dump file, use the analog
of whatever utility created the dump stream
On Fri, Feb 24, 2012 at 02:57:14PM +0900, purpleshadow livedoor wrote:
Thanks.
This makes me understand the new function clearly.
includefile string
    Default: no default. The name of a disklist file to include within
    the current file. Useful for sharing
Hi Amanda users,
Server Solaris 10x86, amanda 2.6.1p1
Client Solaris 10x86, amanda 3.1.1
Sorry, it's not like I'm new to snapshots, but I'm not catching
the error, it must be something in my config...
If anyone can help me to find the error of my ways...
Neil,
I'd had a variety of tape errors, one of the linux boxes will
occasionally drop the driver for the robot on the floor. My
most recent problem was timeouts to the LTO5 in my G2 juke
attached to a Solaris x86 box.
The answers vary and you have to run the checklist; the linux box,
you unload
Neil,
I'm a big believer in # ldd as in
ie
# ldd /usr/local/libexec/amanda/amandad
On Wed, Jan 11, 2012 at 07:37:11PM -0600, Neil Carter wrote:
Greetings:
I found the article (http://network.zmanda.com/lore/article.php?id=361)
and it
seemed to fit perfectly, as I get the same error:
on the client manually to test?
Thanks!
Neil
On 2012.01.12 8:09 AM, Brian Cuttler wrote:
Neil,
I'm a big believer in # ldd as in
ie
# ldd /usr/local/libexec/amanda/amandad
On Wed, Jan 11, 2012 at 07:37:11PM -0600, Neil Carter wrote:
Greetings:
I found the article 361http
Neil,
I built 3.3.0 ok on my systems, but have it running on only
1/3 where I tried to install it. I can dig up the thread if
you'd like, it would be nice to resolve it and maybe together
we can find out what I did wrong.
As a matter of course though, I build once, tar up the installation
tree
Amanda users,
We have been seeing a high I/O load on one of our boxes.
One of the things noticed by one of the other admins is that
amanda is running gzip --best in the background.
I believe that GZIP utilizes CPU and does not consume disk I/O,
also the system is sluggish during
,
Brian
On Mon, Dec 05, 2011 at 11:07:00AM -0700, Charles Curley wrote:
On Mon, 5 Dec 2011 11:43:49 -0500
Brian Cuttler br...@wadsworth.org wrote:
Gzip is a CPU hog, doesn't use any disk I/O, does it?
It would use I/O if it generates swapping
if there are no errors, try
reseating any cables that may have been moved, adjusted, or jostled
since the last time you had no issues.
On 12/5/2011 10:42, Brian Cuttler wrote:
Charles,
We are already using pigz, it makes a huge difference.
We are seeing a lot of sluggishness, command line, during
Hi Stefan,
We also do backups of VMs, ours to fiber backed storage attached
to our Solaris systems (that is the short version anyway). We have
a fabric rather than a NAS.
We are or were backing that up using amanda, I'd have to look
and see if we were still doing it, but we were not
On Tue, Nov 29, 2011 at 03:02:39PM +0100, Jens Berg wrote:
On Tue Nov 29 2011 14:30:25 GMT+0100
s...@amanda.org (Stefan G. Weichinger) wrote:
That's why I thought of mounting that share and running find over it,
thereby setting some links or so (although I have to keep in mind that
amanda
. Weichinger wrote:
On 29.11.2011 14:49, Brian Cuttler wrote:
Hi Stefan,
We also do backups of VMs, ours to fiber backed storage attached to
our Solaris systems (that is the short version anyway). We have a
fabric rather than a NAS.
We are or were backing that up using amanda, I'd have
+0100
br...@wadsworth.org (Brian Cuttler) wrote:
(although I have to keep in mind that
amanda doesn't follow symlinks, does it?).
No, AFAIK it doesn't, and you probably don't want to copy the tarballs
around before running amdump.
Just remembered the -h option of gnutar, which
So, the idea is to get only fairly recent (no more than 3 day old)
files that contain _F_.
Not sure if the nofull strategy will deselect some of the files
in the include list on you. Depends on how often you generate files
with _F_ in the file name and how often you are running amanda.
Do you
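The age-plus-name selection described above can be previewed with find, outside amanda (hypothetical directory and file names):

```shell
# List files whose names contain _F_ and that were modified within
# the last 3 days; other files are excluded by the -name filter.
mkdir -p /tmp/ftest
touch /tmp/ftest/run_F_001.dat /tmp/ftest/other.dat
find /tmp/ftest -name '*_F_*' -mtime -3
```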
On Thu, Nov 17, 2011 at 07:43:34AM -0500, Jean-Louis Martineau wrote:
Why does everybody try to find a workaround? --no-taper is designed
exactly for what Matt wants.
We should try to find why it doesn't work for Matt.
Matt, post more information.
Did it work if you run it from command line?
-dell.local:/home/mlb/Documents promoted from 5 days ahead.
and
here's the report from the amflush
On Thu, 17 Nov 2011 12:57:26 -0500,
Brian Cuttler wrote:
Matt,
Doesn't do much as in: doesn't collect much data, doesn't seem
to use any time, doesn't progress at all?
On Thu
. incremental
backups.
Did you need more info? Thanks for all the attention!
Matt
On Thu, 17 Nov 2011 10:27:33 -0500, Brian Cuttler wrote:
On Thu,
Nov 17, 2011 at 07:43:34AM -0500, Jean-Louis Martineau wrote:
Why
everybody try to find a workaround, --no-taper is designed exactly
0:00 150.0 (brought to you by
Amanda version 3.2.0)
On Thu, 17 Nov 2011 12:57:26 -0500, Brian Cuttler
wrote:
Matt,
Doesn't do much as in, don't collect much data,
doesn't seem
to use any time, doesn't progress at all ?
On Thu,
Nov 17, 2011 at 01:51:28PM -0400, Matt wrote
Jean-Louis,
Sorry, took a day to get back to amanda...
On Mon, Oct 03, 2011 at 07:43:07PM -0400, Jean-Louis Martineau wrote:
On 10/03/2011 04:18 PM, Brian Cuttler wrote:
The default changed, so you must add '-auth=bsd' if you want to use the
bsd auth.
This is in place.
for bsdtcp auth
Jean-Louis,
thank you for your help, I don't quite have it though.
The default changed, so you must add '-auth=bsd' if you want to use the
bsd auth.
For bsd auth:
amanda dgram udp wait amanda /usr/local/libexec/amanda/amandad
amandad -auth=bsd amdump
for bsdtcp auth:
amanda
Marc,
I like # amadmin config force [host [DLE]]*
I have had some systems where I put in a cron job, just prior
to the amdump, but the amdump was nightly (5x/week since we
have no operations staff) and the amadmin ran once/week, for
systems where I wanted to force level 0, let's say on weekends.
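The cron arrangement might look like this (config, host, DLE, and path are placeholders):

```
# Fridays at 17:00, just before the nightly amdump
0 17 * * 5 /usr/local/sbin/amadmin myconfig force mynode /export/home
```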
Robert,
We have several machines that are on multiple networks.
for instance, amanda server curie received a second functional
interface at one point, this is the DNS information but its really
just one multi-homed box.
Name: curie.wadsworth.org
Address: 10.50.156.66
Name:
Gour-Gadadhara,
On Tue, Aug 30, 2011 at 10:30:33AM -0400, Jon LaBadie wrote:
On Tue, Aug 30, 2011 at 08:41:22AM +0200, Gour-Gadadhara Dasa wrote:
On Mon, 29 Aug 2011 19:02:12 -0400
Jon LaBadie j...@jgcomp.com wrote:
Nothing strange there. It attempted to put it in a DLE, or depending on
As long as we are talking about HW vs SW compression, I should
say that we've installed pigz, a parallelized version of gzip,
on some of our systems with good result. If you have a system
that will support it and SW compression is running long you might
want to test it out.
On Wed, Aug 24, 2011
All we did at my site was to essentially replace the gzip binary.
It's wrong and it's cheating, but that is what we did.
On Thu, Aug 25, 2011 at 02:21:13PM -0600, Charles Curley wrote:
On Thu, 25 Aug 2011 09:45:38 -0400
gene heskett ghesk...@wdtv.com wrote:
That could be handy if it is
Horacio,
I think Chris is on the mark. You are going to want snapshots
of your file system, which might be difficult to do with a
remote setup -- because if you are worried about bring stuff
back on-line in 15 minutes you are going to need a second site,
hot and constantly updated.
Your going