On Mon, Jun 15, 2020 at 00:20:25 -0400, Nathan Stratton Treadway wrote:
> On Mon, Jun 15, 2020 at 10:41:58 +0700, Olivier wrote:
> > I have an Amanda client that takes more than 4 hours to do the
> > estimate. The estimate is computed correctly, but when amandad on the
[...]
On Mon, Jun 15, 2020 at 10:41:58 +0700, Olivier wrote:
> I have an Amanda client that takes more than 4 hours to do the
> estimate. The estimate is computed correctly, but when amandad on the
> client tries to send back the estimate to the server, the packet times
> out.
>
>
Hi,
I have an Amanda client that takes more than 4 hours to do the
estimate. The estimate is computed correctly, but when amandad on the
client tries to send back the estimate to the server, the packet times
out.
I kind of remember that there is a timeout parameter that I need to
tweak before [...] not [...] the server.
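For reference, the timeout usually involved here is etimeout in the server's amanda.conf. A minimal sketch, borrowing the 14400-second value that appears later in this archive (not a recommendation):

   # amanda.conf on the Amanda server
   etimeout 14400    # seconds the planner will wait, per DLE, for the estimate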
For my own understanding: If estimate is set to client, the client runs
calcsize and returns a most accurate point-in-time size of the DLE?
My understanding is that 'estimate client' actually invokes the backup
program itself to get the size estimate (so, `tar` or
n the client. It may also be _really_ fast in your setup,
> but that doesn't inherently mean it's running locally (Amanda is smart
> enough to spread out estimates across hosts and spindles just like it
> does backups).
>
> >> then factors in compression ratios and such t
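For reference, the estimate method is chosen per dumptype. A rough sketch of the three methods discussed in this thread (the dumptype name is made up; see amanda.conf(5) for the exact semantics in your version):

   define dumptype example-gtar {
       global
       program "GNUTAR"
       estimate client      # run the real backup program (gtar here) with output to /dev/null
       # estimate calcsize  # faster client-side walk that only sizes files, less exact
       # estimate server    # planner guesses from dump history, no client walk at all
   }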
On Fri, Nov 16, 2018 at 7:24 AM Austin S. Hemmelgarn
wrote:
>
> Except that it actually runs on the client systems. I've actually
> looked at this, the calcsize program is running on the clients and not
> the server.
>
For my own understanding: If estimate is set to cli
and "./[Q-Z]*", so the next run will have 5
new dle's.
But an estimate does not show the new names that this results in.
I've even taken the estimate assignment calcsize back out of the
global dumptype, which, according to the manpage, forces the estimates to
be derived from a dummy run of tar; that didn
> >>>>> locations of some categories and broke the big one up into 2
> >>>>> pieces. "./[A-P]*" and ./[Q-Z]*", so the next run will have 5
> >>>>> new dle's.
> >>>>>
> >>>>> But an estimate does not show the n
it showed one huge and 3 teeny
level 0's for the 4 new dle's. So I just re-adjusted the
locations of some categories and broke the big one up into 2
pieces. "./[A-P]*" and ./[Q-Z]*", so the next run will have 5 new
dle's.
But an estimate does not show the new names t
it showed one huge and 3 teeny
> >>> level 0's for the 4 new dle's. So I just re-adjusted the
> >>> locations of some categories and broke the big one up into 2
> >>> pieces. "./[A-P]*" and ./[Q-Z]*", so the next run will have 5 new
locations
of some categories and broke the big one up into 2 pieces.
"./[A-P]*" and "./[Q-Z]*", so the next run will have 5 new dle's.
But an estimate does not show the new names that this results in. I've
even taken the estimate assignment calcsize back out of the global
dumptype, which ac
ategories and broke the big one up into 2 pieces.
> > "./[A-P]*" and ./[Q-Z]*", so the next run will have 5 new dle's.
> >
> > But an estimate does not show the new names that results in. I've
> > even took the estimate assignment calcsize back o
0's for the 4 new dle's. So I just re-adjusted the locations of some
> categories and broke the big one up into 2 pieces. "./[A-P]*"
> and ./[Q-Z]*", so the next run will have 5 new dle's.
>
> But an estimate does not show the new names t
gories and broke the big one up into 2 pieces. "./[A-P]*"
> > and ./[Q-Z]*", so the next run will have 5 new dle's.
> >
> > But an estimate does not show the new names that results in. I've even
> > took the estimate assignment calcsize back out of th
next run will have 5 new dle's.
But an estimate does not show the new names that this results in. I've even
taken the estimate assignment calcsize back out of the global dumptype,
which, according to the manpage, forces the estimates to be derived from a dummy
run of tar; that didn't help.
Clues? Havin
I ask because after last night's run it showed one huge and 3 teeny level
0's for the 4 new dle's. So I just re-adjusted the locations of some
categories and broke the big one up into 2 pieces. "./[A-P]*"
and "./[Q-Z]*", so the next run will have 5 new dle's.
But
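For context, a split like that is normally written as separate disklist entries with include patterns. A rough sketch (hostname, mount point, and dumptype name are placeholders):

   myhost /data-a-p /data {
       my-gtar-dumptype
       include "./[A-P]*"
   }
   myhost /data-q-z /data {
       my-gtar-dumptype
       include "./[Q-Z]*"
   }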
On 2018-06-20 at 11:04, Jens Berg wrote:
> Two ideas:
> 1. Increase etimeout for the vtape-config and run amdump with the
> --no-dump option. If I understand that option correctly, it should do a
> full backup run including the estimates without actually writing
> anything to (v)tapes or the holdi
"vtape" backing up to 2 NASes where I
> symlink the disklist ... even with the same etimeout=1200 and estimate
> method some DLEs fail to estimate.
>
> Now I assume that if I increase etimeout for the first week or so things
> will work out sooner or later.
>
> But an init
with the same etimeout=1200 and estimate
method some DLEs fail to estimate.
Now I assume that if I increase etimeout for the first week or so things
will work out sooner or later.
But an initial "do estimates only" run would be more elegant, right?
thanks for any pointers
ps: for sur
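A sketch of that estimates-only idea, assuming the config really is named "vtape" and that this amdump build has the --no-dump option mentioned above (check man amdump for your version):

   # run the planner (and thus the estimates) without actually dumping
   amdump --no-dump vtape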
Hello,
we ran into trouble. The estimate/calcsize hangs when acquiring a
write lock on /dev/null.
/dev/null already seems to have a read lock (held by dockerd).
Is there any reason for Amanda to take this lock?
Best regards from Dresden, Germany
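A rough way to check who is holding that lock, using only standard Linux tools (nothing Amanda-specific is assumed):

   # /proc/locks identifies files by device and inode, so note /dev/null's first
   stat -c 'dev=%d inode=%i' /dev/null
   cat /proc/locks
   # lsof shows which processes have /dev/null open (dockerd was the suspect above)
   lsof /dev/null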
Hello!
I've installed Amanda 3.3.3 on a Centos server. Sometimes backups are
failing with "RESULTS MISSING" and "driver: WARNING: got empty
schedule from planner". I think this is mostly because of estimate
timeouts. I see "planner: ERROR Some estimate timeout
Nathan Stratton Treadway writes:
>On Mon, Apr 10, 2017 at 07:06:55 -0400, hy...@lactose.homelinux.net wrote:
>>
>> Let's say I have a very recent full (level 0) backup of /home. If I
>> split up my /home into two or three DLEs, will Amanda recognize that it
>> already has a level-0 backup, and th
Bummer.
Thanks.
--hymie!
Jean-Louis Martineau writes:
>
>man amadmin:
> estimate [ hostname [ disks ]* ]*
>Print the server estimate for the dles ...
>
>The answer is NO, since 'amadmin estimate' only prints the server
>estimate (based on
On Mon, Apr 10, 2017 at 07:06:55 -0400, hy...@lactose.homelinux.net wrote:
> Separate but related question.
>
> Let's say I have a very recent full (level 0) backup of /home. If I
> split up my /home into two or three DLEs, will Amanda recognize that it
> already has a level-0 backup, and then ta
man amadmin:
estimate [ hostname [ disks ]* ]*
Print the server estimate for the dles ...
The answer is NO, since 'amadmin estimate' only prints the server
estimate (based on history).
The best tool is du.
Jean-Louis
On 10/04/17 07:06 AM, hy...@lactose.homelinux
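A quick way to do that sizing on the client before committing to a split; a sketch using the /home layout from the next message (paths are illustrative):

   # rough per-subtree sizes, to see how a /home split would balance
   du -sk /home/hymie /home/nothymie
   # or all top-level directories at once
   du -sk /home/* | sort -n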
Greetings.
I'm experimenting with how I can split up a couple of very large
backups into multiple smaller backups.
newlaptop.local.net /home/nothymie /home {
simple-gnutar-remote
exclude "./hymie"
estimate client
}
newlaptop.local.net /home/hymie /home {
simpl
On Thu, Jan 12, 2017 at 09:57:42PM -0600, Jason L Tibbitts III wrote:
> > "jc" == jon@jgcomp com writes:
>
> jc> Will try. I was still researching and hoping for some alternative,
> jc> actual fix. Seems not to be an unusual situation.
>
> That's going to have to come either from the selin
> "jc" == jon@jgcomp com writes:
jc> Will try. I was still researching and hoping for some alternative,
jc> actual fix. Seems not to be an unusual situation.
That's going to have to come either from the selinux policy authors or
from someone who sits down and learns enough to get things wo
On Thu, Jan 12, 2017 at 06:02:38PM -0600, Jason L Tibbitts III wrote:
> > "jc" == jon@jgcomp com writes:
>
> jc> Ok, I confirmed my home dir can be backed up with selinux set to
> jc> non-enforcing.
>
> How about just setting amanda_t to permissive as I suggested in my
> previous message? A
> "jc" == jon@jgcomp com writes:
jc> Ok, I confirmed my home dir can be backed up with selinux set to
jc> non-enforcing.
How about just setting amanda_t to permissive as I suggested in my
previous message? At least then you wouldn't have to disable selinux
throughout your system.
# semanag
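For reference, the usual form of that command, sketched out (verify against semanage-permissive(8) on your system):

   # mark only the amanda_t domain permissive instead of disabling SELinux globally
   semanage permissive -a amanda_t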
On Mon, Jan 09, 2017 at 04:19:35PM -0500, Jon LaBadie wrote:
> On Mon, Jan 09, 2017 at 06:12:25PM +, Debra S Baddorf wrote:
[ snip ]
>
> Debra -- thank you!!!
> Doing the above caused me to also look at the extended attributes.
>
> $ ls -lZ /home
> total 128
> drwxrwxr-x. 28 gundi gund
> "UM" == Uwe Menges writes:
UM> I filed https://bugzilla.redhat.com/show_bug.cgi?id=1280526 but I
UM> don't know if this is really the same cause.
Sadly that one got closed because the release was never updated to
something newer than Fedora 22. I suspect it's still a problem.
I'll make a
On 01/09/17 22:19, j...@jgcomp.com wrote:
> Hmmm, root is "system_u", jon and all the other home dirs are "unconfined_u".
> The lost+found directory is also "system_u". If this is the problem,
> lost+found
> should also be getting backed up and should appear in the gnutar lists.
>
> $ strings
On Mon, Jan 09, 2017 at 06:12:25PM +, Debra S Baddorf wrote:
> Well, due to the resounding silence …. let’s try some experiments:
> What happens when you manually dump (or tar?) the area to a scratch area?
> What size backup do you get?
> Perhaps some of the files are large but marked “no
Well, due to the resounding silence …. let’s try some experiments:
What happens when you manually dump (or tar?) the area to a scratch area?
What size backup do you get?
Perhaps some of the files are large but marked “no dump” or something.
(Though I imagine estimates take that into account.)
I'm getting faulty, incomplete dumps of several
DLEs. For example, /home is a separate DLE,
82GB used according to df(1) and estimated by
calcsize at 52GB. But the dump is only 0.5GB.
I see nothing unusual in the logs, suggestions
on what to research welcomed.
Client is Fedora 24, amanda 3.3.9
S
t716s_1/jet-export/ {
> >gui-base
> >include "./[a]*"
> >exclude "./aguirre/"
> >}
> > cfile.uphs.upenn.edu jet-aguirre /mnt/jet716s_1/jet-export/ {
> >gui-base
> >include "./aguirre/"
> >}
> >
>exclude "./aguirre/"
>}
> cfile.uphs.upenn.edu jet-aguirre /mnt/jet716s_1/jet-export/ {
>gui-base
>include "./aguirre/"
> }
>
[...]
> But then
>
> [amandabackup@cback ~]$ amadmin jet1 estimate cfile jet-a
> cfile.uphs.upenn.e
OK, thanks Jean-Louis. I'll give that a try.
-M
On Thu, Mar 6, 2014 at 7:31 AM, Jean-Louis Martineau
wrote:
>
> $ man amadmin
> estimate [ hostname [ disks ]* ]*
>Print the server estimate for the dles; each output line has
> the
>
$ man amadmin
estimate [ hostname [ disks ]* ]*
Print the server estimate for the dles; each output line has the
following format:
hostname diskname level size
Server estimate can only be computed if you already backed up the dles
a few time
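A sketch of what that looks like in practice, reusing the config, host, and DLE names from this thread (the sizes shown are invented purely for illustration):

   $ amadmin jet1 estimate cfile.uphs.upenn.edu jet-a
   cfile.uphs.upenn.edu jet-a 0 52428800
   cfile.uphs.upenn.edu jet-a 1 1048576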
Amanda 3.4.4
Hi,
I'm trying to use amadmin's estimate command to get an idea if my DLE
entries are correct.
For example:
I want one DLE with all dirs starting with a, except for ./aguirre, then
another with just ./aguirre
cfile.uphs.upenn.edu jet-a /mnt/jet716s_1/jet-export/ {
[...]/var/log
tank/var/tmp                                          349G    32K   349G    0%   /var/tmp
tank/usr/jails/.warden-template-10.0-RELEASE-amd64    349G   178M   349G    0%   /usr/jails/.warden-template-10.0-RELEASE-amd64
So, a few questions:
a) what is the reason that estimate on FreeBSD/ZFS is so slow in
c
On Fri, Apr 12, 2013 at 17:09:11 -0400, Chris Hoogendyk wrote:
> The "Total bytes written:" was identical with and without the
> --sparse option (right down to the last byte ;-) ). It was the time
> taken to arrive at that estimate that was so very different:
>
> Total
Thank you, Nathan. Informative.
The "Total bytes written:" was identical with and without the --sparse option (right down to the
last byte ;-) ). It was the time taken to arrive at that estimate that was so very different:
Total bytes written: 2086440960 (2.0GiB, 11MiB/s)
real
than the %s
value.
- using the standard Sun "ls", you can do "ls -sl", and then multiply
the value in the first column by 512. (I assume the "block size" used
is a constant 512 in that case, regardless of file system.)
* The doubling of the time
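A sketch of that comparison for a single file; the awk field positions assume a typical "ls -sl" layout (block count first, apparent size in the sixth column) and the filename is a placeholder:

   # allocated bytes (blocks * 512) versus apparent size for one suspect file
   ls -sl some-scan.tif | awk '{ printf "allocated=%d  apparent=%d\n", $1*512, $6 }'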
backup will be large because tar
fills the holes with 0s.
Your best option is to use the calcsize or server estimate.
Jean-Louis
--
---
Chris Hoogendyk
-
O__ Systems Administrator
c/ /'_ --- Biology & Geology Departments
(*) \(*) -- 140 Morrill
Thank you!
Not sure why the debug file would list runtar in the form of a parameter, when it's not to be used
as such. Anyway, that got it working.
Which brings me back to my original problem. As indicated previously, the filesystem in question
only has 2806 files and 140 directories. As I wa
OK, folks, it is the "--sparse" option that Amanda is putting on the gtar. This is /usr/sfw/bin/tar
version 1.23 on Solaris 10. I have a test script that runs the runtar and a test directory with just
10 of the tif files in it.
Without the "--sparse" option, time tells me that it takes 0m0.57s
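A sketch of that A/B timing outside of Amanda, with the option list trimmed down from the runtar command quoted later in this thread (the directory is the one under discussion):

   # estimate-style run without --sparse
   time /usr/sfw/bin/gtar --create --file /dev/null --one-file-system --directory /export/herbarium .
   # the same run with --sparse, which is what Amanda was adding
   time /usr/sfw/bin/gtar --create --file /dev/null --sparse --one-file-system --directory /export/herbarium .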
If files are not compressed then there should be no
additional overhead to decompress them on read.
I would also probably hesitate to enable compression of a zfs
file system that was used for amanda work area, since you are
storing data that has already been zip'd. Though this also has
no im
the holes with 0s.
Your best option is to use the calcsize or server estimate.
Jean-Louis
On Thu, Apr 04, 2013 at 17:48:46 -0400, Chris Hoogendyk wrote:
> So, I created a script working off that and adding verbose:
>
>#!/bin/ksh
>
>OPTIONS=" --create --file /dev/null --numeric-owner --directory /export/herbarium --one-file-system --listed-incremental";
>OPTIONS="${
On Thu, Apr 04, 2013 at 17:48:46 -0400, Chris Hoogendyk wrote:
> If I exchange the two commands so that I'm using gtar directly rather
> than runtar, then I get:
>
>/usr/sfw/bin/gtar: Cowardly refusing to create an empty archive
>Try `/usr/sfw/bin/gtar --help' or `/usr/sfw/bin/gtar --usage
On 04/04/2013 02:48 PM, Chris Hoogendyk wrote:
I may just quietly go nuts. I'm trying to run the command directly. In
the debug file, one example is:
Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: Spawning
"/usr/local/libexec/amanda/runtar runtar daily
/usr/local/etc/amanda/tools/gtar --creat
than mutt.
Any way to vet the zfs file system? Make sure it's sane and doesn't contain some
kind of a bad
link causing a loop?
If you were to run the command used by estimate, which I believe displays in
the debug file,
can you run that successfully on the command line? If you run it verbo
Reply using thunderbird rather than mutt.
Any way to vet the zfs file system? Make sure it's sane and doesn't
contain some kind of a bad
link causing a loop?
If you were to run the command used by estimate, which I believe
displays in the debug file,
can you run that successfully o
hose by my mail
client -- Thunderbird 17.0.5.
I changed the dump type to not use compression. If tif files are not going to compress anyway, then
I might as well not even ask Amanda to try. However, it never gets to the dump, because it gets "all
estimate timed out."
I will try breaki
are exceptions, even here, and I am educating
the users to do things differently. However I do have lots of files
on zfs in general...
I don't believe that gzip is used in the estimate phase, I think
that it produces "raw" dump size for dump scheduling and that tape
allocation is lef
, if so, how should I go about
looking for it?
On 4/3/13 1:14 PM, Brian Cuttler wrote:
Chris,
for larger file systems I've moved to "server estimate", less
accurate but takes the entire estimate phase out of the equation.
We have had a lot of success with pigz rather than
Chris,
for larger file systems I've moved to "server estimate", less
accurate but takes the entire estimate phase out of the equation.
We have had a lot of success with pigz rather than regular
gzip, as it'll take advantage of the multiple CPUs and give
parallelizatio
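A sketch of what those two suggestions look like as dumptype settings; the dumptype name and the pigz path are assumptions, and the custom-compress keywords should be checked against amanda.conf(5) for your release:

   define dumptype big-scans {
       global
       estimate server                         # skip the long client-side tree walk
       compress client custom                  # hand compression to pigz instead of gzip
       client_custom_compress "/usr/bin/pigz"
   }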
Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: .
Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: estimate time for /export/herbarium level 0: 26302.500
Nice, it took 7 hours, 18 minutes and 22 seconds to get the level-0 estimate.
Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: estimate size for
/export/herbarium le
over 200G of tif scans, compression is causing trouble? But this is
just getting estimates, output going to /dev/null.
Here is a segment from the very end of the sendsize debug file from April 1
(the debug file ends
after these lines):
Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: .
Mon Apr
This is just getting estimates, output going to /dev/null.
Here is a segment from the very end of the sendsize debug file from April 1 (the debug file ends
after these lines):
Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: .
Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: estimate time for
/expor
Today I had put back the parameter estimate to "client" as it always
had been, while etimeout is still at the new value of 14400. Backup
failed again, even on 4 DLE's.
I had changed "estimate" back because although the backup succeeded
yesterday, it had this strange beh
ion is right
indeed. Good to know! I really need to investigate my estimate
timeouts then...
Cheers,
Alan
On 12/11/2012 12:56 PM, Charles Stroom wrote:
The planner reports an "ERROR" making the estimate, but then later the
dump itself FAILs as well. So no backup is made of that par
The planner reports an "ERROR" making the estimate, but then later the
dump itself FAILs as well. So no backup is made of that particular DLE.
Regards, Charles
On Tue, 11 Dec 2012 09:04:46 +0300
Alan Orth wrote:
> Hi, All.
>
> It's good that you brought this up on the
06:05 PM, Charles Stroom wrote:
Hi, the forwarded email below was meant to go to the list, but I noticed
later it was only to 1 recipient. Hence the forward.
Regards, Charles
Begin forwarded message:
Date: Sat, 8 Dec 2012 22:08:48 +0100
From: Charles Stroom
To: Jens Berg
Subject: Re: failure "estimate of level x
Hi, the forwarded email below was meant to go to the list, but I noticed
later it was only to 1 recipient. Hence the forward.
Regards, Charles
Begin forwarded message:
Date: Sat, 8 Dec 2012 22:08:48 +0100
From: Charles Stroom
To: Jens Berg
Subject: Re: failure "estimate of level x
The time required for the estimate is more a matter of the number of
files and not of their size. Most of the time needed for the estimation
is spent in traversing through the directory structure of the file
system. Could it be that you have installed some big source tarballs
to, for example
Charles Stroom:
Have you tried making "etimeout" a bigger number in the
amanda.conf file?
As the disk space gets more and more full, the time to do the estimate
grows. I've got etimeout set to 2000 because of a really large disk
in my past. Don't know if I st
I hope somebody can help me. For some time now, most backups
fail, mostly on one item "/usr" with the message that the estimate
timed out. Only once in a while (1 out of 10), this failure does not
occur and the backup completes normally. This suggests to me that in
principle all
for
>> years, now regularly throws the message
>>
>> FAILURE AND STRANGE DUMP SUMMARY:
>> srv-erp3 /mnt1 lev 0 FAILED [disk /mnt1, all estimate timed out]
>> planner: ERROR Request to srv-erp3 failed: timeout waiting for REP
>>
>> in the report, but
ND STRANGE DUMP SUMMARY:
> srv-erp3 /mnt1 lev 0 FAILED [disk /mnt1, all estimate timed out]
> planner: ERROR Request to srv-erp3 failed: timeout waiting for REP
>
> in the report, but the other disks are written properly:
Did this ever get resolved? How?
For a few days now, I'm g
FAILED [disk /mnt1, all estimate timed out]
planner: ERROR Request to srv-erp3 failed: timeout waiting for REP
in the report, but the other disks are written properly:
srv-erp3 /boot 1 0 0 320.00:00 10.0 0:01 24.8
srv-erp3 /home 0 0 0 160.0
Hi all,
I use amanda 2.5.2p1 on an Ubuntu 8.04 server to back up several machines. The
backup of /one/ disk from /one/ machine, which worked flawlessly for years, now
regularly throws the message
FAILURE AND STRANGE DUMP SUMMARY:
srv-erp3 /mnt1 lev 0 FAILED [disk /mnt1, all estimate timed
I reran the dump separately and here is the section of the report
FAILURE DUMP SUMMARY:
hutton.earth.northwestern.edu / lev 0 FAILED "[disk /, all estimate
timed out]"
planner: ERROR Request to hutton.earth.northwestern.edu failed: timeout
waiting for REP
In the m
Jean-Louis,
On Thu, Jul 07, 2011 at 03:35:58PM -0400, Jean-Louis Martineau wrote:
> If you got only 'failed' for the error message, then something is really
> broken, I would like to see it, can you post the complete line from the
> 'FAILURE DUMP SUMMARY' section of the report.
I misspoke, this
Jul 2011, Jean-Louis Martineau wrote:
Can you post the exact error message you get from amanda?
You said 'broken pipe', where do you get it.
Telling 'tar estimate dying' or 'broken pipe' is useless if you don't
show how/where you get them.
The sendsize debug
tiate the connection itself. Something to look for
Deb Baddorf
On Jul 7, 2011, at 2:14 PM, Jean-Louis Martineau wrote:
> Can you post the exact error message you get from amanda?
> You said 'broken pipe', where do you get it.
>
>
> Telling 'tar estimate dying'
working for the last six years.
On Thu, 7 Jul 2011, Jean-Louis Martineau wrote:
Can you post the exact error message you get from amanda?
You said 'broken pipe', where do you get it.
Telling 'tar estimate dying' or 'broken pipe' is useless if you don't show
ho
Can you post the exact error message you get from amanda?
You said 'broken pipe', where do you get it.
Telling 'tar estimate dying' or 'broken pipe' is useless if you don't
show how/where you get them.
The sendsize debug file looks good, please post the
). After about 30 minutes the tar
> process just dies and the estimate doesn't seem to get to the
> server. The / is only about 10gig and until recently has been
> fine. These are old computers, however the /home part. is
> 20gig and has been fine.
>
>I am using tar (now 1.20 after
the tar
process just dies and the estimate doesn't seem to get to the
server. The / is only about 10gig and until recently has been
fine. These are old computers, however the /home part. is
20gig and has been fine.
I am using tar (now 1.20 after downgrading from 1.23 for
testing)
I have also tried
Jon,
We also see 'big estimate' output for some partitions.
I assumed some sort of bounding but never looked into
it any further.
Reminds me of the O (big-oh) and omega we'd compute for
algorithms back in computer science class [upper and lower
bounds limits for algorithms]. S
My daily reports regularly contain lines like
the following (commas added):
NOTES:
big estimate: lastchance Cdrive 1
est: 6,944,416M  out 975,940M
big estimate: mumsxp Cdrive 1
est: 3,303,904M  out 288,480M
big estimate: vostxp Cdrive 1
sendsize[77723]: time 5.465: Create: tar -cf
[filenames...]
sendsize[77723]: time 5.465: Help:tar --help
sendsize[77723]: time 5.466: .
sendsize[77723]: estimate time for /usr level 0: 0.154
sendsize[77723]: no size line match in /usr/bin/tar output for "/usr"
send
On Mon, Feb 07, 2011 at 04:31:57PM -0500, Mike Neimoyer wrote:
> Okay, looking further, I see the following in the file
>
...
>
> Just a guess here, but I presume that since amanda is waiting on a
> response from tar, and tar has errored out, that amanda skips this
> item in its DLE because of t
On Mon, 07 Feb 2011 16:31:57 -0500
Mike Neimoyer wrote:
> Just a guess here, but I presume that since amanda is waiting on a
> response from tar, and tar has errored out, that amanda skips this
> item in its DLE because of this error.
>
> Does this sound reasonable?
It sounds reasonable to me
sendsize[77723]: time 5.466: .
sendsize[77723]: estimate time for /usr level 0: 0.154
sendsize[77723]: no size line match in /usr/bin/tar output for "/usr"
sendsize[77723]: .
sendsize[77723]: estimate size for /usr level 0: -1 KB
Just a guess here, but I presume
gular incremental and even our weekly and Monthly runs
going off successfully. But now one of my DLE's for the Amanda server
itself shows an error ("/usr lev 0 FAILED [disk /usr, all estimate
failed]") in the daily report, even though the selfcheck.*.debug file
shows no errors, and an a
d, but I find myself wondering how
the average size of the DLE in the estimate phase is lowered
once the average has gotten above the tape size cut-off.
What I mean is, since actual live estimates are not performed
and archive data is relied on for the estimate, how does the
new lower size get averaged
> >larger than the size of a tape.
> >
> >This has been corrected, but I find myself wondering how
> >the average size of the DLE in the estimate phase is lowered
> >once the average has gotten above the tape size cut-off.
> >
> >What I mean is, since actually live e
For Amanda 2.6.1p1-20091023.
We had a problem where a user caused a DLE to increase to
larger than the size of a tape.
This has been corrected, but I find myself wondering how
the average size of the DLE in the estimate phase is lowered
once the average has gotten above the tape size cut-off
Dustin,
On Thu, Sep 16, 2010 at 12:22:42PM -0500, Dustin J. Mitchell wrote:
> On Thu, Sep 16, 2010 at 11:59 AM, Brian Cuttler wrote:
> > What I mean is, since actual live estimates are not performed
> > and archive data is relied on for the estimate, how does the
d a similar problem
or not, to catch it I had to run amstatus while the run was
still active.
I think it did not have this feature though, I think the
estimate was equal to the dump size, which for level 1
was just unreasonably large (level 0 sized).
>
On Fri, Jun 18, 2010 at 10:04:40 -0400, Brian Cuttler wrote:
> We just updated gtar to 1.22.
>
What version were you using before you upgraded?
Nathan
Nathan Stratton Treadway
Amanda 2.4.4 on Solaris 10/x86.
We just updated gtar to 1.22.
We backup only 2 DLE with this server, server==client.
I'm noticing that the estimate for non-level-0 is not
awful, around 50 Gig, however we are actually backing
up the equiv of a level 0
Not sure where to look for the sour
> >subdirectory to a separate mount point, I know that gtar would not
> >(by default) cross file system boundries.
> >
> >star seems to be trying to cross that boundry, at least for the
> >estimate phase if not the actual backup phase.
> >
> >NOTES:
ms to be trying to cross that boundary, at least for the
estimate phase if not the actual backup phase.
NOTES:
planner: disk cascade:/cascadep/export/ghost, full dump (951917180KB) will be
larger than available tape space, you could define a splitsize
planner: cascade /cascadep/export/ghost 2
"./[ghost-archive]*"
}
Rather than using split size, we thought to move the ghost-archive
subdirectory to a separate mount point, I know that gtar would not
(by default) cross file system boundaries.
star seems to be trying to cross that boundary, at least for the
estimate phase
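For reference, the planner's splitsize hint corresponds to a dumptype parameter; a rough sketch (dumptype name and size are placeholders, and the exact keyword has changed between Amanda releases, so check amanda.conf(5)):

   define dumptype ghost-split {
       global
       tape_splitsize 100 Gb    # allow the large dump to be split across tape boundaries
   }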