Re: Timeout during estimate

2020-06-14 Thread Nathan Stratton Treadway
On Mon, Jun 15, 2020 at 00:20:25 -0400, Nathan Stratton Treadway wrote: > On Mon, Jun 15, 2020 at 10:41:58 +0700, Olivier wrote: > > I have an Amanda client that takes more than 4 hours to do the > > estimate. The estimate is computed correctly, but when amandad on the [...]

Re: Timeout during estimate

2020-06-14 Thread Nathan Stratton Treadway
On Mon, Jun 15, 2020 at 10:41:58 +0700, Olivier wrote: > I have an Amanda client that takes more than 4 hours to do the > estimate. The estimate is computed correctly, but when amandad on the > client tries to send back the estimate to the server, the packet times > out. > >

Timeout during estimate

2020-06-14 Thread Olivier
Hi, I have an Amanda client that takes more than 4 hours to do the estimate. The estimate is computed correctly, but when amandad on the client tries to send back the estimate to the server, the packet times out. I kind of remember that there is a timeout parameter that I need to tweak before
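The timeout parameter the poster half-remembers is `etimeout`. A minimal sketch of the relevant amanda.conf knob, assuming the 4-hour estimate reported above (the value is an example, not a default):

```shell
# Hedged sketch: the amanda.conf setting that governs how long the planner
# waits for each DLE's estimate. 14400 s = 4 h, matching the case above.
conf='etimeout 14400    # seconds the planner waits per DLE for its estimate;
                        # a negative value is treated as a total per client'
printf '%s\n' "$conf"
```

The value should exceed the slowest client's real estimate time with some margin.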

Re: Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-16 Thread Austin S. Hemmelgarn
not the server. For my own understanding: If estimate is set to client, the client runs calcsize and returns a most accurate point-in-time size of the DLE? My understanding is that 'estimate client' actually invokes the backup program itself to get the size estimate (so, `tar` or
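The three estimate methods being contrasted in this thread can be sketched as a dumptype fragment (`example-dle` and the surrounding names are hypothetical; only the `estimate` keyword and its three values are from the discussion):

```shell
# Hedged sketch of the per-dumptype 'estimate' setting in amanda.conf:
dumptype='define dumptype example-dle {
    global
    program "GNUTAR"
    estimate client      # run the backup program itself: accurate but slow
    # estimate calcsize  # client walks the tree and sums sizes: faster
    # estimate server    # planner uses history only: instant, but useless
                         # for a DLE that has never been dumped
}'
printf '%s\n' "$dumptype"
```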

Re: Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-16 Thread Gene Heskett
n the client. It may also be _really_ fast in your setup, > but that doesn't inherently mean it's running locally (Amanda is smart > enough to spread out estimates across hosts and spindles just like it > does backups). > > >> then factors in compression ratios and such t

Re: Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-16 Thread Chris Nighswonger
On Fri, Nov 16, 2018 at 7:24 AM Austin S. Hemmelgarn wrote: > > Except that it actually runs on the client systems. I've actually > looked at this, the calcsize program is running on the clients and not > the server. > For my own understanding: If estimate is set to cli

Re: Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-16 Thread Austin S. Hemmelgarn
nd ./[Q-Z]*", so the next run will have 5 new dle's. But an estimate does not show the new names that results in. I've even taken the estimate assignment calcsize back out of the global dumptype, which ack the manpage, forces the estimates to be derived from a dummy run of tar, didn't

Re: Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-15 Thread Gene Heskett
>>> locations of some categories and broke the big one up into 2 > >>>>> pieces. "./[A-P]*" and ./[Q-Z]*", so the next run will have 5 > >>>>> new dle's. > >>>>> > >>>>> But an estimate does not show the n

Re: Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-15 Thread Austin S. Hemmelgarn
it showed one huge and 3 teeny level 0's for the 4 new dle's. So I just re-adjusted the locations of some categories and broke the big one up into 2 pieces. "./[A-P]*" and ./[Q-Z]*", so the next run will have 5 new dle's. But an estimate does not show the new names t

Re: Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-15 Thread Gene Heskett
it showed one huge and 3 teeny > >>> level 0's for the 4 new dle's. So I just re-adjusted the > >>> locations of some categories and broke the big one up into 2 > >>> pieces. "./[A-P]*" and ./[Q-Z]*", so the next run will have 5 new

Re: Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-15 Thread Austin S. Hemmelgarn
ns of some categories and broke the big one up into 2 pieces. "./[A-P]*" and ./[Q-Z]*", so the next run will have 5 new dle's. But an estimate does not show the new names that results in. I've even taken the estimate assignment calcsize back out of the global dumptype, which ac

Re: Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-15 Thread Gene Heskett
ategories and broke the big one up into 2 pieces. > > "./[A-P]*" and ./[Q-Z]*", so the next run will have 5 new dle's. > > > > But an estimate does not show the new names that results in. I've > > even taken the estimate assignment calcsize back o

Re: Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-15 Thread Austin S. Hemmelgarn
's for the 4 new dle's. So I just re-adjusted the locations of some > categories and broke the big one up into 2 pieces. "./[A-P]*" > and ./[Q-Z]*", so the next run will have 5 new dle's. > > But an estimate does not show the new names t

Re: Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-15 Thread Chris Nighswonger
gories and broke the big one up into 2 pieces. "./[A-P]*" > > and ./[Q-Z]*", so the next run will have 5 new dle's. > > > > But an estimate does not show the new names that results in. I've even > > took the estimate assignment calcsize back out of th

Re: Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-15 Thread Austin S. Hemmelgarn
next run will have 5 new dle's. But an estimate does not show the new names that results in. I've even taken the estimate assignment calcsize back out of the global dumptype, which ack the manpage, forces the estimates to be derived from a dummy run of tar, didn't help. Clues? Havin

Does anyone know how to make an amadmin $config estimate work for new dle's?

2018-11-15 Thread Gene Heskett
I ask because after last nights run it showed one huge and 3 teeny level 0's for the 4 new dle's. So I just re-adjusted the locations of some categories and broke the big one up into 2 pieces. "./[A-P]*" and ./[Q-Z]*", so the next run will have 5 new dle's. But
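The split being described can be sketched as disklist entries (host name, mount point, and dumptype name are hypothetical; the `"./[A-P]*"` / `"./[Q-Z]*"` globs are the ones from the thread):

```shell
# Hedged sketch: one directory divided into two DLEs by include globs.
# Each 'include' pattern is relative to the diskdevice directory (/home here).
disklist='myhost.example /home-A-P /home {
    user-tar
    include "./[A-P]*"
}
myhost.example /home-Q-Z /home {
    user-tar
    include "./[Q-Z]*"
}'
printf '%s\n' "$disklist"
```

Each entry gets its own dump history, which is why freshly renamed DLEs have no server estimate until they have been dumped at least once.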

Re: estimate only?

2018-06-21 Thread Stefan G. Weichinger
Am 2018-06-20 um 11:04 schrieb Jens Berg: > Two ideas: > 1. Increase etimeout for the vtape-config and run amdump with the > --no-dump option. If I understand that option correctly, it should do a > full backup run including the estimates without actually writing > anything to (v)tapes or the holdi

Re: estimate only?

2018-06-20 Thread Jens Berg
"vtape" backing up to 2 NASes where I > symlink the disklist ... even with the same etimeout=1200 and estimate > method some DLEs fail to estimate. > > Now I assume that if I increase etimeout for the first week or so things > will work out sooner or later. > > But an init

estimate only?

2018-06-19 Thread Stefan G. Weichinger
th the same etimeout=1200 and estimate method some DLEs fail to estimate. Now I assume that if I increase etimeout for the first week or so things will work out sooner or later. But an initial "do estimates only" run would be more elegant, right? thanks for any pointers ps: for sur

estimate, calcsize locking /dev/null

2018-05-24 Thread Heiko Schlittermann
Hello, we ran into trouble. The estimate/calcsize hangs when acquiring a write-lock on /dev/null. /dev/null seems to have a read-lock (held by dockerd) already. Is there any reason for Amanda to hold this lock? Best regards from Dresden/Germany

RESULTS MISSING on all DLE when only one estimate fails?

2017-06-26 Thread Matthias Teege
Hallo! I've installed Amanda 3.3.3 on a Centos server. Sometimes backups are failing with "RESULTS MISSING" and "driver: WARNING: got empty schedule from planner". I think this is mostly because of estimate timeouts. I see "planner: ERROR Some estimate timeout

Re: force an estimate?

2017-04-10 Thread hymie
Nathan Stratton Treadway writes: >On Mon, Apr 10, 2017 at 07:06:55 -0400, hy...@lactose.homelinux.net wrote: >> >> Let's say I have a very recent full (level 0) backup of /home. If I >> split up my /home into two or three DLEs, will Amanda recognize that it >> already has a level-0 backup, and th

Re: force an estimate?

2017-04-10 Thread hymie
Bummer. Thanks. --hymie! Jean-Louis Martineau writes: > >man amadmin: > estimate [ hostname [ disks ]* ]* >Print the server estimate for the dles ... > >The answer is NO, since 'amadmin estimate' only print the server >estimate (based on

Re: force an estimate?

2017-04-10 Thread Nathan Stratton Treadway
On Mon, Apr 10, 2017 at 07:06:55 -0400, hy...@lactose.homelinux.net wrote: > Separate but related question. > > Let's say I have a very recent full (level 0) backup of /home. If I > split up my /home into two or three DLEs, will Amanda recognize that it > already has a level-0 backup, and then ta

Re: force an estimate?

2017-04-10 Thread Jean-Louis Martineau
man amadmin: estimate [ hostname [ disks ]* ]* Print the server estimate for the dles ... The answer is NO, since 'amadmin estimate' only print the server estimate (based on history). The best tool is du. Jean-Louis On 10/04/17 07:06 AM, hy...@lactose.homelinux
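Jean-Louis's point is that `amadmin CONFIG estimate` prints only history-based server estimates, so it shows nothing for a DLE that has never been dumped. A rough stand-in for a first level-0 size is `du`, as he suggests (the directory below is a scratch example built on the fly, not a real DLE):

```shell
# Hedged sketch: size a would-be DLE with du, since there is no history yet.
dir=$(mktemp -d)                                   # stand-in for the DLE path
dd if=/dev/zero of="$dir/f" bs=1024 count=64 2>/dev/null   # 64 KiB of data
kb=$(du -sk "$dir" | awk '{print $1}')             # KiB on disk, roughly what
echo "$kb KB used"                                 # calcsize would report
```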

force an estimate?

2017-04-10 Thread hymie
Greetings. I'm experimenting with how I can split up a couple of very large backups into multiple smaller backups. newlaptop.local.net /home/nothymie /home { simple-gnutar-remote exclude "./hymie" estimate client } newlaptop.local.net /home/hymie /home { simpl

Re: big estimate / faulty dump

2017-01-14 Thread j...@jgcomp.com
On Thu, Jan 12, 2017 at 09:57:42PM -0600, Jason L Tibbitts III wrote: > > "jc" == jon@jgcomp com writes: > > jc> Will try. I was still researching and hoping for some alternative, > jc> actual fix. Seems not to be an unusual situation. > > That's going to have to come either from the selin

Re: big estimate / faulty dump

2017-01-12 Thread Jason L Tibbitts III
> "jc" == jon@jgcomp com writes: jc> Will try. I was still researching and hoping for some alternative, jc> actual fix. Seems not to be an unusual situation. That's going to have to come either from the selinux policy authors or from someone who sits down and learns enough to get things wo

Re: big estimate / faulty dump

2017-01-12 Thread j...@jgcomp.com
On Thu, Jan 12, 2017 at 06:02:38PM -0600, Jason L Tibbitts III wrote: > > "jc" == jon@jgcomp com writes: > > jc> Ok, I confirmed my home dir can be backed up with selinux set to > jc> non-enforcing. > > How about just setting amanda_t to permissive as I suggested in my > previous message? A

Re: big estimate / faulty dump

2017-01-12 Thread Jason L Tibbitts III
> "jc" == jon@jgcomp com writes: jc> Ok, I confirmed my home dir can be backed up with selinux set to jc> non-enforcing. How about just setting amanda_t to permissive as I suggested in my previous message? At least then you wouldn't have to disable selinux throughout your system. # semanag
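Tibbitts's suggestion, shown here only as text since it needs root on an SELinux host, confines the change to Amanda's domain instead of disabling SELinux system-wide:

```shell
# Hedged sketch (privileged; SELinux hosts only): put just the amanda_t
# domain in permissive mode. '-d' instead of '-a' removes it again.
cmd='semanage permissive -a amanda_t'
printf '%s\n' "$cmd"
```

With `amanda_t` permissive, denials are still logged to the audit log, which helps when reporting the policy gap upstream.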

Re: big estimate / faulty dump

2017-01-12 Thread j...@jgcomp.com
On Mon, Jan 09, 2017 at 04:19:35PM -0500, Jon LaBadie wrote: > On Mon, Jan 09, 2017 at 06:12:25PM +, Debra S Baddorf wrote: [ snip ] > > Debra -- thank you!!! > Doing the above caused me to also look at the extended attributes. > > $ ls -lZ /home > total 128 > drwxrwxr-x. 28 gundi gund

Re: big estimate / faulty dump

2017-01-10 Thread Jason L Tibbitts III
> "UM" == Uwe Menges writes: UM> I filed https://bugzilla.redhat.com/show_bug.cgi?id=1280526 but I UM> don't know if this is really the same cause. Sadly that one got closed because the release was never updated to something newer than Fedora 22. I suspect it's still a problem. I'll make a

Re: big estimate / faulty dump

2017-01-10 Thread Uwe Menges
On 01/09/17 22:19, j...@jgcomp.com wrote: > Hmmm, root is "system_u", jon and all the other home dirs are "unconfined_u". > The lost+found directory is also "system_u". If this is the problem, > lost+found > should also be getting backed up and should appear in the gnutar lists. > > $ strings

Re: big estimate / faulty dump

2017-01-09 Thread j...@jgcomp.com
On Mon, Jan 09, 2017 at 06:12:25PM +, Debra S Baddorf wrote: > Well, due to the resounding silence …. let’s try some experiments: > What happens when you manually dump (or tar?) the area to a scratch area? > What size backup do you get? > Perhaps some of the files are large but marked “no

Re: big estimate / faulty dump

2017-01-09 Thread Debra S Baddorf
Well, due to the resounding silence …. let’s try some experiments: What happens when you manually dump (or tar?) the area to a scratch area? What size backup do you get? Perhaps some of the files are large but marked “no dump” or something. (Though I imagine estimates take that into account.)

big estimate / faulty dump

2017-01-07 Thread Jon LaBadie
I'm getting faulty, incomplete dumps of several DLE. For example, /home is a separate DLE, 82GB used according to df(1) and estimated by calcsize at 52GB. But the dump is only 0.5GB. I see nothing unusual in the logs, suggestions on what to research welcomed. Client is Fedora 24, amanda 3.3.9 S

Re: amadmin estimate?

2014-03-18 Thread Michael Stauffer
t716s_1/jet-export/ { > >gui-base > >include "./[a]*" > >exclude "./aguirre/" > >} > > cfile.uphs.upenn.edu jet-aguirre /mnt/jet716s_1/jet-export/ { > >gui-base > >include "./aguirre/" > >} > >

Re: amadmin estimate?

2014-03-06 Thread Nathan Stratton Treadway
" >exclude "./aguirre/" >} > cfile.uphs.upenn.edu jet-aguirre /mnt/jet716s_1/jet-export/ { >gui-base >include "./aguirre/" > } > [...] > But then > > [amandabackup@cback ~]$ amadmin jet1 estimate cfile jet-a > cfile.uphs.upenn.e

Re: amadmin estimate?

2014-03-06 Thread Michael Stauffer
OK, thanks Jean-Louis. I'll give that a try. -M On Thu, Mar 6, 2014 at 7:31 AM, Jean-Louis Martineau wrote: > > $ man amadmin > estimate [ hostname [ disks ]* ]* >Print the server estimate for the dles, each output lines have > the >

Re: amadmin estimate?

2014-03-06 Thread Jean-Louis Martineau
$ man amadmin estimate [ hostname [ disks ]* ]* Print the server estimate for the dles, each output lines have the following format: hostname diskname level size Server estimate can only be computed if you already backed up the dles a few time

amadmin estimate?

2014-03-05 Thread Michael Stauffer
Amanda 3.4.4 Hi, I'm trying to use amadmin's estimate command to get an idea if my DLE entries are correct. For example: I want one DLE with all dirs starting with a, except for ./aguirre, then another with just ./aguirre cfile.uphs.upenn.edu jet-a /mnt/jet716s_1/jet-export/ {

slow estimate on FreeBSD/ZFS

2014-02-08 Thread Gour
r/log tank/var/tmp 349G 32K 349G 0% /var/tmp tank/usr/jails/.warden-template-10.0-RELEASE-amd64 349G 178M 349G 0% /usr/jails/.warden-template-10.0-RELEASE-amd64 So, a few questions: a) what is the reason that estimate on FreeBSD/ZFS is so slow in c

Re: all estimate timed out

2013-04-12 Thread Nathan Stratton Treadway
On Fri, Apr 12, 2013 at 17:09:11 -0400, Chris Hoogendyk wrote: > The "Total bytes written:" was identical with and without the > --sparse option (right down to the last byte ;-) ). It was the time > taken to arrive at that estimate that was so very different: > > Total

Re: all estimate timed out

2013-04-12 Thread Chris Hoogendyk
Thank you, Nathan. Informative. The "Total bytes written:" was identical with and without the --sparse option (right down to the last byte ;-) ). It was the time taken to arrive at that estimate that was so very different: Total bytes written: 2086440960 (2.0GiB, 11MiB/s) real

Re: all estimate timed out

2013-04-12 Thread Nathan Stratton Treadway
than the %s value. - using the standard Sun "ls", you can do "ls -sl" and then multiply the value in the first column by 512. (I assume the "block size" used is a constant 512 in that case, regardless of file system.) * The doubling of the time

Re: all estimate timed out

2013-04-12 Thread Chris Hoogendyk
backup will be large because tar fills the holes with 0. Your best option is to use the calcsize or server estimate. Jean-Louis -- Chris Hoogendyk, Systems Administrator, Biology & Geology Departments, 140 Morrill

Re: all estimate timed out

2013-04-05 Thread Chris Hoogendyk
Thank you! Not sure why the debug file would list runtar in the form of a parameter, when it's not to be used as such. Anyway, that got it working. Which brings me back to my original problem. As indicated previously, the filesystem in question only has 2806 files and 140 directories. As I wa

Re: all estimate timed out

2013-04-05 Thread Chris Hoogendyk
OK, folks, it is the "--sparse" option that Amanda is putting on the gtar. This is /usr/sfw/bin/tar version 1.23 on Solaris 10. I have a test script that runs the runtar and a test directory with just 10 of the tif files in it. Without the "--sparse" option, time tells me that it takes 0m0.57s
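The `--sparse` slowdown Chris measured comes from tar having to read each file end-to-end to locate holes before archiving. A small sketch of the sparseness tar is probing for, comparing apparent size with allocated blocks (GNU `stat`/`dd`; the file is a throwaway example):

```shell
# Hedged sketch: make a ~1 MiB sparse file and compare its apparent size
# with the space actually allocated. GNU tar --sparse must scan the whole
# apparent extent to find the holes, which is the expensive part.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=1 seek=$((1024*1024 - 1)) 2>/dev/null
apparent=$(stat -c %s "$f")    # bytes the file claims to hold
blocks=$(stat -c %b "$f")      # 512-byte blocks actually allocated
echo "apparent=$apparent allocated=$((blocks * 512))"
```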

Re: all estimate timed out

2013-04-05 Thread Brian Cuttler
f files are not compressed then there should be no additional overhead to decompress them on read. I would also probably hesitate to enable compression of a zfs file system that was used for amanda work area, since you are storing data that has already been zip'd. Though this also has no im

Re: all estimate timed out

2013-04-05 Thread Jean-Louis Martineau
he holes with 0. Your best option is to use the calcsize or server estimate. Jean-Louis

Re: all estimate timed out

2013-04-04 Thread Nathan Stratton Treadway
On Thu, Apr 04, 2013 at 17:48:46 -0400, Chris Hoogendyk wrote: > So, I created a script working off that and adding verbose: > >#!/bin/ksh > >OPTIONS=" --create --file /dev/null --numeric-owner --directory > /export/herbarium >--one-file-system --listed-incremental"; >OPTIONS="${

Re: all estimate timed out

2013-04-04 Thread Nathan Stratton Treadway
On Thu, Apr 04, 2013 at 17:48:46 -0400, Chris Hoogendyk wrote: > If I exchange the two commands so that I'm using gtar directly rather > than runtar, then I get: > >/usr/sfw/bin/gtar: Cowardly refusing to create an empty archive >Try `/usr/sfw/bin/gtar --help' or `/usr/sfw/bin/gtar --usage

Re: all estimate timed out

2013-04-04 Thread Jean-Louis Martineau
On 04/04/2013 02:48 PM, Chris Hoogendyk wrote: I may just quietly go nuts. I'm trying to run the command directly. In the debug file, one example is: Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: Spawning "/usr/local/libexec/amanda/runtar runtar daily /usr/local/etc/amanda/tools/gtar --creat

Re: all estimate timed out

2013-04-04 Thread Chris Hoogendyk
than mutt. Any way to vet the zfs file system? Make sure it's sane and doesn't contain some kind of a bad link causing a loop? If you were to run the command used by estimate, which I believe displays in the debug file, can you run that successfully on the command line? If you run it verbo

Re: all estimate timed out

2013-04-04 Thread Brian Cuttler
Reply using thunderbird rather than mutt. Any way to vet the zfs file system? Make sure it's sane and doesn't contain some kind of a bad link causing a loop? If you were to run the command used by estimate, which I believe displays in the debug file, can you run that successfully o

Re: all estimate timed out

2013-04-04 Thread Chris Hoogendyk
hose by my mail client -- Thunderbird 17.0.5. I changed the dump type to not use compression. If tif files are not going to compress anyway, then I might as well not even ask Amanda to try. However, it never gets to the dump, because it gets "all estimate timed out." I will try breaki

Re: all estimate timed out

2013-04-04 Thread Brian Cuttler
are exceptions, even here, and I am educating the users to do things differently. However I do have lots of files on zfs in general... I don't believe that gzip is used in the estimate phase, I think that it produces "raw" dump size for dump scheduling and that tape allocation is lef

Re: all estimate timed out

2013-04-03 Thread Chris Hoogendyk
, if so, how should I go about looking for it? On 4/3/13 1:14 PM, Brian Cuttler wrote: Chris, for larger file systems I've moved to "server estimate", less accurate but takes the entire estimate phase out of the equation. We have had a lot of success with pigz rather than

Re: all estimate timed out

2013-04-03 Thread Brian Cuttler
Chris, for larger file systems I've moved to "server estimate", less accurate but takes the entire estimate phase out of the equation. We have had a lot of success with pigz rather than regular gzip, as it'll take advantage of the multiple CPUs and give parallelizatio

Re: all estimate timed out

2013-04-03 Thread Chris Hoogendyk
13: thd-32a58: sendsize: . Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: estimate time for /export/herbarium level 0: 26302.500 Nice, it took 7 hours 18 Minutes and 22 Seconds to get the level-0 estimate. Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: estimate size for /export/herbarium le

Re: all estimate timed out

2013-04-03 Thread C.Scheeder
over 200G of tif scans, compression is causing trouble? But this is just getting estimates, output going to /dev/null. Here is a segment from the very end of the sendsize debug file from April 1 (the debug file ends after these lines): Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: . Mon Apr

all estimate timed out

2013-04-03 Thread Chris Hoogendyk
is is just getting estimates, output going to /dev/null. Here is a segment from the very end of the sendsize debug file from April 1 (the debug file ends after these lines): Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: . Mon Apr 1 08:05:49 2013: thd-32a58: sendsize: estimate time for /expor

Re: failure "estimate of level x timed out"

2012-12-11 Thread Charles Stroom
Today I had put back the parameter estimate to "client" as it always had been, while etimeout is still at the new value of 14400. Backup failed again, even on 4 DLE's. I had changed "estimate" back because although the backup succeeded yesterday, it had this strange beh

Re: failure "estimate of level x timed out"

2012-12-11 Thread Alan Orth
ion is right indeed. Good to know! I really need to investigate my estimate timeouts then... Cheers, Alan On 12/11/2012 12:56 PM, Charles Stroom wrote: The planner has an "ERROR" to make the estimate, but than later the dump itself FAILs as well. So no backup is made of that par

Re: failure "estimate of level x timed out"

2012-12-11 Thread Charles Stroom
The planner has an "ERROR" to make the estimate, but then later the dump itself FAILs as well. So no backup is made of that particular DLE. Regards, Charles On Tue, 11 Dec 2012 09:04:46 +0300 Alan Orth wrote: > Hi, All. > > It's good that you brought this up on the

Re: Fw: failure "estimate of level x timed out"

2012-12-10 Thread Alan Orth
06:05 PM, Charles Stroom wrote: Hi, the forwarded email below was meant to go to the list, but I noticed later it was only to 1 recipient. Hence the forward. Regards, Charles Begin forwarded message: Date: Sat, 8 Dec 2012 22:08:48 +0100 From: Charles Stroom To: Jens Berg Subject: Re: failure "

Fw: failure "estimate of level x timed out"

2012-12-10 Thread Charles Stroom
Hi, the forwarded email below was meant to go to the list, but I noticed later it was only to 1 recipient. Hence the forward. Regards, Charles Begin forwarded message: Date: Sat, 8 Dec 2012 22:08:48 +0100 From: Charles Stroom To: Jens Berg Subject: Re: failure "estimate of level x

Re: failure "estimate of level x timed out"

2012-12-06 Thread Jens Berg
The time required for the estimate is more a matter of the number of files and not of their size. Most of the time needed for the estimation is spent in traversing through the directory structure of the file system. Could it be, that you have installed some big source-tarballs to, for example

Re: failure "estimate of level x timed out"

2012-12-05 Thread Debra S Baddorf
Charles Stroom: Have you tried making "etimeout" a bigger number, in the amanda.conf file? As the disk space gets more and more full, the time to do the estimate grows. I've got etimeout set to 2000 because of a really large disk in my past. Don't know if I st

failure "estimate of level x timed out"

2012-12-05 Thread Charles Stroom
I hope somebody can help me. Since already some time, most backups fail, mostly on one item "/usr" with the message that the estimate timed out. Only once in a while (1 out of 10), this failure does not occur and the backup completes normally. This sounds to me that in principle all

Re: Q: 'all estimate timed out' error

2012-08-26 Thread Geert Uytterhoeven
for >> years, now regularly throws the message >> >> FAILURE AND STRANGE DUMP SUMMARY: >> srv-erp3 /mnt1 lev 0 FAILED [disk /mnt1, all estimate timed out] >> planner: ERROR Request to srv-erp3 failed: timeout waiting for REP >> >> in the report, but

Re: Q: 'all estimate timed out' error

2012-08-23 Thread Geert Uytterhoeven
ND STRANGE DUMP SUMMARY: > srv-erp3 /mnt1 lev 0 FAILED [disk /mnt1, all estimate timed out] > planner: ERROR Request to srv-erp3 failed: timeout waiting for REP > > in the report, but the other disks are written properly: Did this ever get resolved? How? Since a few days, I'm g

Q: 'all estimate timed out' error

2011-10-10 Thread Albrecht Dreß
FAILED [disk /mnt1, all estimate timed out] planner: ERROR Request to srv-erp3 failed: timeout waiting for REP in the report, but the other disks are written properly: srv-erp3 /boot 1 0 0 320.00:00 10.0 0:01 24.8 srv-erp3 /home 0 0 0 160.0

Q: 'all estimate timed out' error

2011-10-10 Thread Albrecht Dreß
Hi all, I use amanda 2.5.2p1 on a Ubuntu 8.04 server to back up several machines. The backup of /one/ disk from /one/ machine, which worked flawlessly for years, now regularly throws the message FAILURE AND STRANGE DUMP SUMMARY: srv-erp3 /mnt1 lev 0 FAILED [disk /mnt1, all estimate timed

Re: tar estimate dying

2011-07-09 Thread Tim Johnson
I reran the dump separately and here is the section of the report FAILURE DUMP SUMMARY: hutton.earth.northwestern.edu / lev 0 FAILED "[disk /, all estimate timed out]" planner: ERROR Request to hutton.earth.northwestern.edu failed: timeout waiting for REP In the m

Re: tar estimate dying

2011-07-07 Thread Brian Cuttler
Jean-Louis, On Thu, Jul 07, 2011 at 03:35:58PM -0400, Jean-Louis Martineau wrote: > If you got only 'failed' for the error message, then something is really > broken, I would like to see it, can you post the complete line from the > 'FAILURE DUMP SUMMARY' section of the report. I misspoke, this

Re: tar estimate dying

2011-07-07 Thread Jean-Louis Martineau
Jul 2011, Jean-Louis Martineau wrote: Can you post the exact error message you get from amanda? You said 'broken pipe', where do you get it. Telling 'tar estimate dying' or 'broken pipe' is useless if you don't show how/where you get them. The sendsize debug

Re: tar estimate dying

2011-07-07 Thread Debra Baddorf
tiate the connection itself. Something to look for Deb Baddorf On Jul 7, 2011, at 2:14 PM, Jean-Louis Martineau wrote: > Can you post the exact error message you get from amanda? > You said 'broken pipe', where do you get it. > > > Telling 'tar estimate dying'

Re: tar estimate dying

2011-07-07 Thread Tim Johnson
working for the last six years. On Thu, 7 Jul 2011, Jean-Louis Martineau wrote: Can you post the exact error message you get from amanda? You said 'broken pipe', where do you get it. Telling 'tar estimate dying' or 'broken pipe' is useless if you don't show ho

Re: tar estimate dying

2011-07-07 Thread Jean-Louis Martineau
Can you post the exact error message you get from amanda? You said 'broken pipe', where do you get it. Telling 'tar estimate dying' or 'broken pipe' is useless if you don't show how/where you get them. The sendsize debug file looks good, please post the

Re: tar estimate dying

2011-07-07 Thread Brian Cuttler
). After about 30 minutes the tar > process just dies and the estimate doesnt seem to get to the > server. The / is only about 10gig and until recently been > fine. These are old computers, however the /home part. is > 20gig and has been fine. > >I am using tar (now 1.20 after

tar estimate dying

2011-07-07 Thread Tim Johnson
the tar process just dies and the estimate doesnt seem to get to the server. The / is only about 10gig and until recently been fine. These are old computers, however the /home part. is 20gig and has been fine. I am using tar (now 1.20 after downgrading from 1.23 for testing) I have also tried

Re: NOTES: big estimate

2011-03-15 Thread Brian Cuttler
Jon, We also see 'big estimate' output for some partitions. I assumed some sort of bounding but never looked into it any further. Reminds me of the O (big-oh) and omega we'd compute for algorithms back in computer science class [upper and lower bounds limits for algorithms]. S

NOTES: big estimate

2011-03-15 Thread Jon LaBadie
My daily reports regularly contain lines like the following (commas added): NOTES: big estimate: lastchance Cdrive 1 est: 6,944,416M out 975,940M big estimate: mumsxp Cdrive 1 est: 3,303,904M out 288,480M big estimate: vostxp Cdrive 1

Re: Error: disk /usr, all estimate failed

2011-02-08 Thread Jean-Louis Martineau
sendsize[77723]: time 5.465: Create: tar -cf [filenames...] sendsize[77723]: time 5.465: Help:tar --help sendsize[77723]: time 5.466: . sendsize[77723]: estimate time for /usr level 0: 0.154 sendsize[77723]: no size line match in /usr/bin/tar output for "/usr" send

Re: Error: disk /usr, all estimate failed

2011-02-07 Thread Jon LaBadie
On Mon, Feb 07, 2011 at 04:31:57PM -0500, Mike Neimoyer wrote: > Okay, looking further, I see the following in the file > ... > > Just a guess here, but I presume that since amanda is waiting on a > response from tar, and tar has errored out, that amanda skips this > item in its DLE because of t

Re: Error: disk /usr, all estimate failed

2011-02-07 Thread Charles Curley
On Mon, 07 Feb 2011 16:31:57 -0500 Mike Neimoyer wrote: > Just a guess here, but I presume that since amanda is waiting on a > response from tar, and tar has errored out, that amanda skips this > item in its DLE because of this error. > > Does this sound reasonable? It sounds reasonable to me

Re: Error: disk /usr, all estimate failed

2011-02-07 Thread Mike Neimoyer
sendsize[77723]: time 5.466: . sendsize[77723]: estimate time for /usr level 0: 0.154 sendsize[77723]: no size line match in /usr/bin/tar output for "/usr" sendsize[77723]: . sendsize[77723]: estimate size for /usr level 0: -1 KB Just a guess here, but I presume

Error: disk /usr, all estimate failed

2011-02-07 Thread Mike Neimoyer
gular incremental and even our weekly and Monthly runs going off successfully. But now one of my DLE's for the Amanda server itself shows an error ("/usr lev 0 FAILED [disk /usr, all estimate failed]") in the daily report, even though the selfcheck.*.debug file shows no errors, and an a

Re: DLE size change with "estimate server"

2010-09-16 Thread Jean-Louis Martineau
d, but I find myself wondering how the average size of the DLE in the estimate phase is lowered once the average has gotten above the tape size cut-off. What I mean is, since actually live estimates are not performed and archive data is relied on for the estimate, how does the new lower size get averaged

Re: DLE size change with "estimate server"

2010-09-16 Thread Brian Cuttler
larger than the size of a tape. > > > >This has been corrected, but I find myself wondering how > >the average size of the DLE in the estimate phase is lowered > >once the average has gotten above the tape size cut-off. > > > >What I mean is, since actually live e

DLE size change with "estimate server"

2010-09-16 Thread Brian Cuttler
For Amanda 2.6.1p1-20091023. We had a problem where a user caused a DLE to increase to larger than the size of a tape. This has been corrected, but I find myself wondering how the average size of the DLE in the estimate phase is lowered once the average has gotten above the tape size cut-off

Re: DLE size change with "estimate server"

2010-09-16 Thread Brian Cuttler
Dustin, On Thu, Sep 16, 2010 at 12:22:42PM -0500, Dustin J. Mitchell wrote: > On Thu, Sep 16, 2010 at 11:59 AM, Brian Cuttler wrote: > > What I mean is, since actually live estimates are not performed > > and archive data is relied on for the estimate, how does the > &g

Re: estimate vs actual

2010-06-18 Thread Brian Cuttler
d a similar problem or not, to catch it I had to run amstatus while the run was still active. I think it did not have this feature though, I think the estimate was equal to the dump size, which for level 1 was just unreasonably large (level 0 sized). >

Re: estimate vs actual

2010-06-18 Thread Nathan Stratton Treadway
On Fri, Jun 18, 2010 at 10:04:40 -0400, Brian Cuttler wrote: > We just updated gtar to 1.22. > What version were you using before you upgraded? Nathan Nathan Stratton Treadway

estimate vs actual

2010-06-18 Thread Brian Cuttler
Amanda 2.4.4 on Solaris 10/x86. We just updated gtar to 1.22. We backup only 2 DLE with this server, server==client. I'm noticing that the estimate for non-level-0 is not awful, around 50 Gig, however we are actually backing up the equiv of a level 0 Not sure where to look for the sour

Re: Estimate crossing file system boundaries

2010-06-03 Thread Brian Cuttler
>subdirectory to a separate mount point, I know that gtar would not > >(by default) cross file system boundaries. > > > >star seems to be trying to cross that boundary, at least for the > >estimate phase if not the actual backup phase. > > > >NOTES:

Re: Estimate crossing file system boundaries

2010-06-03 Thread Jean-Louis Martineau
ms to be trying to cross that boundary, at least for the estimate phase if not the actual backup phase. NOTES: planner: disk cascade:/cascadep/export/ghost, full dump (951917180KB) will be larger than available tape space, you could define a splitsize planner: cascade /cascadep/export/ghost 2

Estimate crossing file system boundaries

2010-06-03 Thread Brian Cuttler
"./[ghost-archive]*" } Rather than using split size, we thought to move the ghost-archive subdirectory to a separate mount point, I know that gtar would not (by default) cross file system boundaries. star seems to be trying to cross that boundary, at least for the estimate phase
