Bernhard,
Use the attached patch for 3.3.
Let me know if it improves the balancing, or if some DLEs get promoted too
often.
Jean-Louis
On 03/27/2012 01:27 PM, Bernhard Erdmann wrote:
Hi Jean-Louis,
will your patch apply to Amanda version 3.3.1?
For several months I have had an ongoing similar problem with one Amanda
configuration: one big DLE (120 GB), two DLEs at 75 and 50 GB, and about
35 DLEs at 20-35 GB each.
Amanda 3.3.1 does not move the biggest DLE to a day on which only this
DLE is full-dumped, nor does it shuffle the smaller DLEs around so that
only the biggest DLE is full-dumped on a particular day. Instead, the
second medium-sized DLE (50 GB) is always full-dumped on the same day as
the biggest DLE.
This configuration has been stable for more than half a year, i.e. 4-5
dumpcycles have passed.
$ amadmin be balance

 due-date  #fs     orig kB      out kB   balance
------------------------------------------------
 3/27 Tue    2    50952368    50952368     -9.5%
 3/28 Wed    2    49318040    49318040    -12.4%
 3/29 Thu    1    73811810    73811810    +31.1%
 3/30 Fri    2    40733720    40733720    -27.6%
 3/31 Sat    2    55579040    55579040     -1.3%
 4/01 Sun    2   180292110   180292110   +220.3%
 4/02 Mon    1    44029300    44029300    -21.8%
 4/03 Tue    2    67804280    67804280    +20.4%
 4/04 Wed    2    48122090    48122090    -14.5%
 4/05 Thu    2    55214820    55214820     -1.9%
 4/06 Fri    2    58847690    58847690     +4.5%
 4/07 Sat    2    47259350    47259350    -16.1%
 4/08 Sun    1    39843350    39843350    -29.2%
 4/09 Mon    1    41888900    41888900    -25.6%
 4/10 Tue    2    58437479    58437479     +3.8%
 4/11 Wed    2    55687590    55687590     -1.1%
 4/12 Thu    0           0           0       ---
 4/13 Fri    2    56478270    56478270     +0.3%
 4/14 Sat    2    61045722    58645042     +4.2%
 4/15 Sun    1    44408210    44408210    -21.1%
 4/16 Mon    2    56126655    51227393     -9.0%
 4/17 Tue    3    65676196    53594814     -4.8%
 4/18 Wed    2    64966230    64966230    +15.4%
 4/19 Thu    4    81805259    55246746     -1.9%
 4/20 Fri   10    67815815    54983817     -2.3%
------------------------------------------------
 TOTAL      54  1466144294  1407372459  56294898
 (estimated 25 runs per dumpcycle)
Quoting Jean-Louis Martineau <martin...@zmanda.com> on Fri, 23 Mar
2012 08:18:51 -0400:
Hi Gene,
Can you try the attached patch? (it is lightly tested and uncommitted).
Jean-Louis
On 03/21/2012 12:48 PM, gene heskett wrote:
Greetings from the canary;
One of the things that constantly gets under my skin is Amanda's apparent
inability to juggle the backup order so as to balance the sizes of the
backups from night to night. I have fussed about this before without
arriving at a solution, but it seems to me Amanda has gone dumb with all
the rewrites in the last 3 or 5 years.

I am seemingly locked into a cadence of 4 nights of doing about 15 GB a
night, followed by a night when it does the largest 5 or so DLEs all on
the same run, which makes that run 45+ GB.

The biggest one is /usr/movies, at a bit over 16 GB. If I could get that
one separated from the other larger ones, it would help. Sure, I could
comment that DLE out for a day or two, or I could force a level 0 on
Friday. The point is that 5 years ago Amanda would do this all by itself,
and it is no longer even making the effort, for at least the last 2 or 3
years.
Here is the output of amadmin Daily balance:
 due-date  #fs  orig MB  out MB  balance
-----------------------------------------
 3/21 Wed    2    22104    9865   -32.9%
 3/22 Thu    8     6787    3178   -78.4%
 3/23 Fri   12     1117    1065   -92.8%
 3/24 Sat    9    28015   14595    -0.7%
 3/25 Sun    5    48037   44776  +204.7%
-----------------------------------------
 TOTAL      36   106060   73479    14695
 (estimated 5 runs per dumpcycle)
No huge files that could disturb it have been added or deleted in at least
a month.

Now it used to be, years ago, that Amanda's planner would come within
20-50 megs of filling a 4 GB tape every night for weeks at a time without
ever hitting an EOT. Now it seems as if Amanda is making no effort to move
that 16 GB movie DLE (it's weddings I've shot, all since I moved to vtapes
years ago) to Friday or even Thursday, where it would fit quite nicely in
the 30 GB I give it as a virtual tape size.

Do I have it being forced to maintain the existing terrible schedule by
some option with a hidden interaction in my amanda.conf? Please name that
option if there is such a beast.
Thanks.
Cheers, Gene
diff --git a/server-src/planner.c b/server-src/planner.c
index 15fa1c4..9808fc3 100644
--- a/server-src/planner.c
+++ b/server-src/planner.c
@@ -2801,6 +2801,7 @@ static int promote_highest_priority_incremental(void)
int check_days;
int nb_today, nb_same_day, nb_today2;
int nb_disk_today, nb_disk_same_day;
+ gint64 same_day_lev0;
char *qname;
/*
@@ -2829,11 +2830,14 @@ static int promote_highest_priority_incremental(void)
nb_same_day = 0;
nb_disk_today = 0;
nb_disk_same_day = 0;
+ same_day_lev0 = 0;
for(dp1 = schedq.head; dp1 != NULL; dp1 = dp1->next) {
- if(est(dp1)->dump_est->level == 0)
+ if (est(dp1)->dump_est->level == 0) {
nb_disk_today++;
- else if(est(dp1)->next_level0 == est(dp)->next_level0)
+ } else if(est(dp1)->next_level0 == est(dp)->next_level0) {
nb_disk_same_day++;
+ same_day_lev0 += est(dp1)->last_lev0size;
+ }
if(strcmp(dp->host->hostname, dp1->host->hostname) == 0) {
if(est(dp1)->dump_est->level == 0)
nb_today++;
@@ -2847,9 +2851,11 @@ static int promote_highest_priority_incremental(void)
continue;
/* do not promote if overflow balanced size and something today */
+ /* and if today full will be larger than that day full */
/* promote if nothing today */
- if((new_lev0 > (gint64)(balanced_size + balance_threshold)) &&
- (nb_disk_today > 0))
+ if ((new_lev0 > (gint64)(balanced_size + balance_threshold)) &&
+ (new_lev0 > same_day_lev0 - level0_est->csize) &&
+ (nb_disk_today > 0))
continue;
/* do not promote if only one disk due that day and nothing today */