Things are beginning to run smoothly. Amanda has caught up with full backups, and last night's run completed by 7:57 this morning with a total of 5.2TB of dumps and no problems.

Two issues came up in the final stages of getting this working.

*1.* After adding a PSEUDO tapetype with a length of ~15TB and making it the default, I was still getting only ~2.5TB of dumps. I couldn't tell how Amanda was deciding on a dump size; it seemed to be planning for one LTO6, even though that was no longer the default tapetype.

Looking through the amanda.conf man page, I came upon the parameter maxdumpsize, which in my amanda.conf had been set to -1. I changed it to ~15TB, and that night got respectable filling of both the LTO7 and the LTO6, totaling over 9TB of dumps. This works, but it seems brittle: if I change the number of tapes I'm using, I will have to calculate a new maxdumpsize manually and put it in (presuming I remember).
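For reference, the relevant pieces of my amanda.conf now look roughly like this (a sketch; the tapetype name and sizes are mine, and the capacity is just my estimate of one LTO7 plus one LTO6 with compression):

```
# pseudo tapetype sized to roughly one LTO7 + one LTO6
define tapetype PSEUDO {
    length 15 tbytes    # use the gbytes equivalent if your version
                        # doesn't accept the tbytes suffix
}

tapetype PSEUDO

# with the default of -1, the planner seemed to size the run for a
# single LTO6; setting this explicitly fixed it, but it has to be
# recalculated by hand whenever the tape mix or runtapes changes
maxdumpsize 15 tbytes
```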

*2.* A substantial number of DLEs were being written to both tapes. The ones going only to the LTO7 were using dumptypes that had the tag set explicitly in the dumptype. The ones going to both were supposed to have inherited the tag from the global dumptype, but apparently didn't, even though they did inherit other parameters such as auth.

When I explicitly added the tag to each of those dumptypes, they started going only to the intended storage.
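For anyone hitting the same thing, the shape of the workaround looked like this (dumptype and tag names here are illustrative, not my actual config):

```
define dumptype global {
    auth "bsdtcp"          # inherited fine by child dumptypes
    tag "lto7"             # supposedly inherited too, but apparently not
}

define dumptype comp-client-tar {
    global                 # inherits auth as expected
    tag "lto6"             # without this explicit tag, these DLEs went to
                           # both storages; with it, only to the lto6 storage
    compress client fast
}
```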


Another interesting aside: on Sunday night, I was watching the progress with amstatus. Amanda had nearly finished, but still had two DLEs to complete just 15 minutes before the next day's run was due to begin. I ran `amcleanup -k daily` to terminate it and clean up so that the next day's run would be able to take off. On Monday night, I was watching again and had a similar situation. This time I let it go, because I wanted to see how much it could fit on the tape.

On Tuesday, I noticed that while the previous run had completed at 12:35am on 9/5, the next run had nevertheless kicked off at 11:30pm on 9/4, so both were running at the same time for about an hour. One consequence was that for the second run, none of the LTO6 DLEs got written to tape, because that tape drive was unavailable when the run was initiated. The LTO7 DLEs were fine, because there are two drives on that tape library. Since I have a total of 8TB of holding space, nothing was lost, and the LTO6 dumps got flushed to tape the next day.
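To keep the runs from overlapping again, I'm considering serializing them in cron rather than relying on the previous run finishing in time (a sketch using util-linux flock; the lock path and schedule are hypothetical):

```
# crontab entry: if the previous night's run still holds the lock,
# flock -n exits immediately instead of starting a second amdump
30 23 * * * flock -n /var/lock/amanda-daily.lock amdump daily
```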


On 9/1/17 3:25 PM, Jean-Louis Martineau wrote:
> On 01/09/17 03:16 PM, Chris Hoogendyk wrote:
>> hmm. That is kind of a problem. I should have more than sufficient space
>> now, but the allocation of where it goes could overflow if the planning
>> is blind to the storage allocations. Maybe that is something to be
>> developed?
> As I already wrote, it is an unimplemented feature.
>
>> As to where to report the storage use, the Amanda report doesn't have
>> much space for another column. However, the Dump Summary could be broken
>> into separate Dump Summary tables for each storage. I can obviously look
>> at my configuration files to see where things go, but it would be
>> reassuring to have the Amanda report reflect that.
> The Dump Summary is for what is dumped; they could still be in the
> holding disk and not on tape.
>
> Maybe we could completely rewrite the columnspec to be more like a printf:
>   "%12.0H %-11.0D %2L .... %S"
>
> H is for host
> D is for disk
> L is for level
> ...
> S is for storage
>
> We can then add many fields.


As for my current configuration, I created a pseudo tapetype that has a length equal to the sum of what I expect to be able to write to one LTO6 plus one LTO7 tape. I put that in as the global tapetype, with runtapes=1. Each storage still references its own tpchanger and tapetype.


On 9/1/17 2:31 PM, Jean-Louis Martineau wrote:
> On 01/09/17 02:10 PM, Chris Hoogendyk wrote:
> > OK.
> >
> > So, how does the planner plan for what goes on each storage?
> It doesn't; it uses the size and plans to dump less than that.
>
> > That is, if I set up a pseudo tapetype
> > for global and give it a length of, say, 12.5TB; how will it know that
> > the uncompressed DLEs are
> > targeted to the LTO7 and the compressed DLEs are targeted at the LTO6?
> > What if the proportions came
> > out wrong (say, 8TB of DLEs intended for the LTO7 and 4.5TB of DLEs
> > intended for the LTO6), but the
> > total was within the 12.5TB?
> That's the problem, some DLEs will fail unless you have enough holdingdisk.
> I never liked the idea that Amanda delays some full dumps because it does
> not have enough tape space; it gets worse every day.
> The solution to never hit that problem is to always provide enough tape,
> you could increase the runtapes of both storage.
>
> >
> > Also, there is no commentary in the Amanda email report indicating
> > where each DLE went, just the
> > overall amount of data that was written to each tape. I presume DLEs
> > are going to the intended
> > tapes, but . . . .
> Where would you like the information to be printed.
>
> You can run 'amadmin CONF find'; it should list the storage for each DLE.
>
> Jean-Louis
>
>
> *Disclaimer*
>
> This message is the property of *CARBONITE, INC.* <http://www.carbonite.com>
> and may contain confidential or privileged information.
>
> If this message has been delivered to you by mistake, then do not copy or
> deliver this message to anyone. Instead, destroy it and notify me by reply
> e-mail.
>

--
---------------

Chris Hoogendyk

-
   O__  ---- Systems Administrator
  c/ /'_ --- Biology & Geosciences Departments
 (*) \(*) -- 315 Morrill Science Center
~~~~~~~~~~ - University of Massachusetts, Amherst

<[email protected]>

---------------

Erdös 4






