Well, I guess it worked. Backups finish before I sit down at the
breakfast table computer in the morning. ;-)
I haven't yet gotten the gumption to check in the middle of the night to
see how the processes are playing out on the servers. I suppose I could
be lazy and cron out a couple of amstatus reports during the night, but
I'd be more content actually looking at the servers and seeing what's
going on. I'll just have to bite the bullet and wake up. ;-)
---------------
Chris Hoogendyk
-
O__ ---- Systems Administrator
c/ /'_ --- Biology & Geology Departments
(*) \(*) -- 140 Morrill Science Center
~~~~~~~~~~ - University of Massachusetts, Amherst
<[email protected]>
---------------
Erdös 4
Chris Hoogendyk wrote:
Just for further reference:
I don't have an issue with my current speed; I was just wondering
about selectively cranking it up. I have two 300G holding disks, GigE
end to end among the servers, and an AIT5 tape library, and I'm
already backing up servers in parallel.
Anyway, I just found the white paper, "Configuring Amanda for Parallel
Backups" by Gregory Grant, on the Zmanda Network under Resources /
White Papers -- http://network.zmanda.com/amanda-whitepapers.php. One
item I had been missing was the maxdumps parameter in the dumptype,
which defaults to 1 (and is per client).
I stepped maxdumps up to 4, changed inparallel from 4 to 8, and raised
netusage to 20000 (up from 8000, which was itself up from much lower
values a couple of years back). The white paper didn't mention using
spindle numbers in the disklist. In my current configuration, the most
vulnerable clients have the same spindle number on all their DLEs; I'm
assuming Amanda will honor that and refrain from launching parallel
dumps on those clients.
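For anyone following along, here's roughly what those settings look
like together. The parameter names are from the Amanda documentation;
the hostnames, dumptype name, and values are placeholders, not my
actual config:

```
# amanda.conf (server side)
inparallel 8           # up to 8 dumpers running at once
netusage 20000 Kbps    # overall bandwidth cap across all dumps

define dumptype comp-user {
    program "GNUTAR"
    compress client fast
    maxdumps 4         # up to 4 simultaneous dumps per client
}

# disklist -- DLEs on the same host that share a spindle number
# are not dumped in parallel with each other
fragilehost  /export/home  comp-user  1
fragilehost  /var          comp-user  1
```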
The white paper also said nothing about the network environment. It
talked about bandwidth usage in Kbps, but not about what values make
sense in, say, a 10Mb, 100Mb, or GigE environment. I'm assuming my
20,000 Kbps is quite conservative on GigE, and that the overall
changes will result in more simultaneous dumps and faster overall
backups. I'll watch and see.
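Taking the Kbps units at face value (that's how the white paper quotes
netusage), a quick back-of-envelope check of how conservative 20,000
really is against a raw GigE line rate:

```python
# Back-of-envelope check; assumes netusage is in Kbits/sec,
# per the white paper's usage.
netusage_kbps = 20_000          # current amanda.conf setting
gige_kbps = 1_000_000           # raw Gigabit Ethernet line rate

fraction = netusage_kbps / gige_kbps
print(f"netusage caps dumps at {fraction:.0%} of a GigE link")  # 2%
```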
The white paper was listed as revision 01 but not dated (blog
references indicate 2007), so perhaps these points would make good
additions for a new revision?
Chris Hoogendyk wrote:
Just an offhand question.
I now have my T5220 servers in full operation and have decommissioned
the E250s they replaced. Backups are noticeably faster now. What I'm
wondering is this: the T5220s have 8 cores with 8 threads per core, as
well as 8 encryption accelerators tied into the SSL libraries. They
easily carry a higher workload with headroom to spare. If I bumped up
inparallel (currently 4), the T5220s would be quite happy with that.
One of them has 86 drive partitions hanging off it, as well as an
iSCSI array configured with ZFS.
However, I don't want to stress out any of the older servers that are
still running, some on hardware never meant for servers.
I take it there aren't any nuances to the inparallel configuration?
Hmm. I'm using spindle numbers in my disklist. I suppose I could
juggle those to limit parallel dumping on some servers, while cranking
up the global setting. Comments? Suggestions?
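For illustration, a sketch of the disklist juggling I mean. Hostnames
and mount points are made up, and I believe -1 is the "no spindle
restriction" value per the Amanda docs:

```
# disklist: the last field is the spindle number, scoped per host.
# Giving every DLE on a fragile host the same spindle number keeps
# Amanda from dumping them in parallel, even with a high global
# inparallel.
t5220     /export/zfs1  comp-root  -1   # -1 = no spindle restriction
t5220     /export/zfs2  comp-root  -1
old-e250  /             comp-root   1   # serialized: same spindle
old-e250  /usr          comp-root   1
```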