I've now got a working dump configuration - or at least I was able to run a dump [to completion] on a subset of the data. I'm now expanding that test.
I can see now how important it is to get the DLEs right and manageable, and the implications the data structure has for Amanda. I might have quite a lot to do to define them all for the full archive when I get to that; I'm likely to split up by host, with labels, to make it a bit easier.

I think I've been able to form something vaguely sensible for the first server (in terms of the disklist) by first running a du and examining the output, then forming the DLEs using includes/excludes plus a catch-all (hope that works! I'll find out when I've done the dump - ongoing). I can see the problem of large data buried deep in the directory structure - not so nice from a DLE/disklist definition point of view.

The problem now is throughput. My average dump rate on the test I've run is quite poor - I'm not surprised, though, as this client is limited by the network. That will improve at some point when I can run a test over a separate logical network.

The average write throughput to tape (HP LTO-6 + Quantum SuperLoader 3 + CentOS 7) is around 83 MB/s. That's the best I've seen from my testing so far. This is with 5 DLEs ranging from around 100 to around 200 GB; I've been working with that same data throughout.

Current settings:

  LTO-6 tape
  part_size set to 200 GB
  chunksize set to 500 MB
  holding disk usable 2000 GB (2 TB)
  software compression off (Amanda)
  hardware compression on (tape)
  record no
  strategy noinc
  skip-incr yes
  auth "ssh"
  GNUTAR

I'm starting to think more CPU (clock) plus a striped RAID for the Amanda holding disk to feed the drive would help. What other factors should I look at? What are the best ways to tune it?

I'm still not convinced about my tape profile (amtapetype) - it returned quite a low speed... What kernel module are you using?

thanks

David

--------------------
David Simpson - Computing Support Officer
IBME - Institute of Biomedical Engineering
Old Road Campus Research Building
Oxford, OX3 7DQ
Tel: 01865 617697 ext: 17697
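P.S. In case it helps anyone comment on the includes/excludes-plus-catch-all approach, here is roughly the shape of what I mean - a sketch only, with hypothetical host, directory, and dumptype names ("bigserver", "projA"/"projB", "my-ssh-tar"), not my actual disklist:

```
# disklist sketch: split one large mountpoint (/data) into several DLEs.
# "my-ssh-tar" stands in for a dumptype defined in amanda.conf
# (program "GNUTAR", auth "ssh").

# Two big subtrees get their own DLEs via include patterns
# (patterns are relative to the DLE directory, hence "./"):
bigserver /data-projA /data {
    my-ssh-tar
    include "./projA"
}
bigserver /data-projB /data {
    my-ssh-tar
    include "./projB"
}

# A catch-all DLE excludes the subtrees already covered above,
# so nothing is dumped twice and nothing new gets missed:
bigserver /data-rest /data {
    my-ssh-tar
    exclude "./projA"
    exclude "./projB"
}
```

The catch-all is the part I'm least sure of - if a new large directory appears under /data it lands in /data-rest, which could blow past part_size until I notice and split it out.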

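For reference, a quick back-of-the-envelope on what the 83 MB/s means for a full run over my test set. The per-DLE size (~150 GB as a midpoint of the 100-200 GB range) and the ~160 MB/s LTO-6 native rate are assumptions, not measurements:

```shell
#!/bin/sh
# Rough dump-time estimate: 5 DLEs of ~150 GB each (assumed midpoint),
# comparing my observed tape rate against the LTO-6 native spec rate.
total_mb=$((5 * 150 * 1024))   # ~750 GB expressed in MB
observed=83                    # MB/s seen in my tests
native=160                     # MB/s, LTO-6 native (uncompressed) spec

echo "at ${observed} MB/s: $((total_mb / observed / 60)) min"
echo "at ${native} MB/s: $((total_mb / native / 60)) min"
```

So the gap between observed and native rate is roughly 74 minutes on this data set alone - which is why I'm asking about holding-disk and CPU tuning before scaling up to the full archive.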