> On Jul 23, 2021, at 12:35 PM, Jose M Calhariz 
> <[email protected]> wrote:
> 
> On Fri, Jul 23, 2021 at 10:45:49AM +0200, Stefan G. Weichinger wrote:
>> Am 22.07.21 um 13:54 schrieb Stefan G. Weichinger:
>> 
>>> I defined this in amanda.conf
>>> 
>>> define interface chelsio {
>>> 
>>>     comment "10G NIC"
>>> 
>>>     use 10 Gbps
>>> 
>>> }
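>>>
>>> (For reference, and assuming the usual disklist layout: the interface is
>>> assigned per DLE in the disklist, roughly as below; hostname, path and
>>> dumptype are placeholders.)
>>>
>>> client1.example.com  /data  comp-user-tar  -1  chelsio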
>> 
>> More tuning ahead, not yet verified (= waiting for the next amdump):
>> 
>> 
>> inparallel 6
>> dumporder "BTBTBT"
>> netusage  10 Gbps
>> device-output-buffer-size 2048k # LTO6 drive
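>> 
>> (Sketch, not verified here: the output buffer size above is usually sized
>> relative to the tape block size set in the tapetype; the values below are
>> only illustrative, and the existing LTO6 tapetype may already define them.)
>> 
>> define tapetype LTO6 {
>>     comment "LTO-6 drive"
>>     blocksize 2048 kbytes
>>     length 2500 gbytes
>> }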
> 
> How many clients do you have?
> 
> 
> 
>> 
>> In "define application-tool app_amgtar" I set:
>> 
>> property "TAR-BLOCKSIZE" "1024"
>> 
>> Quick tests show that this improves the creation of the "tar-stream" on the
>> client.
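>> 
>> (For context, the surrounding stanzas look roughly like this; the dumptype
>> name below is a placeholder, and only the property line is the change
>> being tested.)
>> 
>> define application-tool app_amgtar {
>>     plugin "amgtar"
>>     property "TAR-BLOCKSIZE" "1024"
>> }
>> 
>> define dumptype my-gtar-dump {
>>     program "APPLICATION"
>>     application "app_amgtar"
>> }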
>> 
>> Restore test: todo. Sure.
>> 
>> -
>> 
>> What I can't yet explain:
>> 
>> the holdingdisk (Samsung SSD 860 EVO 2TB)
>> 
>> * it is attached to a MegaRAID SAS 2008 controller. The local admin created
>> a RAID0 device on the controller, which gives us /dev/sdb in Linux
>> 
>> * one partition /dev/sdb1, XFS
>> 
>> UUID=781a9caf-39f2-4e5a-b7cd-f2b320e06b74 /mnt/amhold   xfs noatime 0 0
>> 
>> * # cat /sys/block/sdb/queue/scheduler
>> 
>> [none] mq-deadline
>> 
>> I can write to it via dd with over 200 MB/s:
>> 
>> root@backup:/mnt/amhold# dd if=/dev/zero of=/mnt/amhold/testfile bs=1G
>> count=1 oflag=direct
>> 
>> 1+0 records in
>> 
>> 1+0 records out
>> 
>> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.52339 s, 237 MB/s
>> 
>> OK; slower with smaller blocks:
>> 
>> root@backup:/mnt/amhold# dd if=/dev/zero of=/mnt/amhold/testfile bs=4M
>> count=1000 oflag=direct
>> 
>> 1000+0 records in
>> 
>> 1000+0 records out
>> 
>> 4194304000 bytes (4.2 GB, 3.9 GiB) copied, 23.2486 s, 180 MB/s
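>> 
>> (Side note, not part of the tests above: a tool like fio gives more control
>> over the write pattern than dd; the file name and job parameters below are
>> only an example.)
>> 
>> fio --name=holdingtest --filename=/mnt/amhold/fiotest --rw=write \
>>     --bs=1M --size=4g --direct=1 --ioengine=libaio --iodepth=16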
> 
> I was expecting much more from the SSD, but I may be thinking of
> faster NVMe drives.
> 
> Just out of curiosity, what does "hdparm -tT /dev/sdb" give?
> 
>> 
>> -
>> 
>> When I amdump a single DLE from another client (no compression) and watch
>> with "atop", I see /dev/sdb1 at 100% "load" ... around 130 MB/s at most,
>> with the write rate decreasing over time.
>> 
>> The client's storage is able to deliver around 300 MB/s.
>> 
>> And the NICs are 10G each.
>> 
>> Where is the bottleneck? Or am I expecting too much?
>> 
>> The CPU cores are NOT fully loaded ...
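>> 
>> (One way to rule out the network path, not tried in this thread: a quick
>> iperf3 run between client and server; the hostname below is a placeholder.)
>> 
>> # on the backup server
>> iperf3 -s
>> # on the client
>> iperf3 -c backup.example.com -t 30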
>> 
> 
> On my setup I have more than 190 clients using auth=ssh, with a
> 10 Gb/s NIC.
> 
> On my amanda.conf for example:
> 
> inparallel 62
> dumporder "TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTtt"
> 
> Kind regards
> Jose M Calhariz
> 
> 
> -- 
>       I don't need lawyers to tell me what I must not do.
>       I hire them to tell me how to do what I want to do.
>               -- J. Pierpont Morgan


Are you STORING your backups on the holding disk, or using it in the original
sense, where backups only live there long enough to be copied to a tape (or a
virtual tape on disk)?

I’m wondering about that RAID part of the equation. If this is really a
temporary holding disk, doesn’t the copying inherent in a RAID setup get in
the way of speed?

I’m not doing Debian, but my holding disks are just big physical disks that
are used for nothing else. At the end of every backup run, they are empty
again.
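
(For context, a holding disk used in that temporary sense is declared in
amanda.conf roughly like this; the path and sizes are illustrative only.)

holdingdisk hd1 {
    comment "spool area, flushed to tape after each run"
    directory "/mnt/amhold"
    use -100 mb        # use everything except 100 MB
    chunksize 1 gb
}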

Deb Baddorf
Fermilab

