On 22.07.21 at 13:54, Stefan G. Weichinger wrote:

I defined this in amanda.conf:

define interface chelsio {

     comment "10G NIC"

     use 10 Gbps

}
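
For completeness, this is roughly how such an interface definition gets referenced from a dumptype (the dumptype name "global-10g" is just a made-up example here):

define dumptype global-10g {
     program "GNUTAR"
     interface "chelsio"
}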

More tuning ahead, not yet verified (= waiting for the next amdump):


inparallel 6
dumporder "BTBTBT"
netusage  10 Gbps
device-output-buffer-size 2048k # LTO6 drive
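
A server-side sanity check after such edits could simply be something like this ("daily" stands in for the actual config name):

amcheck -s daily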

In "define application-tool app_amgtar" I set:

property "TAR-BLOCKSIZE" "1024"

Quick tests show that this speeds up creating the tar stream on the client.
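
For reference, the whole block could look roughly like this (comment and exact plugin line depend on the local setup):

define application-tool app_amgtar {
     comment "amgtar with larger tar blocksize"
     plugin "amgtar"
     property "TAR-BLOCKSIZE" "1024"   # presumably tar's blocking factor, i.e. 1024 x 512-byte records
}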

Restore test: still to do, of course.
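
A minimal check could be to pull one DLE back with amfetchdump, roughly like this (config, hostname and disk are placeholders; it restores into the current directory):

cd /tmp/restore-test
amfetchdump daily client.example.com /data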

-

What I can't yet explain:

the holdingdisk (Samsung SSD 860 EVO 2TB)

* it is attached to a MegaRAID SAS 2008 controller. The local admin created a RAID0 device on the controller, which gives us /dev/sdb in Linux

* one partition /dev/sdb1, XFS

UUID=781a9caf-39f2-4e5a-b7cd-f2b320e06b74 /mnt/amhold   xfs noatime 0 0

* # cat /sys/block/sdb/queue/scheduler

[none] mq-deadline
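
If mq-deadline is worth a try, the scheduler can be switched at runtime (not persistent across reboots):

echo mq-deadline > /sys/block/sdb/queue/scheduler
cat /sys/block/sdb/queue/scheduler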

I can write to it via dd with over 200 MB/s:

root@backup:/mnt/amhold# dd if=/dev/zero of=/mnt/amhold/testfile bs=1G count=1 oflag=direct

1+0 records in

1+0 records out

1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.52339 s, 237 MB/s

OK, slower with smaller blocks:

root@backup:/mnt/amhold# dd if=/dev/zero of=/mnt/amhold/testfile bs=4M count=1000 oflag=direct

1000+0 records in

1000+0 records out

4194304000 bytes (4.2 GB, 3.9 GiB) copied, 23.2486 s, 180 MB/s
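
dd only writes a single sequential stream, while during a dump the holding disk may see a writer (dumper) and a reader (taper) at the same time. Something like fio could approximate that mixed pattern (if fio is installed; size and block size are just example values):

fio --directory=/mnt/amhold --size=4G --bs=256k --direct=1 \
    --name=writer --rw=write \
    --name=reader --rw=read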

-

When I amdump a single DLE from another client (no compression) and watch with "atop", I see /dev/sdb1 at 100% "load", with around 130 MB/s max and a write rate that decreases over time.

The client's storage is able to deliver around 300 MB/s.

And the NICs are 10G each.

Where is the bottleneck? Or do I expect too much?

The CPU cores are NOT fully loaded ...
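
To narrow it down, it might help to watch the holding disk with iostat during such a dump, e.g.:

iostat -xm sdb 5

and compare %util, w_await and the achieved MB/s with what the dd tests showed.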
