‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday 30 July 2020 22:36, Chris Cappuccio <ch...@nmedia.net> wrote:

> Rupert Gallagher [r...@protonmail.com] wrote:
>
> > No, I am not using USB.
>
> rsync between disks should be very fast.

Right.

> you are going from the sata to the nvme ?

No. It is SATA to SATA, using a M14TQC with a Mini-SAS HD to 4 SATA cable:

https://www.supermicro.com/en/products/accessories/mobilerack/CSE-M14TQC.php

> it might be interesting to try using cp between filesystems, or tar
>
> such as: cp -r /usr/bin /mnt/usr/bin
> or: tar cf - -C /usr/bin . | tar xpf - -C /mnt/usr/bin
>
> also what speeds are you getting on the destination filesystem?
>
> dd count=1 bs=1G if=/dev/zero of=/mnt/test conv=fsync
>
> might give you some rough idea of what 1G write costs.
>
> here's 1G write on my Samsung 845DC Pro which is one of my all-time favorite
> SATA SSDs for reliability
>
> dd count=1 bs=1G if=/dev/zero of=test conv=fsync
>
> =================================================
>
> 1+0 records in
> 1+0 records out
> 1073741824 bytes transferred in 2.906 secs (369450372 bytes/sec)
>
> here's the same for a Crucial M500
>
> dd count=1 bs=1G if=/dev/zero of=test conv=fsync
>
> =================================================
>
> 1+0 records in
> 1+0 records out
> 1073741824 bytes transferred in 4.356 secs (246484472 bytes/sec)
>
> it's not clear to me how much the buffer cache affects this but i'm hoping
> here that conv=fsync helps. in a weird twist, tests like this with conv=fsync
> run consistently faster than without, so my understanding isn't that great.

Yours are NVMe. I have an SSD on a SATA bus.

This is my result:

>doas dd count=1 bs=1G if=/dev/zero of=/archive2/test conv=fsync
1+0 records in
1+0 records out
1073741824 bytes transferred in 8.118 secs (132261121 bytes/sec)

>grep archive2 /etc/fstab
[label].a /archive2 ffs rw,nodev,nosuid,softdep,noatime 1 2

However, 1G is not enough to get past the buffer cache in RAM.

This is what I do:

write test:
doas /bin/dd if=/dev/zero of=$testfile bs=$(( $bs * 1024 )) count=$count conv=sync

read test:
doas /bin/dd if=$testfile of=/dev/null bs=$(( $bs * 1024 )) conv=sync

where

testfile=/archive2/test            # for example
bs=$( stat -f "%k" /dev/sd1a )     # sd1a is the device holding /archive2

count=$(( $ram / ( $bs * 1024 ) ));   # number of dd blocks that fit in RAM
count=$(( $count + 1 ));              # exceed RAM by one block
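
Putting those pieces together, this is roughly what the whole test looks like
as one script. The fragments above never show where $ram comes from, so taking
it from sysctl hw.physmem is my assumption here, as is the test file path:

#!/bin/sh
# sketch of the test described above; the file is sized to exceed RAM so the
# buffer cache cannot hide the real disk speed
testfile=/archive2/disk-speed-test.raw     # assumed path
bs=$( stat -f "%k" /dev/sd1a )             # fs block size in bytes (8192 here)
ram=$( sysctl -n hw.physmem )              # assumption: RAM in bytes

count=$(( $ram / ( $bs * 1024 ) ))         # dd blocks (of $bs KB each) that fit in RAM
count=$(( $count + 1 ))                    # exceed RAM by one block

echo "write test"
doas /bin/dd if=/dev/zero of=$testfile bs=$(( $bs * 1024 )) count=$count conv=sync

echo "read test"
doas /bin/dd if=$testfile of=/dev/null bs=$(( $bs * 1024 )) conv=sync

doas rm $testfile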

This is the speed test on a WDS400T1R0A (WD Red SSD 4TB):

Free disk space    : 3151013175296 bytes
RAM                : 17125511168 bytes
fs block size      : 8192 bytes
Size of test file  : 17129537536 bytes = 2042 block(s) of 8192K
Test file          : /archive2/disk-speed-test.raw

Writing speed      : 182 MB/s
Reading speed      : 109 MB/s
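
For reference, the MB/s figures are just the bytes transferred divided by the
elapsed seconds from dd's summary line (printed on stderr). Assuming the output
format shown earlier ("... bytes transferred in ... secs ..."), something like
this extracts the number directly:

# parse dd's summary into MB/s; field 1 is bytes, field 5 is seconds
doas /bin/dd if=/dev/zero of=$testfile bs=$(( $bs * 1024 )) count=$count conv=sync 2>&1 |
    awk '/bytes transferred/ { printf "%.0f MB/s\n", $1 / $5 / 1000000 }'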

The product brief of the WD Red SSD says "560MB/s read" and "530MB/s write".

By comparison, this is the speed test on the ST2000NX0403 (Seagate Exos 2TB):

Writing speed      : 117 MB/s
Reading speed      : 99 MB/s

The product brief of the Exos says "136MB/s" max transfer.

Both the Exos and the WD Red report 512 bytes/sector in hardware.
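
A quick way to double-check the sector size on a running system, assuming the
drive attaches as sd1 as in the disklabel below, is the kernel attach line and
the label itself:

dmesg | grep '^sd1'
doas disklabel sd1 | grep 'bytes/sector'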

This is how I prepared both, with details shown for the WD Red SSD only:

> fdisk -iy -g sd1

>echo "/  1G-*  100%" >/tmp/my_disk_label
>disklabel -w -A -T /tmp/my_disk_label sd1

> disklabel -hn sd1
# /dev/rsd1c:
type: SCSI
disk: SCSI disk
label: WDC  WDS400T1R0A
duid: b8d30be7c118b250
flags:
bytes/sector: 512
sectors/track: 255
tracks/cylinder: 511
sectors/cylinder: 130305
cylinders: 59967
total sectors: 7814037168 # total bytes: 3.6T
boundstart: 64
boundend: 7814037105
drivedata: 0

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  a:             3.6T               64  4.2BSD   8192 65536     1
  c:             3.6T                0  unused

> newfs -O2 sd1a
/dev/rsd1a: 3815447.8MB in 7814036928 sectors of 512 bytes
1168 cylinder groups of 3266.88MB, 52270 blocks, 104704 inodes each
super-block backups (for fsck -b #) at:
[omitted]

> dumpfs /dev/sd1a | head -19

magic   19540119 (FFS2) time    Wed Jul 29 18:41:40 2020
superblock location     65536   id      [ 5f21a6c4 bb9dec49 ]
ncg     1168    size    488377308       blocks  484536905
bsize   65536   shift   16      mask    0xffff0000
fsize   8192    shift   13      mask    0xffffe000
frag    8       shift   3       fsbtodb 4
minfree 5%      optim   time    symlinklen 120
maxbsize 0      maxbpg  8192    maxcontig 1     contigsumsize 0
nbfree  60567111        ndir    1       nifree  122294269       nffree  16
bpg     52270   fpg     418160  ipg     104704
nindir  8192    inopb   256     maxfilesize     36033195603132415
sbsize  8192    cgsize  65536   csaddr  3304    cssize  24576
sblkno  16      cblkno  24      iblkno  32      dblkno  3304
cgrotor 0       fmod    0       ronly   0       clean   1
avgfpdir 64     avgfilesize 16384
flags   none
fsmnt
volname         swuid   0

Finally, this is how I use rsync:

#!/bin/sh
from="$1";
to="$2";
if [[ "$from" == "" || "$to" == "" ]]; then
   echo "usage: copy /from /to":
   exit 1;
fi
doas /usr/bin/nice -n 11 rsync --recursive --links --times --modify-window=1 -O \
  -J --devices --specials --update --super --owner --group --perms --delete \
  --delete-before --delete-excluded --exclude-from=/etc/excluded_from_backup.conf \
  --numeric-ids --compress-level=2 --outbuf=Block --inplace "$from"/ "$to"/;
doas rm gmon.out
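
Saved as "copy" (the name its own usage string suggests), the script takes a
source root and a destination root. A hypothetical invocation, mirroring a home
tree onto the new disk, would be:

copy /home /archive2/home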

Again, the SSD is brand new and freshly prepared, and this is the very first
time it is being written to, so write amplification should not be a factor.
