Just a follow-up on this for general interest.
I had boards made in Hong Kong from the design by Tobias Schramm,
generously made available on GitHub. I received the boards a few days
ago, then ordered an NVMe 2230 drive to test; it arrived today, and here
we are.
The following tests were done on an APU1, as the others are in use right
now and I had this one available, so I used it.
I put the mPCIe board in mPCIe slot 1, installed the NVMe drive on it,
and it worked right away.
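In case it's useful to anyone repeating this, a quick way to confirm the
drive was detected on Linux (exact output will differ per setup, and the
last command needs the nvme-cli package):
lspci | grep -i "non-volatile"   # the NVMe controller shows up as a PCI device
ls /dev/nvme*                    # the block device the kernel created for it
nvme list                        # per-drive details from nvme-cli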
I will run the same tests on an APU2 soon as well, once I get the
additional NVMe boards I ordered.
Just FYI, the tests below were done on NixOS, as that's what I had
running on the APU1 for testing at the moment, so I used that.
If there is a need for tests on OpenBSD, I can do those later if anyone
is interested.
The ONLY thing I am not sure about is on the APU2: in the mPCIe
schematics, the lines for J14 pins 23 and 25 are reversed compared to
mPCIe J13. There is a note on the schematics about it. Why that is, I
can't say, but the APU1 schematics don't show that.
From what Tobias wrote, he never said the NVMe didn't work in both
slots, so I will find out somehow.
In any case, the minimum order was 5 boards, and the price difference
was small enough that I ordered 25 instead, so if anyone is interested,
I would be happy to ship some if needed.
I had the boards made because I couldn't find any for sale; only three
companies make them, two of them were out of stock or had none
available, and the third one is in China, where I didn't order.
This is an NVMe M-key adapter for either 2230 or 2242 drives. It doesn't
support bigger ones at all; there is no space in the APU for them.
Also remember that the mPCIe connectors in the APU use only one PCIe
lane, not 4, so x1 if you like. But the results are still pretty good:
roughly 10x the speed of the mSATA in the same box. Both drives (the
NVMe and the mSATA) are Dogfish ones, so it's a fair comparison I guess.
The third device tested below is a SanDisk SD card.
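If you want to verify the single-lane link on your own box, lspci can
report the negotiated width and speed. The bus address below is just an
example; substitute whatever the first command prints for your NVMe
controller (run as root so the full capabilities are readable):
lspci | grep -i "non-volatile"            # note the bus address, e.g. 01:00.0
lspci -vv -s 01:00.0 | grep -i "lnksta:"  # LnkSta should report Width x1 on these slots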
So if you want to make a little NAS out of an APU, I guess you can, and
it would be decent I suppose.
If you want to know more, feel free to contact me off-list, unless more
people here want to know.
I used fio as a standard benchmark tool.
I only did the write test on the NVMe, as my other two drives have data
on them and I didn't want to lose it! (;
I ran the tests on the raw device to eliminate anything else that could
affect them and hopefully give more realistic results.
Same tests on all 3 different drives in the same box.
The numbers speak for themselves.
And that's MBytes, not Mbits. I can only imagine what it would do with 4
PCIe lanes...
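For a rough sense of the ceiling (back-of-envelope only): assuming the
slot negotiates PCIe Gen 2, which the ~400 MB/s NVMe read below
suggests, one lane is 5 GT/s with 8b/10b encoding, so about 500 MB/s of
usable bandwidth. The x1 result here is already fairly close to that
link limit; four lanes would in theory scale it to around 2 GB/s, drive
permitting.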
Really not bad for the small APUs.
=====================
NVME (READ) 401MB/sec
=====================
[nix-shell:~]# fio --filename=/dev/nvme0n1 --rw=read --direct=1 --bs=1M
--ioengine=libaio --runtime=60 --numjobs=1 --time_based
--group_reporting --name=seq_read --iodepth=16
seq_read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB,
(T) 1024KiB-1024KiB, ioengine=libaio, iodepth=16
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=383MiB/s][r=383 IOPS][eta 00m:00s]
seq_read: (groupid=0, jobs=1): err= 0: pid=1383: Fri Jun 9 17:54:30 2023
read: IOPS=382, BW=382MiB/s (401MB/s)(22.4GiB/60042msec)
slat (usec): min=110, max=4089, avg=156.26, stdev=68.23
clat (usec): min=13238, max=78390, avg=41671.32, stdev=4830.72
lat (usec): min=13494, max=80091, avg=41827.59, stdev=4827.39
clat percentiles (usec):
| 1.00th=[21103], 5.00th=[40109], 10.00th=[41157], 20.00th=[41681],
| 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681],
| 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206],
| 99.00th=[64226], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828],
| 99.99th=[77071]
bw ( KiB/s): min=339968, max=394475, per=100.00%, avg=391725.72,
stdev=4887.40, samples=119
iops : min= 332, max= 385, avg=382.43, stdev= 4.77, samples=119
lat (msec) : 20=0.93%, 50=96.11%, 100=2.96%
cpu : usr=1.25%, sys=7.95%, ctx=22974, majf=0, minf=4108
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%,
>=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%,
>=64=0.0%
issued rwts: total=22954,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=382MiB/s (401MB/s), 382MiB/s-382MiB/s (401MB/s-401MB/s),
io=22.4GiB (24.1GB), run=60042-60042msec
Disk stats (read/write):
nvme0n1: ios=91589/0, merge=0/0, ticks=3717080/0, in_queue=3717080,
util=100.00%
======================
NVME (WRITE) 363MB/sec
======================
[nix-shell:~]# fio --filename=/dev/nvme0n1 --rw=write --direct=1 --bs=1M
--ioengine=libaio --runtime=60 --numjobs=1 --time_based
--group_reporting --name=seq_read --iodepth=16
seq_read: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB,
(T) 1024KiB-1024KiB, ioengine=libaio, iodepth=16
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=346MiB/s][w=346 IOPS][eta 00m:00s]
seq_read: (groupid=0, jobs=1): err= 0: pid=1431: Fri Jun 9 18:29:10 2023
write: IOPS=346, BW=346MiB/s (363MB/s)(20.3GiB/60046msec); 0 zone resets
slat (usec): min=252, max=3063, avg=424.77, stdev=69.50
clat (usec): min=6740, max=85608, avg=45756.89, stdev=1377.10
lat (usec): min=7186, max=86058, avg=46181.65, stdev=1374.24
clat percentiles (usec):
| 1.00th=[44827], 5.00th=[45351], 10.00th=[45351], 20.00th=[45351],
| 30.00th=[45351], 40.00th=[45351], 50.00th=[45351], 60.00th=[45351],
| 70.00th=[45876], 80.00th=[45876], 90.00th=[45876], 95.00th=[48497],
| 99.00th=[49546], 99.50th=[50070], 99.90th=[50594], 99.95th=[56886],
| 99.99th=[80217]
bw ( KiB/s): min=349508, max=358400, per=100.00%, avg=354865.66,
stdev=2388.36, samples=119
iops : min= 341, max= 350, avg=346.33, stdev= 2.35, samples=119
lat (msec) : 10=0.01%, 20=0.02%, 50=99.75%, 100=0.22%
cpu : usr=11.74%, sys=6.12%, ctx=20827, majf=0, minf=10
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%,
>=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%,
>=64=0.0%
issued rwts: total=0,20793,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
WRITE: bw=346MiB/s (363MB/s), 346MiB/s-346MiB/s (363MB/s-363MB/s),
io=20.3GiB (21.8GB), run=60046-60046msec
Disk stats (read/write):
nvme0n1: ios=47/82897, merge=0/0, ticks=56/3700620, in_queue=3700675,
util=100.00%
=====================
mSATA (READ) 40MB/sec
=====================
[nix-shell:~]# fio --filename=/dev/sda --rw=read --direct=1 --bs=1M
--ioengine=libaio --runtime=60 --numjobs=1 --time_based
--group_reporting --name=seq_read --iodepth=16
seq_read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB,
(T) 1024KiB-1024KiB, ioengine=libaio, iodepth=16
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=55.1MiB/s][r=55 IOPS][eta 00m:00s]
seq_read: (groupid=0, jobs=1): err= 0: pid=1390: Fri Jun 9 17:57:26 2023
read: IOPS=38, BW=38.4MiB/s (40.2MB/s)(2318MiB/60436msec)
slat (usec): min=93, max=1720, avg=218.16, stdev=125.26
clat (msec): min=20, max=1483, avg=416.63, stdev=283.25
lat (msec): min=22, max=1483, avg=416.85, stdev=283.23
clat percentiles (msec):
| 1.00th=[ 62], 5.00th=[ 77], 10.00th=[ 111], 20.00th=[ 157],
| 30.00th=[ 203], 40.00th=[ 279], 50.00th=[ 355], 60.00th=[ 460],
| 70.00th=[ 550], 80.00th=[ 651], 90.00th=[ 785], 95.00th=[ 961],
| 99.00th=[ 1267], 99.50th=[ 1401], 99.90th=[ 1452], 99.95th=[ 1452],
| 99.99th=[ 1485]
bw ( KiB/s): min= 6144, max=132854, per=100.00%, avg=39298.92,
stdev=27077.33, samples=120
iops : min= 6, max= 129, avg=38.34, stdev=26.41, samples=120
lat (msec) : 50=0.39%, 100=7.51%, 250=27.87%, 500=29.16%, 750=24.03%
lat (msec) : 1000=7.08%, 2000=3.97%
cpu : usr=0.23%, sys=1.07%, ctx=2323, majf=0, minf=4107
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%,
>=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%,
>=64=0.0%
issued rwts: total=2318,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=38.4MiB/s (40.2MB/s), 38.4MiB/s-38.4MiB/s
(40.2MB/s-40.2MB/s), io=2318MiB (2431MB), run=60436-60436msec
Disk stats (read/write):
sda: ios=2313/37, merge=0/11, ticks=961185/27554, in_queue=995983,
util=99.56%
================================
SD Card SanDisk (READ) 29MB/sec
================================
[nix-shell:~]# fio --filename=/dev/sdb --rw=read --direct=1 --bs=1M
--ioengine=libaio --runtime=60 --numjobs=1 --time_based
--group_reporting --name=seq_read --iodepth=16
seq_read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB,
(T) 1024KiB-1024KiB, ioengine=libaio, iodepth=16
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=27.0MiB/s][r=27 IOPS][eta 00m:00s]
seq_read: (groupid=0, jobs=1): err= 0: pid=1396: Fri Jun 9 17:58:38 2023
read: IOPS=27, BW=27.4MiB/s (28.8MB/s)(1647MiB/60039msec)
slat (usec): min=29954, max=60953, avg=36401.73, stdev=3061.57
clat (msec): min=9, max=615, avg=544.12, stdev=36.64
lat (msec): min=51, max=653, avg=580.52, stdev=37.04
clat percentiles (msec):
| 1.00th=[ 502], 5.00th=[ 510], 10.00th=[ 514], 20.00th=[ 527],
| 30.00th=[ 535], 40.00th=[ 542], 50.00th=[ 550], 60.00th=[ 558],
| 70.00th=[ 558], 80.00th=[ 567], 90.00th=[ 575], 95.00th=[ 584],
| 99.00th=[ 592], 99.50th=[ 609], 99.90th=[ 617], 99.95th=[ 617],
| 99.99th=[ 617]
bw ( KiB/s): min=24576, max=30720, per=100.00%, avg=28104.99,
stdev=1426.10, samples=118
iops : min= 24, max= 30, avg=26.97, stdev= 1.48, samples=118
lat (msec) : 10=0.06%, 100=0.12%, 250=0.24%, 500=0.36%, 750=99.21%
cpu : usr=0.14%, sys=2.89%, ctx=14860, majf=0, minf=4108
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=99.1%, 32=0.0%,
>=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%,
>=64=0.0%
issued rwts: total=1647,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=27.4MiB/s (28.8MB/s), 27.4MiB/s-27.4MiB/s
(28.8MB/s-28.8MB/s), io=1647MiB (1727MB), run=60039-60039msec
Disk stats (read/write):
sdb: ios=14789/0, merge=0/0, ticks=118085/0, in_queue=118085,
util=100.00%