On 2018-03-19 13:06, Sad Clouds wrote:
Hello, which virtual controller do you use in VirtualBox, and is
"Use Host I/O Cache" enabled on that controller? If so, you need to
disable it before running I/O tests; otherwise VirtualBox caches large
amounts of data in host RAM instead of sending it to disk.
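(For reference, the same setting can be checked and changed from the
command line. A minimal sketch, assuming a VM named "MyVM" with a
controller named "SATA"; both names are placeholders, so check the
showvminfo output for the real ones:

    # list the VM's storage controllers to find the exact name
    VBoxManage showvminfo "MyVM" | grep -i storage

    # disable host I/O caching on that controller ("SATA" is assumed)
    VBoxManage storagectl "MyVM" --name "SATA" --hostiocache off

The VM generally needs to be powered off for storagectl changes to
take effect.)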
On Mon, Mar 19, 2018 at 8:59 AM, Martin Husemann <mar...@duskware.de> wrote:
On Mon, Mar 19, 2018 at 08:54:12AM +0000, Chavdar Ivanov wrote:
I'd also be interested in your setup: on a W10-hosted VBox (latest)
on a fast M.2 disk, I get approximately 5 times slower values on
-current amd64, with disks attached to SATA, SAS and NVMe controllers
(almost the same; the SAS one is a little slower than the rest), but
nowhere near your figures.
Hmm, nothing special: latest VBox, Win7 host, and a plain old SATA
hard disk with NTFS as the backend store. But the host has *plenty*
of memory; maybe I should have used a larger dd to exhaust the
buffering.
Martin
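A minimal sketch of what that larger dd could look like, assuming a
host with 16 GB of RAM (the count is a placeholder; size it to exceed
your own host's memory). Timing the final sync as well forces the
cached data to actually reach the disk:

    # write ~16 GB so the file exceeds host RAM, and include the
    # flush in the timing so cached data must hit the disk
    time sh -c 'dd if=/dev/zero of=out bs=1m count=16384 && sync'

(bs=1m is the NetBSD spelling; GNU dd on Linux wants bs=1M.)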
Hi there,
I've performed a short test. Here are my results:
Host: Win7, Intel Core i5, 2 CPUs
1. VM: NetBSD 7.1.1, original kernel
Controller: SATA
Driver: AHCI
FS: ffs, default settings
Command: dd if=/dev/zero of=out bs=1m count=1000
Host I/O cache off.
Average result of 3 runs: 105.5 MB/sec
2. VM: Debian 9.0, original kernel
Controller: SATA
Driver: AHCI
FS: ext4, default settings
Command: dd if=/dev/zero of=out bs=1M count=1000
Host I/O cache off.
Average result of 3 runs: 588.0 MB/sec
So, Debian performed almost 6 times faster than NetBSD on the same
machine.
Is there any setting that influences the test which I didn't apply?
Rgds,
FeZ
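Regarding the question above: one likely factor, following the caching
points raised earlier in the thread, is the guest's own page cache.
With only 1000 MB written, the Debian figure may largely reflect ext4
writing into RAM rather than to disk. A sketch of two ways to level
the comparison; note that oflag=direct is GNU dd only, so it is not
available in the NetBSD guest:

    # Debian guest only: GNU dd can bypass the page cache entirely
    dd if=/dev/zero of=out bs=1M count=1000 oflag=direct

    # portable variant for both guests: time the write plus the flush
    # (use bs=1m on NetBSD, bs=1M on Linux)
    time sh -c 'dd if=/dev/zero of=out bs=1M count=1000 && sync'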