Right, you were reading directly from the block device, which simply isn't optimized, since nothing uses the block device directly.
A better real-world benchmark is to read a file from a mounted filesystem; this would include the filesystem and UVM overhead, and that is where ubc_direct should help (a sketch of such a benchmark is included after the quoted thread below).

Jaromir

2018-08-11 13:42 GMT+02:00 Sad Clouds <cryintotheblue...@gmail.com>:
> On Fri, 10 Aug 2018 22:01:19 +0200
> Jaromír Doleček <jaromir.dole...@gmail.com> wrote:
>
>> 2018-08-10 11:35 GMT+02:00 Sad Clouds <cryintotheblue...@gmail.com>:
>> > localhost# dd if=/dev/rsd0d of=/dev/null bs=1m count=10000
>> > 10000+0 records in
>> > 10000+0 records out
>> > 10485760000 bytes transferred in 11.749 secs (892481062 bytes/sec)
>> >
>> > localhost# dd if=/dev/sd0d of=/dev/null bs=1m count=10000
>> > 10000+0 records in
>> > 10000+0 records out
>> > 10485760000 bytes transferred in 196.552 secs (53348528 bytes/sec)
>> >
>> > Any ideas why block device I/O is so abysmal? Is this something
>> > specific to NetBSD?
>>
>> There is some experimental code on -current to optimize one part of
>> the read/write-based I/O, supported on amd64.
>>
>> You can just boot a -current kernel (for example one from the daily
>> builds) to single user and enable the code by escaping to DDB and
>> setting the variable ubc_direct to 1 (ctrl-alt-esc will give you a
>> DDB prompt, then 'w ubc_direct 1'), then 'continue' in DDB, then run
>> your dd command.
>>
>> Jaromir
>
> Thanks, I don't think I'll need it; I just wanted to test how NetBSD
> would perform on this disk device, but as was explained to me, real
> filesystem I/O uses a different data path and should be much faster.
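For reference, a minimal sketch of the filesystem benchmark suggested above. The mount point /mnt and the file name testfile are assumptions; adjust to your setup, and note the remount step assumes /mnt has an /etc/fstab entry (remounting drops the cached pages so the read actually hits the disk):

localhost# # create a test file on the mounted filesystem
localhost# dd if=/dev/zero of=/mnt/testfile bs=1m count=10000
localhost# # remount to drop cached pages, then read the file back sequentially
localhost# umount /mnt && mount /mnt
localhost# dd if=/mnt/testfile of=/dev/null bs=1m count=10000

Unlike the raw and block device reads in the quoted thread, this path goes through the filesystem and UVM, which is what ubc_direct is meant to speed up.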