Hey guys,

We wrote this simple program to try to match what fio (the Linux I/O 
benchmarking tool) does. Fio can easily achieve 100K IOPS on an Amazon 
i3.large instance with an NVMe SSD. However, with Go we're unable to get 
anywhere close to that.

https://github.com/dgraph-io/badger-bench/blob/master/randread/main.go

The program should be simple to run. It uses Fio-generated files and 
basically tries 3 things: 1. random reads in a single goroutine (turned off 
by default), 2. random reads using a specified number of goroutines, 3. the 
same as 2, but distributing the work over a channel.

Option 3 is slower than option 2, as expected. But option 2 is never able to 
reach the IOPS that Fio achieves. I've tried other approaches, with no luck. 
What I notice is that Go and Fio stay close to each other as long as the 
number of goroutines is <= the number of cores. Once the goroutine count 
exceeds the core count, Go's throughput stays flat, while Fio's IOPS keep 
improving until they hit the SSD's limit.

So, how could I change my Go program to realize the true throughput of the 
SSD? Or is this something that needs further work in Go itself (I saw a 
thread about libaio)?

Cheers,
Manish
