ceph osd pool create scbench 100 100
rados bench -p scbench 10 write --no-cleanup
rados bench -p scbench 10 seq
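For completeness, a sketch of the whole bench cycle including the read variants and cleanup (the pool name and the 100/100 pg_num/pgp_num values are just the ones used above, not recommendations; this obviously needs a live cluster and a client with admin keyring):

```shell
# Create a throwaway pool to benchmark against.
ceph osd pool create scbench 100 100

# 10-second write test; --no-cleanup leaves the benchmark objects in
# place so the read tests below have something to read back.
rados bench -p scbench 10 write --no-cleanup

# Sequential and random read tests against the objects written above.
rados bench -p scbench 10 seq
rados bench -p scbench 10 rand

# Remove the benchmark objects, then drop the throwaway pool.
rados -p scbench cleanup
ceph osd pool delete scbench scbench --yes-i-really-really-mean-it
```

The write test must come first; seq and rand will error out if there are no bench objects in the pool to read.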
On Mon, Nov 13, 2017 at 1:28 AM, Rudi Ahlers <[email protected]> wrote:
> Would you mind telling me what rados command set you use, and share the
> output? I would like to compare it to our server as well.
>
> On Fri, Nov 10, 2017 at 6:29 AM, Robert Stanford <[email protected]>
> wrote:
>>
>> In my cluster, rados bench shows about 1GB/s bandwidth. I've done some
>> tuning:
>>
>> [osd]
>> osd op threads = 8
>> osd disk threads = 4
>> osd recovery max active = 7
>>
>> I was hoping to get much better bandwidth. My network can handle it, and
>> my disks are pretty fast as well. Are there any major tunables I can play
>> with to increase what will be reported by "rados bench"? Or am I pretty
>> much stuck at the bandwidth it reported?
>>
>> Thank you
>>
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> --
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za
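As an aside, here is that [osd] section as a ceph.conf fragment with what each tunable controls. These are FileStore-era option names; the defaults noted are my recollection for that era, so check the config reference for your release:

```
[osd]
osd op threads = 8            # worker threads servicing client ops (default was 2)
osd disk threads = 4          # threads for background work such as scrubbing (default was 1)
osd recovery max active = 7   # concurrent recovery ops allowed per OSD
```

Raising op threads mainly helps when the OSDs are CPU-bound on small ops; for large sequential writes like the default rados bench workload, the journal/disk and network are more often the ceiling.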
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
