On Sat, 4 Oct 2014 17:56:22 +0100 (BST) Andrei Mikhailovsky wrote:
> > Read the above link again, carefully. ^o^
> > In it I state that:
> > a) despite what some old posts claim, setting read_ahead on the OSD
> > nodes has no effect or even a negative one. Inside the VM, however, it
> > is very helpful (see the sketch below):
>
> > b) the read speed increased about 10 times, from 35MB/s to 380MB/s
>
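For point a), the per-device read_ahead can be raised inside the guest via sysfs or blockdev. A minimal sketch, assuming the virtual disk appears as /dev/vda (the device name is an assumption):

# raise read_ahead to 2048 KB for this device; takes effect immediately
# (device name /dev/vda is assumed)
echo 2048 > /sys/block/vda/queue/read_ahead_kb

# equivalent via blockdev, which takes the value in 512-byte sectors
# (4096 sectors = 2048 KB)
blockdev --setra 4096 /dev/vda

The sysfs value is in KB, the blockdev one in sectors; both set the same kernel tunable, and a udev rule like the ones quoted further down makes the setting persistent.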
> Christian, are you getting 380MB/s from HDD OSDs or SSD OSDs? That seems a
> bit high for single-thread, cold-data throughput.
>
Sorry, I've described that cluster so many times...
2 nodes with 32GB RAM each (so that is where any "unnatural" speedup would
come from), each with 2 SSD journals and 2 OSDs, each OSD backed by an
11-HDD RAID6 behind an Areca controller with 4GB of HW cache.
OTOH you should be able to easily outdo that with a "classic" Ceph cluster
backed by a sufficient number of individual HDD OSDs.
Christian
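A single-thread cold-data read like the one above is typically measured with direct I/O to bypass the guest page cache. A sketch, assuming the RBD-backed disk shows up as /dev/vdb in the guest (the device name is an assumption):

# drop the guest's page cache so the read is cold from the guest's side
echo 3 > /proc/sys/vm/drop_caches

# single-threaded sequential read of 4GB, bypassing the guest page cache
dd if=/dev/vdb of=/dev/null bs=4M count=1024 iflag=direct

Note that with 32GB of RAM per node, data may still be warm in the OSD hosts' page cache unless caches are dropped there as well, which is exactly the "unnatural" speedup mentioned above.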
> Cheers
>
> > Regards,
>
> > Christian
> > > # set read_ahead values
> > > ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/read_ahead_kb}="2048"
> > > ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/nr_requests}="2048"
> > > # set noop scheduler for non-rotating disks
> > > ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
> > > # set cfq scheduler for rotating disks
> > > ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
> > >
> > > Is there anything else that I am missing?
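Once the rules above are saved as a udev rules file (e.g. /etc/udev/rules.d/99-disk-tuning.rules, a hypothetical name), they can be re-applied and verified without a reboot:

# reload the udev rules and re-trigger events for block devices
udevadm control --reload-rules
udevadm trigger --subsystem-match=block

# check that the values took effect (sda as an example)
cat /sys/block/sda/queue/read_ahead_kb
cat /sys/block/sda/queue/scheduler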
>
> > --
> > Christian Balzer Network/Systems Engineer
> > [email protected] Global OnLine Japan/Fusion Communications
> > http://www.gol.com/
> > _______________________________________________
> > ceph-users mailing list
> > [email protected]
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Christian Balzer Network/Systems Engineer
[email protected] Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com