All the stuff I'm aware of is part of the testing we're doing for
Giant. There is probably ongoing work in the pipeline, but the fast
dispatch, sharded work queues, and sharded internal locking structures
that Somnath has discussed have all made it in.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Wed, Oct 1, 2014 at 7:07 AM, Andrei Mikhailovsky <[email protected]> wrote:
>
> Greg, are they going to be a part of the next stable release?
>
> Cheers
> ________________________________
>
> From: "Gregory Farnum" <[email protected]>
> To: "Andrei Mikhailovsky" <[email protected]>
> Cc: "Timur Nurlygayanov" <[email protected]>, "ceph-users"
> <[email protected]>
> Sent: Wednesday, 1 October, 2014 3:04:51 PM
> Subject: Re: [ceph-users] Why performance of benchmarks with small blocks is
> extremely small?
>
> On Wed, Oct 1, 2014 at 5:24 AM, Andrei Mikhailovsky <[email protected]>
> wrote:
>> Timur,
>>
>> As far as I know, the latest master has a number of improvements for ssd
>> disks. If you check the mailing list discussion from a couple of weeks
>> back,
>> you can see that the latest stable firefly is not that well optimised for
>> ssd drives and IO is limited. However, changes are being made to address
>> that.
>>
>> I am quite surprised that you can get 10K IOPS, as in my tests I was not
>> getting over 3K IOPS on ssd disks which are capable of doing 90K IOPS.
>>
>> P.S. does anyone know if the ssd optimisation code will be added to the
>> next
>> maintenance release of firefly?
>
> Not a chance. The changes enabling that improved throughput are very
> invasive and sprinkled all over the OSD; they aren't the sort of thing
> that one backports, or that one could put on top of a stable
> release for any meaningful definition of "stable". :)
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
