Hi Irek,
I am using v0.80.5 Firefly
<http://ceph.com/docs/master/release-notes/#v0-80-5-firefly>
-sumit

On Fri, Feb 13, 2015 at 1:30 PM, Irek Fasikhov <malm...@gmail.com> wrote:

> Hi.
> What version?
>
> 2015-02-13 6:04 GMT+03:00 Sumit Gaur <sumitkg...@gmail.com>:
>
>> Hi Chris,
>> Please find my answers below in blue.
>>
>> On Thu, Feb 12, 2015 at 12:42 PM, Chris Hoy Poy <ch...@gopc.net> wrote:
>>
>>> Hi Sumit,
>>>
>>> A couple questions:
>>>
>>> What brand/model SSD?
>>>
>> Samsung 480GB SSD (PM853T), rated for 90K random-write IOPS (4K) and 368 MB/s.
>>
>>>
>>> What brand/model HDD?
>>>
>> 64GB memory, 300GB SAS HDDs (Seagate), 10Gb NIC
>>
>>>
>>> Also how they are connected to controller/motherboard? Are they sharing
>>> a bus (ie SATA expander)?
>>>
>> No, they are connected to a local bus, not through a SATA expander.
>>
>>
>>>
>>> RAM?
>>>
>> *64GB*
>>
>>>
>>> Also look at the output of "iostat -x" or similar: are the SSDs
>>> hitting 100% utilisation?
>>>
>> *No, the SSD was hitting only about 2000 IOPS.*
>>
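>> *(For reference, a check along these lines shows it; /dev/sdf below is just
>> a placeholder for the journal SSD:)*
>>
>>   iostat -x /dev/sdf 2   # watch w/s, wMB/s and %util for the journal device
>>
>> *A %util figure staying well below 100 suggests the SSD itself is not saturated.*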
>>>
>>> I suspect that the 5:1 ratio of HDDs to SSDs is not ideal: you now have
>>> 5x the write IO trying to fit into a single SSD.
>>>
>> *I have not seen any documented reference for calculating the ratio; could
>> you suggest one? I should mention that the results for 1024K writes improve
>> a lot. The problem is with 1024K reads and 4K writes.*
>>
>> *SSD journal: 810 IOPS and 810 MB/s*
>> *HDD journal: 620 IOPS and 620 MB/s*
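>>
>> *In the absence of a documented reference, is a simple bandwidth comparison
>> like the one below the right way to size the ratio (using the ~130 MB/s per
>> HDD figure you mention further down)?*
>>
>>   5 HDDs x ~130 MB/s of journal writes  = ~650 MB/s landing on one SSD
>>   PM853T rated write bandwidth          = ~368 MB/s
>>
>> *That would suggest only 2-3 such HDDs per SSD, going by bandwidth alone.*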
>>
>>
>>
>>
>>> I'll take a punt on it being a SATA-connected SSD (most common); 5x ~130
>>> megabytes/second gets very close to most SATA bus limits. If it's a shared
>>> bus, you possibly hit that limit even earlier (since all that data is now
>>> being written out twice over the bus).
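>>>
>>> Spelling that arithmetic out (rough numbers; a SATA 6Gb/s link tops out
>>> around 550 MB/s usable in practice):
>>>
>>>   5 HDDs x ~130 MB/s of journal traffic  = ~650 MB/s
>>>   one SATA 6Gb/s link                    ~  550 MB/s
>>>
>>> and on a shared bus/expander the same data crosses it twice (journal write
>>> plus data write), so the ceiling is hit even sooner.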
>>>
>>> cheers;
>>> \Chris
>>>
>>>
>>> ------------------------------
>>> *From: *"Sumit Gaur" <sumitkg...@gmail.com>
>>> *To: *ceph-users@lists.ceph.com
>>> *Sent: *Thursday, 12 February, 2015 9:23:35 AM
>>> *Subject: *[ceph-users] ceph Performance with SSD journal
>>>
>>>
>>> Hi Ceph-Experts,
>>>
>>> Have a small ceph architecture related question
>>>
>>> Blogs and documentation suggest that Ceph performs much better if the
>>> journal is placed on an SSD.
>>>
>>> I have built a Ceph cluster with 30 HDDs + 6 SSDs across 6 OSD nodes:
>>> 5 HDDs + 1 SSD per node, with each SSD split into 5 partitions that hold
>>> the journals for the node's 5 OSDs.
>>>
>>> I then ran the same tests that I had run on the all-HDD setup.
>>>
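>>> (Tests of this kind can be generated with rados bench, for example; the
>>> commands below are illustrative only, and the pool name and run length are
>>> placeholders:)
>>>
>>>   rados bench -p testpool 60 write -b 4096 --no-cleanup      # 4K writes
>>>   rados bench -p testpool 60 write -b 1048576 --no-cleanup   # 1024K writes
>>>   rados bench -p testpool 60 seq                             # read them back
>>>
>>>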
>>> The two readings below went in the wrong direction, contrary to what I
>>> expected:
>>>
>>> 1) 4K write IOPS are lower for the SSD setup; not a major difference,
>>> but lower.
>>> 2) 1024K read IOPS are lower for the SSD setup than for the HDD setup.
>>>
>>> On the other hand, 4K reads and 1024K writes both show much better numbers
>>> on the SSD setup.
>>>
>>> Let me know if I am missing some obvious concept.
>>>
>>> Thanks
>>> sumit
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Best regards, Фасихов Ирек Нургаязович
> Mob.: +79229045757
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
