Re: [ceph-users] Ceph performance IOPS

2019-07-15 Thread Christian Wuerdig
Option 1 is the official way. Option 2 will be a lot faster if it works for
you (I've never been in a situation requiring it, so I can't say), and option
3 is for FileStore journals and not applicable to BlueStore.
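
For reference, a rough sketch of what option 1 could look like for a single
OSD, assuming OSD 12 sits on /dev/sdd and the SSD partitions /dev/sdb1 (DB)
and /dev/sdb2 (WAL) are already prepared (all device names here are
hypothetical examples):

# take the OSD out and let the cluster backfill the data off it
ceph osd out 12
# once backfill has finished, stop and remove the OSD
systemctl stop ceph-osd@12
ceph osd purge 12 --yes-i-really-mean-it
# wipe the old data disk
ceph-volume lvm zap /dev/sdd
# recreate the OSD with DB and WAL on the SSD (prepare + activate in one step)
ceph-volume lvm create --bluestore --data /dev/sdd \
    --block.db /dev/sdb1 --block.wal /dev/sdb2

Note that when block.db is already on the SSD, a separate --block.wal is only
worth specifying if the WAL should go to a different (faster) device;
otherwise the WAL ends up on the DB device automatically.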

On Wed, 10 Jul 2019 at 07:55, Davis Mendoza Paco wrote:

> What would be the most appropriate procedure to move blockdb/wal to SSD?
>
> 1.- Remove the OSD and recreate it (affects performance):
> ceph-volume lvm prepare --bluestore --data <data-device> --block.wal
> <wal-device> --block.db <db-device>
>
> 2.- Follow the documentation
>
> http://heiterbiswolkig.blogs.nde.ag/2018/04/08/migrating-bluestores-block-db/
>
> 3.- Follow the documentation
>
> https://swamireddy.wordpress.com/2016/02/19/ceph-how-to-add-the-ssd-journal/
>
> Thanks for the help
>
> On Sun, 7 Jul 2019 at 14:39, Christian Wuerdig (<christian.wuer...@gmail.com>) wrote:
>
>> One thing to keep in mind is that the blockdb/wal device becomes a single
>> point of failure for all OSDs using it, so if that SSD dies you essentially
>> have to consider all OSDs using it as lost. I think most go with something
>> like 4-8 OSDs per blockdb/wal drive, but it really depends on how
>> risk-averse you are, what your budget is, etc. Given that you only have 5
>> nodes I'd probably go for fewer OSDs per blockdb device.
>>
>>
>> On Sat, 6 Jul 2019 at 02:16, Davis Mendoza Paco wrote:
>>
>>> Hi all,
>>> I have installed Ceph Luminous with 5 nodes (45 OSDs); each OSD server
>>> supports up to 16 HDDs and I'm only using 9.
>>>
>>> I wanted to ask for help improving IOPS performance, since I have about
>>> 350 virtual machines of approximately 15 GB in size and I/O is very slow.
>>> What would you recommend?
>>>
>>> The Ceph documentation recommends using SSDs for the journal, so my
>>> question is: how many SSDs do I have to provision per server so that the
>>> journals of the 9 OSDs can be moved to SSDs?
>>>
>>> I currently use Ceph with OpenStack, on 11 servers running Debian Stretch:
>>> * 3 controller
>>> * 3 compute
>>> * 5 ceph-osd
>>>   network: LACP bond, 10 GbE
>>>   RAM: 96 GB
>>>   HDD: 9 x 3 TB SATA disks (BlueStore)
>>>
>>> --
>>> *Davis Mendoza P.*
>>>
>>
>
> --
> *Davis Mendoza P.*
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph performance IOPS

2019-07-09 Thread Davis Mendoza Paco
What would be the most appropriate procedure to move blockdb/wal to SSD?

1.- Remove the OSD and recreate it (affects performance):
ceph-volume lvm prepare --bluestore --data <data-device> --block.wal
<wal-device> --block.db <db-device>

2.- Follow the documentation
http://heiterbiswolkig.blogs.nde.ag/2018/04/08/migrating-bluestores-block-db/

3.- Follow the documentation
https://swamireddy.wordpress.com/2016/02/19/ceph-how-to-add-the-ssd-journal/

Thanks for the help

On Sun, 7 Jul 2019 at 14:39, Christian Wuerdig (<christian.wuer...@gmail.com>) wrote:

> One thing to keep in mind is that the blockdb/wal device becomes a single
> point of failure for all OSDs using it, so if that SSD dies you essentially
> have to consider all OSDs using it as lost. I think most go with something
> like 4-8 OSDs per blockdb/wal drive, but it really depends on how
> risk-averse you are, what your budget is, etc. Given that you only have 5
> nodes I'd probably go for fewer OSDs per blockdb device.
>
>
> On Sat, 6 Jul 2019 at 02:16, Davis Mendoza Paco wrote:
>
>> Hi all,
>> I have installed Ceph Luminous with 5 nodes (45 OSDs); each OSD server
>> supports up to 16 HDDs and I'm only using 9.
>>
>> I wanted to ask for help improving IOPS performance, since I have about
>> 350 virtual machines of approximately 15 GB in size and I/O is very slow.
>> What would you recommend?
>>
>> The Ceph documentation recommends using SSDs for the journal, so my
>> question is: how many SSDs do I have to provision per server so that the
>> journals of the 9 OSDs can be moved to SSDs?
>>
>> I currently use Ceph with OpenStack, on 11 servers running Debian Stretch:
>> * 3 controller
>> * 3 compute
>> * 5 ceph-osd
>>   network: LACP bond, 10 GbE
>>   RAM: 96 GB
>>   HDD: 9 x 3 TB SATA disks (BlueStore)
>>
>> --
>> *Davis Mendoza P.*
>>
>

-- 
*Davis Mendoza P.*
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph performance IOPS

2019-07-07 Thread Christian Wuerdig
One thing to keep in mind is that the blockdb/wal device becomes a single
point of failure for all OSDs using it, so if that SSD dies you essentially
have to consider all OSDs using it as lost. I think most go with something
like 4-8 OSDs per blockdb/wal drive, but it really depends on how risk-averse
you are, what your budget is, etc. Given that you only have 5 nodes I'd
probably go for fewer OSDs per blockdb device.
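
If you want to see how big that blast radius would be on a given host,
ceph-volume can show which OSDs are backed by which devices (this assumes
ceph-volume managed OSDs; /dev/sdb below is just a hypothetical SSD):

# list all OSDs on this host with their data, block.db and block.wal devices
ceph-volume lvm list
# or limit the output to everything that sits on one particular SSD
ceph-volume lvm list /dev/sdb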


On Sat, 6 Jul 2019 at 02:16, Davis Mendoza Paco wrote:

> Hi all,
> I have installed Ceph Luminous with 5 nodes (45 OSDs); each OSD server
> supports up to 16 HDDs and I'm only using 9.
>
> I wanted to ask for help improving IOPS performance, since I have about
> 350 virtual machines of approximately 15 GB in size and I/O is very slow.
> What would you recommend?
>
> The Ceph documentation recommends using SSDs for the journal, so my
> question is: how many SSDs do I have to provision per server so that the
> journals of the 9 OSDs can be moved to SSDs?
>
> I currently use Ceph with OpenStack, on 11 servers running Debian Stretch:
> * 3 controller
> * 3 compute
> * 5 ceph-osd
>   network: LACP bond, 10 GbE
>   RAM: 96 GB
>   HDD: 9 x 3 TB SATA disks (BlueStore)
>
> --
> *Davis Mendoza P.*
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph performance IOPS

2019-07-05 Thread solarflow99
Just set aside 1 or more SSDs for the BlueStore DB/WAL; as long as you stay
within the 4% rule (block.db sized at roughly 4% of the data device) I think
that should be enough.
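
As a back-of-the-envelope check against the setup quoted below (9 x 3 TB data
disks per node, and assuming the 4% figure is applied to the raw disk size):

# ~4% of each 3 TB disk for block.db, per OSD and per node
echo "$((3000 * 4 / 100)) GB of SSD per OSD"      # 120 GB
echo "$((9 * 3000 * 4 / 100)) GB of SSD per node" # 1080 GB

So roughly 1 TB of SSD capacity per node, e.g. two or three SSDs serving 3-5
OSDs each, which also lines up with the 4-8 OSDs per blockdb/wal device
mentioned elsewhere in the thread.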


On Fri, Jul 5, 2019 at 7:15 AM Davis Mendoza Paco wrote:

> Hi all,
> I have installed Ceph Luminous with 5 nodes (45 OSDs); each OSD server
> supports up to 16 HDDs and I'm only using 9.
>
> I wanted to ask for help improving IOPS performance, since I have about
> 350 virtual machines of approximately 15 GB in size and I/O is very slow.
> What would you recommend?
>
> The Ceph documentation recommends using SSDs for the journal, so my
> question is: how many SSDs do I have to provision per server so that the
> journals of the 9 OSDs can be moved to SSDs?
>
> I currently use Ceph with OpenStack, on 11 servers running Debian Stretch:
> * 3 controller
> * 3 compute
> * 5 ceph-osd
>   network: LACP bond, 10 GbE
>   RAM: 96 GB
>   HDD: 9 x 3 TB SATA disks (BlueStore)
>
> --
> *Davis Mendoza P.*
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com