Option 1 is the official way; option 2 will be a lot faster if it works for
you (I was never in a situation that required it, so I can't say), and option
3 is for filestore only, so it doesn't apply to bluestore.
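For reference, a rough sketch of what option 1 looks like for a single OSD
on Luminous (the OSD id, device paths and partitions below are placeholders,
adjust them for your cluster):

  ceph osd out 12                               # stop placing new data on the OSD
  # wait until rebalancing is done and the OSD can be removed safely
  while ! ceph osd safe-to-destroy osd.12 ; do sleep 60 ; done
  systemctl stop ceph-osd@12
  ceph osd destroy 12 --yes-i-really-mean-it    # keeps the osd id for reuse
  ceph-volume lvm zap /dev/sdc                  # wipe the old data disk
  ceph-volume lvm create --bluestore --osd-id 12 --data /dev/sdc \
      --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2

Note that --block.wal is only needed if you want the WAL on a different
(faster) device than the DB; if you leave it out, the WAL lives on the
block.db device. For option 2, if your release ships a ceph-bluestore-tool
that supports bluefs-bdev-new-db, the in-place migration is roughly:

  systemctl stop ceph-osd@12
  ceph-bluestore-tool bluefs-bdev-new-db \
      --path /var/lib/ceph/osd/ceph-12 --dev-target /dev/nvme0n1p1
  systemctl start ceph-osd@12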
On Wed, 10 Jul 2019 at 07:55, Davis Mendoza Paco wrote:
> What would be the most appropriate procedure to move the blockdb/wal to SSD?
> 1.- Remove the OSD and recreate it (affects performance):
>       ceph-volume lvm prepare --bluestore --data <data-device> \
>           --block.wal <wal-device> --block.db <db-device>
> 2.- Follow the documentation
One thing to keep in mind is that the blockdb/wal device becomes a single
point of failure for all OSDs using it. If that SSD dies, you essentially
have to consider every OSD using it as lost. I think most people go with
something like 4-8 OSDs per blockdb/wal drive, but it really depends on how
risk-averse you are.
Just set aside one or more SSDs for bluestore; as long as you're within the
4% rule I think it should be enough.
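To put rough numbers on that (assuming 4 TB data drives, purely as an
illustration):

  4 TB per OSD    x 4%     = ~160 GB of block.db per OSD
  9 OSDs per node x 160 GB = ~1.44 TB of SSD per node
  # e.g. two ~800 GB SSDs per node, with 4-5 OSDs sharing each one

That sizing also keeps the blast radius of a dead SSD to 4-5 OSDs per node.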
On Fri, Jul 5, 2019 at 7:15 AM Davis Mendoza Paco wrote:
> Hi all,
> I have installed Ceph Luminous, with 5 nodes (45 OSDs); each OSD server
> supports up to 16 HDs and I'm only using 9.
> I wanted to ask for help improving IOPS performance, since I have about 350
> virtual machines of approximately 15 GB in size and I/O processes are very
> slow.
> What do you recommend?