Re: [ceph-users] how to swap osds between servers

2018-09-03 Thread Ronny Aasen

On 03.09.2018 17:42, Andrei Mikhailovsky wrote:

Hello everyone,

I am in the process of adding an additional osd server to my small 
ceph cluster as well as migrating from filestore to bluestore. Here is 
my setup at the moment:


Ceph 12.2.5, running on Ubuntu 16.04 with the latest updates
3 x osd servers, each with 10 x 3TB SAS drives, 2 x Intel S3710 200GB SSDs 
and 64GB RAM. The same servers are also mon servers.


I am adding the following to the cluster:
1 x osd+mon server with 64GB of RAM and 2 x Intel S3710 200GB SSDs, 
adding 4 x 6TB disks and 2 x 3TB disks.

Thus, the new setup will have the following configuration:
4 x osd servers, each with 8 x 3TB SAS drives, 1 x 6TB SAS drive, 2 x 
Intel S3710 200GB SSDs and 64GB RAM. This ensures all servers have the 
same number and capacity of drives. There will be 3 mon servers in total.


As a result, I will have to remove 2 x 3TB drives from each of the 
existing three osd servers, place them into the new osd server, and add a 
6TB drive to each existing osd server. Since those 6 x 3TB drives taken 
from the existing osd servers still hold data, what is the best way to do 
this? I would like to minimise data migration, as it wreaks havoc on 
cluster performance. What is the best workflow for this hardware upgrade? 
If I add the new osd host to the cluster and physically move an osd disk 
from one server to the other, will it be recognised and accepted by the 
cluster?


Data will migrate no matter how you change the crushmap. Since you want 
to migrate to bluestore, some data movement is unavoidable anyway.


If it is critical data and you want to minimize impact, I prefer the slow 
and steady way: add a new bluestore drive to the new host with crush 
weight 0, then gradually increase its weight while gradually lowering the 
weight of the filestore drive being removed.
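
A minimal sketch of that reweighting, assuming hypothetical OSD ids 
(osd.30 for the new bluestore drive, osd.5 for the filestore drive being 
drained) and roughly the 2.7 crush weight of a 3TB drive:

   # new bluestore OSD starts at crush weight 0
   ceph osd crush reweight osd.30 0.0

   # then step the weights in small increments, waiting for recovery
   # to finish between steps
   ceph osd crush reweight osd.30 0.5
   ceph osd crush reweight osd.5  2.2

Repeat in small increments until osd.30 carries the full weight and 
osd.5 is at 0.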


A worse option, if you do not have a drive to spare for that, is to 
gradually drain a drive, remove it from the cluster, move it over, zap 
and recreate it as bluestore, and gradually fill it again. This takes 
longer, and if you are short on space it can get complicated.
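
Per drive, roughly (a sketch for luminous; osd id 5 and device /dev/sdX 
are placeholders):

   # drain the OSD by lowering its crush weight to 0, then wait
   ceph osd crush reweight osd.5 0.0
   # once it is empty, stop it and remove it from the cluster
   systemctl stop ceph-osd@5
   ceph osd purge 5 --yes-i-really-mean-it
   # on the new host: wipe the disk and recreate it as bluestore
   ceph-volume lvm zap /dev/sdX
   ceph-volume lvm create --bluestore --data /dev/sdX
   # then bring its crush weight up gradually, as above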


An even worse option is to move the osd drive over as-is (with its 
journal and data) and let the cluster shuffle all the data around; that 
is a big impact. And afterwards you are still running filestore, so you 
still need to migrate to bluestore.
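
For what it is worth, the cluster will accept a physically moved osd: the 
osd id and auth key travel with the disk, and on startup the osd updates 
its own host location in the crushmap (osd crush update on start defaults 
to true), which is exactly what triggers the big data shuffle. A sketch, 
assuming osd.5 and a ceph-disk provisioned filestore osd; note the 
journal partition on the ssd has to move along with the data disk or the 
osd will not start:

   ceph osd set noout             # avoid rebalancing while in transit
   systemctl stop ceph-osd@5      # on the old host
   # physically move the data disk and its journal device, then on the
   # new host (if udev has not already activated it):
   ceph-disk activate /dev/sdX1
   ceph osd unset noout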


kind regards
Ronny Aasen

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

