First question:  why do you want to do this?

There are some deployment scenarios in which moving the drives will Just Work, 
and others in which it won't.  If you try, I suggest shutting the system down 
all the way, exchanging just two drives, then powering back on, and checking 
that all is well before going any further.
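
A quick sanity check after powering back on might look like this (a sketch 
assuming the `ceph` CLI and an admin keyring are available on the host you 
run it from):

```shell
# Hedged sketch: verify cluster state after swapping two drives, before
# touching any more. Adjust for your environment.
if command -v ceph >/dev/null 2>&1; then
  ceph -s          # overall health; look for HEALTH_OK
  ceph osd tree    # every OSD should show "up" under the correct host bucket
  ceph osd stat    # summary line, e.g. "36 osds: 36 up, 36 in"
else
  echo "ceph CLI not found on this host"
fi
checked=yes
```

If anything shows down or misplaced at this point, you can power off and 
swap the two drives back before the cluster starts recovering in earnest.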

On which Ceph release were these OSDs deployed? Containerized? Are you using 
ceph-disk or ceph-volume? LVM?  Colocated journal/DB/WAL, or on a separate 
device?

Try `ls -l /var/lib/ceph/someosd` or whatever you have, and look for symlinks 
that reference device paths that may go stale when drives are swapped.
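
A loop like the following can flag symlinks that no longer resolve. This is a 
sketch assuming the usual non-containerized layout under /var/lib/ceph/osd; 
containerized deployments keep these paths inside the container:

```shell
# Sketch: for each OSD data dir, check whether its device symlinks
# (block, block.db, block.wal, journal) still point at something real.
# The /var/lib/ceph/osd/ceph-* glob is an assumption; adjust as needed.
for d in /var/lib/ceph/osd/ceph-*; do
  [ -d "$d" ] || continue
  for link in "$d"/block "$d"/block.db "$d"/block.wal "$d"/journal; do
    [ -L "$link" ] || continue
    if [ -e "$link" ]; then            # -e follows the symlink to its target
      echo "OK    $link -> $(readlink -f "$link")"
    else
      echo "STALE $link -> $(readlink "$link")"
    fi
  done
done
```

Note that ceph-volume LVM deployments point `block` at an LV path rather than 
a raw /dev/sdX name, and LV paths are stable across controller-slot changes; 
raw /dev/sdX references are the ones that tend to go stale.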

> 
> Hello,
> 
> Should I set some global flag for this operation?
> 
> Thanks!
> ________________________________
> From: Stefan Kooman <[email protected]>
> Sent: Wednesday, May 18, 2022 14:13
> To: Jorge JP <[email protected]>
> Asunto: Re: [ceph-users] Best way to change disk in controller disk without 
> affect cluster
> 
> On 5/18/22 13:06, Jorge JP wrote:
>> Hello!
>> 
>> I have a Ceph cluster with 6 nodes and 6 HDDs in each one. The status of 
>> my cluster is OK and the pool is at 45.25% (95.55 TB of 211.14 TB). I 
>> don't have any problems.
>> 
>> I want to change the position of various disks in the disk controllers of 
>> some nodes, and I don't know the right way to do it.
>> 
>>  - Stop the OSD and move the disk to its new position (hotplug).
>> 
>>  - Reweight the OSD to 0 so its PGs move to other OSDs, then stop the OSD 
>> and change its position.
>> 
>> I think the first option is OK: the data is not deleted, and when I have 
>> changed the disk's position the server will recognise it again and I can 
>> start the OSD without problems.
> 
> Order of the disks should not matter. First option is fine.
> 
> Gr. Stefan
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
